May 9 00:16:55.997233 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu May 8 22:21:52 -00 2025
May 9 00:16:55.997265 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8e6c4805303143bfaf51e786bee05d9a5466809f675df313b1f69aaa84c2d4ce
May 9 00:16:55.997277 kernel: BIOS-provided physical RAM map:
May 9 00:16:55.997283 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 9 00:16:55.997289 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 9 00:16:55.997296 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 9 00:16:55.997303 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 9 00:16:55.997310 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 9 00:16:55.997316 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 9 00:16:55.997322 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 9 00:16:55.997332 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 9 00:16:55.997338 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 9 00:16:55.997347 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 9 00:16:55.997353 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 9 00:16:55.997364 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 9 00:16:55.997371 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 9 00:16:55.997381 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
May 9 00:16:55.997388 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
May 9 00:16:55.997394 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
May 9 00:16:55.997401 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
May 9 00:16:55.997408 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 9 00:16:55.997415 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 9 00:16:55.997422 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 9 00:16:55.997429 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 9 00:16:55.997467 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 9 00:16:55.997475 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 9 00:16:55.997482 kernel: NX (Execute Disable) protection: active
May 9 00:16:55.997492 kernel: APIC: Static calls initialized
May 9 00:16:55.997499 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
May 9 00:16:55.997506 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
May 9 00:16:55.997512 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
May 9 00:16:55.997519 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
May 9 00:16:55.997526 kernel: extended physical RAM map:
May 9 00:16:55.997533 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 9 00:16:55.997540 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 9 00:16:55.997546 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 9 00:16:55.997553 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 9 00:16:55.997560 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 9 00:16:55.997570 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 9 00:16:55.997577 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 9 00:16:55.997588 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
May 9 00:16:55.997596 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
May 9 00:16:55.997603 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
May 9 00:16:55.997610 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
May 9 00:16:55.997617 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
May 9 00:16:55.997630 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 9 00:16:55.997637 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 9 00:16:55.997644 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 9 00:16:55.997651 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 9 00:16:55.997658 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 9 00:16:55.997666 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
May 9 00:16:55.997673 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
May 9 00:16:55.997680 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
May 9 00:16:55.997687 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
May 9 00:16:55.997697 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 9 00:16:55.997704 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 9 00:16:55.997712 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 9 00:16:55.997719 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 9 00:16:55.997736 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 9 00:16:55.997743 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 9 00:16:55.997750 kernel: efi: EFI v2.7 by EDK II
May 9 00:16:55.997758 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
May 9 00:16:55.997765 kernel: random: crng init done
May 9 00:16:55.997773 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 9 00:16:55.997780 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 9 00:16:55.997792 kernel: secureboot: Secure boot disabled
May 9 00:16:55.997800 kernel: SMBIOS 2.8 present.
May 9 00:16:55.997807 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 9 00:16:55.997814 kernel: Hypervisor detected: KVM
May 9 00:16:55.997821 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 9 00:16:55.997829 kernel: kvm-clock: using sched offset of 4000019802 cycles
May 9 00:16:55.997837 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 9 00:16:55.997844 kernel: tsc: Detected 2794.748 MHz processor
May 9 00:16:55.997852 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 9 00:16:55.997859 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 9 00:16:55.997870 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 9 00:16:55.997877 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 9 00:16:55.997885 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 9 00:16:55.997892 kernel: Using GB pages for direct mapping
May 9 00:16:55.997899 kernel: ACPI: Early table checksum verification disabled
May 9 00:16:55.997907 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 9 00:16:55.997915 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 9 00:16:55.997922 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:16:55.997929 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:16:55.997940 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 9 00:16:55.997948 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:16:55.997955 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:16:55.997962 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:16:55.997970 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:16:55.997977 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 9 00:16:55.997984 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 9 00:16:55.997992 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 9 00:16:55.997999 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 9 00:16:55.998009 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 9 00:16:55.998016 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 9 00:16:55.998024 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 9 00:16:55.998031 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 9 00:16:55.998038 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 9 00:16:55.998045 kernel: No NUMA configuration found
May 9 00:16:55.998053 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 9 00:16:55.998060 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
May 9 00:16:55.998067 kernel: Zone ranges:
May 9 00:16:55.998078 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 9 00:16:55.998085 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
May 9 00:16:55.998092 kernel: Normal empty
May 9 00:16:55.998102 kernel: Movable zone start for each node
May 9 00:16:55.998109 kernel: Early memory node ranges
May 9 00:16:55.998116 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 9 00:16:55.998124 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 9 00:16:55.998131 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 9 00:16:55.998139 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
May 9 00:16:55.998146 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
May 9 00:16:55.998156 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 9 00:16:55.998163 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
May 9 00:16:55.998170 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
May 9 00:16:55.998178 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 9 00:16:55.998185 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 9 00:16:55.998192 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 9 00:16:55.998208 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 9 00:16:55.998218 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 9 00:16:55.998226 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 9 00:16:55.998234 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 9 00:16:55.998241 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 9 00:16:55.998251 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 9 00:16:55.998262 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 9 00:16:55.998269 kernel: ACPI: PM-Timer IO Port: 0x608
May 9 00:16:55.998277 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 9 00:16:55.998285 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 9 00:16:55.998293 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 9 00:16:55.998304 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 9 00:16:55.998312 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 9 00:16:55.998319 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 9 00:16:55.998327 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 9 00:16:55.998336 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 9 00:16:55.998345 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 9 00:16:55.998353 kernel: TSC deadline timer available
May 9 00:16:55.998361 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 9 00:16:55.998371 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 9 00:16:55.998382 kernel: kvm-guest: KVM setup pv remote TLB flush
May 9 00:16:55.998392 kernel: kvm-guest: setup PV sched yield
May 9 00:16:55.998402 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 9 00:16:55.998413 kernel: Booting paravirtualized kernel on KVM
May 9 00:16:55.998423 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 9 00:16:55.998434 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 9 00:16:55.998466 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 9 00:16:55.998474 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 9 00:16:55.998482 kernel: pcpu-alloc: [0] 0 1 2 3
May 9 00:16:55.998493 kernel: kvm-guest: PV spinlocks enabled
May 9 00:16:55.998501 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 9 00:16:55.998510 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8e6c4805303143bfaf51e786bee05d9a5466809f675df313b1f69aaa84c2d4ce
May 9 00:16:55.998518 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 9 00:16:55.998526 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 9 00:16:55.998537 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 9 00:16:55.998544 kernel: Fallback order for Node 0: 0
May 9 00:16:55.998552 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
May 9 00:16:55.998563 kernel: Policy zone: DMA32
May 9 00:16:55.998570 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 9 00:16:55.998579 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2295K rwdata, 22752K rodata, 43000K init, 2192K bss, 175776K reserved, 0K cma-reserved)
May 9 00:16:55.998587 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 9 00:16:55.998594 kernel: ftrace: allocating 37946 entries in 149 pages
May 9 00:16:55.998602 kernel: ftrace: allocated 149 pages with 4 groups
May 9 00:16:55.998610 kernel: Dynamic Preempt: voluntary
May 9 00:16:55.998618 kernel: rcu: Preemptible hierarchical RCU implementation.
May 9 00:16:55.998626 kernel: rcu: RCU event tracing is enabled.
May 9 00:16:55.998637 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 9 00:16:55.998645 kernel: Trampoline variant of Tasks RCU enabled.
May 9 00:16:55.998653 kernel: Rude variant of Tasks RCU enabled.
May 9 00:16:55.998661 kernel: Tracing variant of Tasks RCU enabled.
May 9 00:16:55.998669 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 9 00:16:55.998677 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 9 00:16:55.998684 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 9 00:16:55.998692 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 9 00:16:55.998700 kernel: Console: colour dummy device 80x25
May 9 00:16:55.998710 kernel: printk: console [ttyS0] enabled
May 9 00:16:55.998718 kernel: ACPI: Core revision 20230628
May 9 00:16:55.998726 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 9 00:16:55.998741 kernel: APIC: Switch to symmetric I/O mode setup
May 9 00:16:55.998749 kernel: x2apic enabled
May 9 00:16:55.998757 kernel: APIC: Switched APIC routing to: physical x2apic
May 9 00:16:55.998767 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 9 00:16:55.998775 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 9 00:16:55.998783 kernel: kvm-guest: setup PV IPIs
May 9 00:16:55.998793 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 9 00:16:55.998801 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 9 00:16:55.998810 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 9 00:16:55.998820 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 9 00:16:55.998830 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 9 00:16:55.998841 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 9 00:16:55.998851 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 9 00:16:55.998859 kernel: Spectre V2 : Mitigation: Retpolines
May 9 00:16:55.998867 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 9 00:16:55.998877 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 9 00:16:55.998885 kernel: RETBleed: Mitigation: untrained return thunk
May 9 00:16:55.998893 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 9 00:16:55.998901 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 9 00:16:55.998908 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 9 00:16:55.998917 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 9 00:16:55.998925 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 9 00:16:55.998935 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 9 00:16:55.998946 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 9 00:16:55.998954 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 9 00:16:55.998961 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 9 00:16:55.998969 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 9 00:16:55.998977 kernel: Freeing SMP alternatives memory: 32K
May 9 00:16:55.998984 kernel: pid_max: default: 32768 minimum: 301
May 9 00:16:55.998992 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 9 00:16:55.999000 kernel: landlock: Up and running.
May 9 00:16:55.999009 kernel: SELinux: Initializing.
May 9 00:16:55.999019 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 00:16:55.999027 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 00:16:55.999035 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 9 00:16:55.999042 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 00:16:55.999050 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 00:16:55.999058 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 00:16:55.999066 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 9 00:16:55.999073 kernel: ... version: 0
May 9 00:16:55.999081 kernel: ... bit width: 48
May 9 00:16:55.999091 kernel: ... generic registers: 6
May 9 00:16:55.999099 kernel: ... value mask: 0000ffffffffffff
May 9 00:16:55.999106 kernel: ... max period: 00007fffffffffff
May 9 00:16:55.999114 kernel: ... fixed-purpose events: 0
May 9 00:16:55.999122 kernel: ... event mask: 000000000000003f
May 9 00:16:55.999136 kernel: signal: max sigframe size: 1776
May 9 00:16:55.999151 kernel: rcu: Hierarchical SRCU implementation.
May 9 00:16:55.999162 kernel: rcu: Max phase no-delay instances is 400.
May 9 00:16:55.999172 kernel: smp: Bringing up secondary CPUs ...
May 9 00:16:55.999185 kernel: smpboot: x86: Booting SMP configuration:
May 9 00:16:55.999193 kernel: .... node #0, CPUs: #1 #2 #3
May 9 00:16:55.999201 kernel: smp: Brought up 1 node, 4 CPUs
May 9 00:16:55.999208 kernel: smpboot: Max logical packages: 1
May 9 00:16:55.999216 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 9 00:16:55.999224 kernel: devtmpfs: initialized
May 9 00:16:55.999233 kernel: x86/mm: Memory block size: 128MB
May 9 00:16:55.999244 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 9 00:16:55.999257 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 9 00:16:55.999272 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 9 00:16:55.999280 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 9 00:16:55.999288 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
May 9 00:16:55.999295 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 9 00:16:55.999303 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 9 00:16:55.999311 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 9 00:16:55.999320 kernel: pinctrl core: initialized pinctrl subsystem
May 9 00:16:55.999331 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 9 00:16:55.999341 kernel: audit: initializing netlink subsys (disabled)
May 9 00:16:55.999355 kernel: audit: type=2000 audit(1746749814.968:1): state=initialized audit_enabled=0 res=1
May 9 00:16:55.999366 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 9 00:16:55.999375 kernel: thermal_sys: Registered thermal governor 'user_space'
May 9 00:16:55.999384 kernel: cpuidle: using governor menu
May 9 00:16:55.999392 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 9 00:16:55.999399 kernel: dca service started, version 1.12.1
May 9 00:16:55.999410 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
May 9 00:16:55.999420 kernel: PCI: Using configuration type 1 for base access
May 9 00:16:55.999431 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 9 00:16:55.999473 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 9 00:16:55.999483 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 9 00:16:55.999494 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 9 00:16:55.999504 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 9 00:16:55.999514 kernel: ACPI: Added _OSI(Module Device)
May 9 00:16:55.999525 kernel: ACPI: Added _OSI(Processor Device)
May 9 00:16:55.999534 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 9 00:16:55.999543 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 9 00:16:55.999553 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 9 00:16:55.999566 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 9 00:16:55.999587 kernel: ACPI: Interpreter enabled
May 9 00:16:55.999597 kernel: ACPI: PM: (supports S0 S3 S5)
May 9 00:16:55.999607 kernel: ACPI: Using IOAPIC for interrupt routing
May 9 00:16:55.999618 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 9 00:16:55.999628 kernel: PCI: Using E820 reservations for host bridge windows
May 9 00:16:55.999638 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 9 00:16:55.999647 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 9 00:16:56.000025 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 9 00:16:56.000209 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 9 00:16:56.000384 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 9 00:16:56.000403 kernel: PCI host bridge to bus 0000:00
May 9 00:16:56.000608 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 9 00:16:56.000785 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 9 00:16:56.000926 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 9 00:16:56.001070 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 9 00:16:56.001214 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 9 00:16:56.001336 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 9 00:16:56.001478 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 9 00:16:56.001759 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 9 00:16:56.001927 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 9 00:16:56.002059 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 9 00:16:56.002229 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 9 00:16:56.002374 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 9 00:16:56.002553 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 9 00:16:56.002703 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 9 00:16:56.002864 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 9 00:16:56.002993 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 9 00:16:56.003121 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 9 00:16:56.003256 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
May 9 00:16:56.003457 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 9 00:16:56.003756 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 9 00:16:56.003965 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 9 00:16:56.004126 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
May 9 00:16:56.004748 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 9 00:16:56.004901 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 9 00:16:56.005047 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 9 00:16:56.005208 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
May 9 00:16:56.005388 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 9 00:16:56.005644 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 9 00:16:56.005839 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 9 00:16:56.006402 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 9 00:16:56.006630 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 9 00:16:56.006801 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 9 00:16:56.006983 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 9 00:16:56.007139 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 9 00:16:56.007154 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 9 00:16:56.007165 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 9 00:16:56.007175 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 9 00:16:56.007185 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 9 00:16:56.007202 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 9 00:16:56.007212 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 9 00:16:56.007222 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 9 00:16:56.007233 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 9 00:16:56.007243 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 9 00:16:56.007253 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 9 00:16:56.007263 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 9 00:16:56.007274 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 9 00:16:56.007288 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 9 00:16:56.007298 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 9 00:16:56.007308 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 9 00:16:56.007318 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 9 00:16:56.007328 kernel: iommu: Default domain type: Translated
May 9 00:16:56.007338 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 9 00:16:56.007349 kernel: efivars: Registered efivars operations
May 9 00:16:56.007359 kernel: PCI: Using ACPI for IRQ routing
May 9 00:16:56.007370 kernel: PCI: pci_cache_line_size set to 64 bytes
May 9 00:16:56.007388 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 9 00:16:56.007404 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 9 00:16:56.007414 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
May 9 00:16:56.007424 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
May 9 00:16:56.007530 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 9 00:16:56.007545 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 9 00:16:56.007555 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
May 9 00:16:56.007565 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 9 00:16:56.007741 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 9 00:16:56.007904 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 9 00:16:56.008059 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 9 00:16:56.008074 kernel: vgaarb: loaded
May 9 00:16:56.008085 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 9 00:16:56.008095 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 9 00:16:56.008105 kernel: clocksource: Switched to clocksource kvm-clock
May 9 00:16:56.008115 kernel: VFS: Disk quotas dquot_6.6.0
May 9 00:16:56.008126 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 9 00:16:56.008137 kernel: pnp: PnP ACPI init
May 9 00:16:56.008341 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 9 00:16:56.008358 kernel: pnp: PnP ACPI: found 6 devices
May 9 00:16:56.008369 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 9 00:16:56.008379 kernel: NET: Registered PF_INET protocol family
May 9 00:16:56.008390 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 9 00:16:56.008426 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 9 00:16:56.008456 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 9 00:16:56.008467 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 9 00:16:56.008481 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 9 00:16:56.008492 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 9 00:16:56.008502 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 00:16:56.008512 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 00:16:56.008522 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 9 00:16:56.008532 kernel: NET: Registered PF_XDP protocol family
May 9 00:16:56.008765 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 9 00:16:56.008933 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 9 00:16:56.009079 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 9 00:16:56.009218 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 9 00:16:56.009367 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 9 00:16:56.009539 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 9 00:16:56.009697 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 9 00:16:56.009865 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 9 00:16:56.009882 kernel: PCI: CLS 0 bytes, default 64
May 9 00:16:56.009893 kernel: Initialise system trusted keyrings
May 9 00:16:56.009910 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 9 00:16:56.009921 kernel: Key type asymmetric registered
May 9 00:16:56.009932 kernel: Asymmetric key parser 'x509' registered
May 9 00:16:56.009942 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 9 00:16:56.009953 kernel: io scheduler mq-deadline registered
May 9 00:16:56.009964 kernel: io scheduler kyber registered
May 9 00:16:56.009974 kernel: io scheduler bfq registered
May 9 00:16:56.009985 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 9 00:16:56.009996 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 9 00:16:56.010007 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 9 00:16:56.010022 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 9 00:16:56.010036 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 9 00:16:56.010047 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 9 00:16:56.010058 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 9 00:16:56.010068 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 9 00:16:56.010082 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 9 00:16:56.010252 kernel: rtc_cmos 00:04: RTC can wake from S4
May 9 00:16:56.010268 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 9 00:16:56.010410 kernel: rtc_cmos 00:04: registered as rtc0
May 9 00:16:56.010578 kernel: rtc_cmos 00:04: setting system clock to 2025-05-09T00:16:55 UTC (1746749815)
May 9 00:16:56.010724 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 9 00:16:56.010749 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 9 00:16:56.010760 kernel: efifb: probing for efifb
May 9 00:16:56.010776 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 9 00:16:56.010787 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 9 00:16:56.010798 kernel: efifb: scrolling: redraw
May 9 00:16:56.010809 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 9 00:16:56.010819 kernel: Console: switching to colour frame buffer device 160x50
May 9 00:16:56.010830 kernel: fb0: EFI VGA frame buffer device
May 9 00:16:56.010840 kernel: pstore: Using crash dump compression: deflate
May 9 00:16:56.010851 kernel: pstore: Registered efi_pstore as persistent store backend
May 9 00:16:56.010861 kernel: NET: Registered PF_INET6 protocol family
May 9 00:16:56.010876 kernel: Segment Routing with IPv6
May 9 00:16:56.010886 kernel: In-situ OAM (IOAM) with IPv6
May 9 00:16:56.010897 kernel: NET: Registered PF_PACKET protocol family
May 9 00:16:56.010918 kernel: Key type dns_resolver registered
May 9 00:16:56.010935 kernel: IPI shorthand broadcast: enabled
May 9 00:16:56.010947 kernel: sched_clock: Marking stable (1402003359, 246285201)->(1813704474, -165415914)
May 9 00:16:56.010958 kernel: registered taskstats version 1
May 9 00:16:56.010970 kernel: Loading compiled-in X.509 certificates
May 9 00:16:56.010981 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: eadd5f695247828f81e51397e7264f8efd327b51'
May 9 00:16:56.010997 kernel: Key type .fscrypt registered
May 9 00:16:56.011007 kernel: Key type fscrypt-provisioning registered
May 9 00:16:56.011018 kernel: ima: No TPM chip found, activating TPM-bypass!
May 9 00:16:56.011029 kernel: ima: Allocated hash algorithm: sha1 May 9 00:16:56.011040 kernel: ima: No architecture policies found May 9 00:16:56.011050 kernel: clk: Disabling unused clocks May 9 00:16:56.011061 kernel: Freeing unused kernel image (initmem) memory: 43000K May 9 00:16:56.011072 kernel: Write protecting the kernel read-only data: 36864k May 9 00:16:56.011089 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K May 9 00:16:56.011103 kernel: Run /init as init process May 9 00:16:56.011114 kernel: with arguments: May 9 00:16:56.011125 kernel: /init May 9 00:16:56.011136 kernel: with environment: May 9 00:16:56.011146 kernel: HOME=/ May 9 00:16:56.011157 kernel: TERM=linux May 9 00:16:56.011168 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 9 00:16:56.011182 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 00:16:56.011199 systemd[1]: Detected virtualization kvm. May 9 00:16:56.011210 systemd[1]: Detected architecture x86-64. May 9 00:16:56.011221 systemd[1]: Running in initrd. May 9 00:16:56.011232 systemd[1]: No hostname configured, using default hostname. May 9 00:16:56.011243 systemd[1]: Hostname set to . May 9 00:16:56.011255 systemd[1]: Initializing machine ID from VM UUID. May 9 00:16:56.011266 systemd[1]: Queued start job for default target initrd.target. May 9 00:16:56.011278 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 00:16:56.011293 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 00:16:56.011305 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
May 9 00:16:56.011317 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 9 00:16:56.011329 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 9 00:16:56.011341 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 9 00:16:56.011355 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 9 00:16:56.011369 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 9 00:16:56.011385 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 00:16:56.011399 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 9 00:16:56.011411 systemd[1]: Reached target paths.target - Path Units. May 9 00:16:56.011423 systemd[1]: Reached target slices.target - Slice Units. May 9 00:16:56.011500 systemd[1]: Reached target swap.target - Swaps. May 9 00:16:56.011515 systemd[1]: Reached target timers.target - Timer Units. May 9 00:16:56.011526 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 9 00:16:56.011538 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 00:16:56.011554 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 9 00:16:56.011565 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 9 00:16:56.011576 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 9 00:16:56.011588 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 9 00:16:56.011599 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 9 00:16:56.011611 systemd[1]: Reached target sockets.target - Socket Units. May 9 00:16:56.011625 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
May 9 00:16:56.011637 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 9 00:16:56.011648 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 9 00:16:56.011663 systemd[1]: Starting systemd-fsck-usr.service... May 9 00:16:56.011675 systemd[1]: Starting systemd-journald.service - Journal Service... May 9 00:16:56.011687 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 9 00:16:56.011698 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:16:56.011710 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 9 00:16:56.011721 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 9 00:16:56.011742 systemd[1]: Finished systemd-fsck-usr.service. May 9 00:16:56.011758 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 9 00:16:56.011803 systemd-journald[195]: Collecting audit messages is disabled. May 9 00:16:56.011833 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:16:56.011845 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 00:16:56.011857 systemd-journald[195]: Journal started May 9 00:16:56.011880 systemd-journald[195]: Runtime Journal (/run/log/journal/684d729d9cee48b894950ddf8415e62c) is 6.0M, max 48.3M, 42.2M free. May 9 00:16:56.004198 systemd-modules-load[196]: Inserted module 'overlay' May 9 00:16:56.016518 systemd[1]: Started systemd-journald.service - Journal Service. May 9 00:16:56.017370 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 9 00:16:56.021479 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 9 00:16:56.027740 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
May 9 00:16:56.038391 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 00:16:56.045722 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 9 00:16:56.042225 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:16:56.048192 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 9 00:16:56.050569 kernel: Bridge firewalling registered May 9 00:16:56.050209 systemd-modules-load[196]: Inserted module 'br_netfilter' May 9 00:16:56.052236 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 9 00:16:56.054986 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 00:16:56.060308 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 00:16:56.068036 dracut-cmdline[223]: dracut-dracut-053 May 9 00:16:56.071528 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8e6c4805303143bfaf51e786bee05d9a5466809f675df313b1f69aaa84c2d4ce May 9 00:16:56.094138 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 00:16:56.099602 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 00:16:56.138222 systemd-resolved[254]: Positive Trust Anchors: May 9 00:16:56.138250 systemd-resolved[254]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 9 00:16:56.138292 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 9 00:16:56.141842 systemd-resolved[254]: Defaulting to hostname 'linux'. May 9 00:16:56.143294 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 00:16:56.149717 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 9 00:16:56.181500 kernel: SCSI subsystem initialized May 9 00:16:56.191479 kernel: Loading iSCSI transport class v2.0-870. May 9 00:16:56.203481 kernel: iscsi: registered transport (tcp) May 9 00:16:56.227512 kernel: iscsi: registered transport (qla4xxx) May 9 00:16:56.227627 kernel: QLogic iSCSI HBA Driver May 9 00:16:56.300140 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 9 00:16:56.312809 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 9 00:16:56.348872 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 9 00:16:56.348944 kernel: device-mapper: uevent: version 1.0.3 May 9 00:16:56.350049 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 9 00:16:56.394512 kernel: raid6: avx2x4 gen() 25996 MB/s May 9 00:16:56.411478 kernel: raid6: avx2x2 gen() 29185 MB/s May 9 00:16:56.428619 kernel: raid6: avx2x1 gen() 24651 MB/s May 9 00:16:56.428686 kernel: raid6: using algorithm avx2x2 gen() 29185 MB/s May 9 00:16:56.446786 kernel: raid6: .... xor() 18588 MB/s, rmw enabled May 9 00:16:56.446843 kernel: raid6: using avx2x2 recovery algorithm May 9 00:16:56.468469 kernel: xor: automatically using best checksumming function avx May 9 00:16:56.647487 kernel: Btrfs loaded, zoned=no, fsverity=no May 9 00:16:56.662672 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 9 00:16:56.675781 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 00:16:56.691104 systemd-udevd[414]: Using default interface naming scheme 'v255'. May 9 00:16:56.696140 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 00:16:56.702820 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 9 00:16:56.719721 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation May 9 00:16:56.761135 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 9 00:16:56.773722 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 00:16:56.867865 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 9 00:16:56.879616 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 9 00:16:56.898125 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 9 00:16:56.900052 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
May 9 00:16:56.901711 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 00:16:56.904205 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 00:16:56.912494 kernel: cryptd: max_cpu_qlen set to 1000 May 9 00:16:56.912728 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 9 00:16:56.919484 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 9 00:16:56.925678 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 9 00:16:56.926561 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 9 00:16:56.930235 kernel: AVX2 version of gcm_enc/dec engaged. May 9 00:16:56.930260 kernel: AES CTR mode by8 optimization enabled May 9 00:16:56.933836 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 9 00:16:56.933876 kernel: GPT:9289727 != 19775487 May 9 00:16:56.933892 kernel: GPT:Alternate GPT header not at the end of the disk. May 9 00:16:56.933906 kernel: GPT:9289727 != 19775487 May 9 00:16:56.935020 kernel: GPT: Use GNU Parted to correct GPT errors. May 9 00:16:56.935043 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:16:56.949256 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 00:16:56.949399 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:16:56.953680 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 00:16:56.955183 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 00:16:56.955418 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:16:56.958500 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:16:56.968488 kernel: libata version 3.00 loaded. May 9 00:16:56.974810 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 9 00:16:56.989536 kernel: BTRFS: device fsid cea98156-267a-4592-a459-5921031522cf devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (459) May 9 00:16:56.992463 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (470) May 9 00:16:56.992488 kernel: ahci 0000:00:1f.2: version 3.0 May 9 00:16:56.996170 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 9 00:16:57.000454 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 9 00:16:57.000643 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 9 00:16:57.007939 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 9 00:16:57.067246 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 9 00:16:57.072458 kernel: scsi host0: ahci May 9 00:16:57.073455 kernel: scsi host1: ahci May 9 00:16:57.074770 kernel: scsi host2: ahci May 9 00:16:57.076629 kernel: scsi host3: ahci May 9 00:16:57.076923 kernel: scsi host4: ahci May 9 00:16:57.077641 kernel: scsi host5: ahci May 9 00:16:57.078819 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 May 9 00:16:57.079770 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 May 9 00:16:57.079795 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 May 9 00:16:57.080604 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 May 9 00:16:57.082558 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 May 9 00:16:57.082587 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 May 9 00:16:57.082828 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 9 00:16:57.093171 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
May 9 00:16:57.096556 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 9 00:16:57.117737 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 9 00:16:57.161249 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 00:16:57.161383 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:16:57.165636 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:16:57.169713 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:16:57.186969 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:16:57.199792 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 00:16:57.242931 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:16:57.277197 disk-uuid[567]: Primary Header is updated. May 9 00:16:57.277197 disk-uuid[567]: Secondary Entries is updated. May 9 00:16:57.277197 disk-uuid[567]: Secondary Header is updated. 
May 9 00:16:57.335174 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:16:57.340498 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:16:57.395481 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 9 00:16:57.396458 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 9 00:16:57.438468 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 9 00:16:57.438525 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 9 00:16:57.439500 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 9 00:16:57.440471 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 9 00:16:57.441871 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 9 00:16:57.441896 kernel: ata3.00: applying bridge limits May 9 00:16:57.443474 kernel: ata3.00: configured for UDMA/100 May 9 00:16:57.443504 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 9 00:16:57.490528 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 9 00:16:57.491021 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 9 00:16:57.504472 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 9 00:16:58.342473 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:16:58.343138 disk-uuid[582]: The operation has completed successfully. May 9 00:16:58.396004 systemd[1]: disk-uuid.service: Deactivated successfully. May 9 00:16:58.396163 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 9 00:16:58.414691 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 9 00:16:58.419208 sh[597]: Success May 9 00:16:58.432466 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 9 00:16:58.472104 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 9 00:16:58.485582 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 9 00:16:58.488707 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 9 00:16:58.501947 kernel: BTRFS info (device dm-0): first mount of filesystem cea98156-267a-4592-a459-5921031522cf May 9 00:16:58.502018 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 9 00:16:58.502043 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 9 00:16:58.502971 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 9 00:16:58.503723 kernel: BTRFS info (device dm-0): using free space tree May 9 00:16:58.509193 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 9 00:16:58.511840 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 9 00:16:58.523770 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 9 00:16:58.524921 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 9 00:16:58.543617 kernel: BTRFS info (device vda6): first mount of filesystem 06eb5ada-09bb-4b72-a741-1d4e677346cf May 9 00:16:58.543704 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 9 00:16:58.543720 kernel: BTRFS info (device vda6): using free space tree May 9 00:16:58.547474 kernel: BTRFS info (device vda6): auto enabling async discard May 9 00:16:58.558847 systemd[1]: mnt-oem.mount: Deactivated successfully. May 9 00:16:58.560829 kernel: BTRFS info (device vda6): last unmount of filesystem 06eb5ada-09bb-4b72-a741-1d4e677346cf May 9 00:16:58.573037 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 9 00:16:58.581713 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 9 00:16:58.713865 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
May 9 00:16:58.715600 ignition[698]: Ignition 2.20.0 May 9 00:16:58.715622 ignition[698]: Stage: fetch-offline May 9 00:16:58.715701 ignition[698]: no configs at "/usr/lib/ignition/base.d" May 9 00:16:58.715716 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:16:58.715896 ignition[698]: parsed url from cmdline: "" May 9 00:16:58.715901 ignition[698]: no config URL provided May 9 00:16:58.715908 ignition[698]: reading system config file "/usr/lib/ignition/user.ign" May 9 00:16:58.715920 ignition[698]: no config at "/usr/lib/ignition/user.ign" May 9 00:16:58.715967 ignition[698]: op(1): [started] loading QEMU firmware config module May 9 00:16:58.715978 ignition[698]: op(1): executing: "modprobe" "qemu_fw_cfg" May 9 00:16:58.734090 ignition[698]: op(1): [finished] loading QEMU firmware config module May 9 00:16:58.742646 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 9 00:16:58.778950 ignition[698]: parsing config with SHA512: f10b928cb67c56390027ae7c2e92bf5caaa8d4c0711416ad2c8368c28cd3cf6cc4491323a72dd9b7e45fc68d41faf4c227a807ccbca6e3829bb4bd58a32d0496 May 9 00:16:58.780147 systemd-networkd[786]: lo: Link UP May 9 00:16:58.780163 systemd-networkd[786]: lo: Gained carrier May 9 00:16:58.784184 systemd-networkd[786]: Enumeration completed May 9 00:16:58.784565 systemd[1]: Started systemd-networkd.service - Network Configuration. May 9 00:16:58.785577 systemd[1]: Reached target network.target - Network. May 9 00:16:58.790020 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:16:58.790033 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 9 00:16:58.793991 unknown[698]: fetched base config from "system" May 9 00:16:58.796047 ignition[698]: fetch-offline: fetch-offline passed May 9 00:16:58.794010 unknown[698]: fetched user config from "qemu" May 9 00:16:58.796217 ignition[698]: Ignition finished successfully May 9 00:16:58.794028 systemd-networkd[786]: eth0: Link UP May 9 00:16:58.794034 systemd-networkd[786]: eth0: Gained carrier May 9 00:16:58.794049 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:16:58.799327 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 9 00:16:58.802290 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 9 00:16:58.817505 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.125/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 9 00:16:58.817735 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 9 00:16:58.834078 ignition[789]: Ignition 2.20.0 May 9 00:16:58.834096 ignition[789]: Stage: kargs May 9 00:16:58.834340 ignition[789]: no configs at "/usr/lib/ignition/base.d" May 9 00:16:58.834359 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:16:58.839310 ignition[789]: kargs: kargs passed May 9 00:16:58.840115 ignition[789]: Ignition finished successfully May 9 00:16:58.845185 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 9 00:16:58.857698 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 9 00:16:58.945631 ignition[798]: Ignition 2.20.0 May 9 00:16:58.945656 ignition[798]: Stage: disks May 9 00:16:58.945835 ignition[798]: no configs at "/usr/lib/ignition/base.d" May 9 00:16:58.945846 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:16:58.949382 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
May 9 00:16:58.946776 ignition[798]: disks: disks passed May 9 00:16:58.950765 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 9 00:16:58.946829 ignition[798]: Ignition finished successfully May 9 00:16:58.965625 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 9 00:16:58.966917 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 00:16:58.968552 systemd[1]: Reached target sysinit.target - System Initialization. May 9 00:16:58.968615 systemd[1]: Reached target basic.target - Basic System. May 9 00:16:58.980608 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 9 00:16:58.999748 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 9 00:16:59.321141 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 9 00:16:59.334571 systemd[1]: Mounting sysroot.mount - /sysroot... May 9 00:16:59.502521 kernel: EXT4-fs (vda9): mounted filesystem 61492938-2ced-4ec2-b593-fc96fa0fefcc r/w with ordered data mode. Quota mode: none. May 9 00:16:59.503238 systemd[1]: Mounted sysroot.mount - /sysroot. May 9 00:16:59.504863 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 9 00:16:59.516525 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 9 00:16:59.518531 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 9 00:16:59.520985 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 9 00:16:59.521035 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
May 9 00:16:59.531481 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (816) May 9 00:16:59.531508 kernel: BTRFS info (device vda6): first mount of filesystem 06eb5ada-09bb-4b72-a741-1d4e677346cf May 9 00:16:59.531520 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 9 00:16:59.531531 kernel: BTRFS info (device vda6): using free space tree May 9 00:16:59.531543 kernel: BTRFS info (device vda6): auto enabling async discard May 9 00:16:59.521061 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 9 00:16:59.529267 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 9 00:16:59.534025 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 9 00:16:59.536689 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 9 00:16:59.573008 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory May 9 00:16:59.636835 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory May 9 00:16:59.642788 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory May 9 00:16:59.648196 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory May 9 00:16:59.744613 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 9 00:16:59.749668 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 9 00:16:59.752959 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 9 00:16:59.790476 kernel: BTRFS info (device vda6): last unmount of filesystem 06eb5ada-09bb-4b72-a741-1d4e677346cf May 9 00:16:59.791726 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 9 00:16:59.824204 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 9 00:16:59.942524 ignition[934]: INFO : Ignition 2.20.0 May 9 00:16:59.942524 ignition[934]: INFO : Stage: mount May 9 00:16:59.944757 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:16:59.944757 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:16:59.944757 ignition[934]: INFO : mount: mount passed May 9 00:16:59.944757 ignition[934]: INFO : Ignition finished successfully May 9 00:16:59.949802 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 9 00:16:59.962691 systemd[1]: Starting ignition-files.service - Ignition (files)... May 9 00:17:00.291923 systemd-networkd[786]: eth0: Gained IPv6LL May 9 00:17:00.516673 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 9 00:17:00.526339 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (944) May 9 00:17:00.526389 kernel: BTRFS info (device vda6): first mount of filesystem 06eb5ada-09bb-4b72-a741-1d4e677346cf May 9 00:17:00.526404 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 9 00:17:00.527426 kernel: BTRFS info (device vda6): using free space tree May 9 00:17:00.531467 kernel: BTRFS info (device vda6): auto enabling async discard May 9 00:17:00.532950 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 9 00:17:00.555469 ignition[961]: INFO : Ignition 2.20.0
May 9 00:17:00.555469 ignition[961]: INFO : Stage: files
May 9 00:17:00.557706 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 00:17:00.557706 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:17:00.561019 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
May 9 00:17:00.562920 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 9 00:17:00.562920 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 9 00:17:00.567644 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 9 00:17:00.569491 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 9 00:17:00.571639 unknown[961]: wrote ssh authorized keys file for user: core
May 9 00:17:00.573169 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 9 00:17:00.575125 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 9 00:17:00.575125 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 9 00:17:00.575125 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 9 00:17:00.575125 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 9 00:17:00.733748 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 9 00:17:01.078206 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 9 00:17:01.078206 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 9 00:17:01.082847 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 9 00:17:01.451521 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
May 9 00:17:01.698961 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 9 00:17:01.698961 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
May 9 00:17:01.706827 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
May 9 00:17:01.706827 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
May 9 00:17:01.706827 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 9 00:17:01.706827 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 00:17:01.706827 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 00:17:01.706827 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 00:17:01.706827 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 00:17:01.706827 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 9 00:17:01.706827 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 9 00:17:01.706827 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 9 00:17:01.706827 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 9 00:17:01.706827 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 9 00:17:01.706827 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 9 00:17:02.016939 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
May 9 00:17:02.439410 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 9 00:17:02.439410 ignition[961]: INFO : files: op(d): [started] processing unit "containerd.service"
May 9 00:17:02.443800 ignition[961]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 9 00:17:02.446578 ignition[961]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 9 00:17:02.446578 ignition[961]: INFO : files: op(d): [finished] processing unit "containerd.service"
May 9 00:17:02.446578 ignition[961]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
May 9 00:17:02.446578 ignition[961]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 00:17:02.446578 ignition[961]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 00:17:02.446578 ignition[961]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
May 9 00:17:02.446578 ignition[961]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
May 9 00:17:02.446578 ignition[961]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 9 00:17:02.446578 ignition[961]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 9 00:17:02.446578 ignition[961]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
May 9 00:17:02.446578 ignition[961]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
May 9 00:17:02.542994 ignition[961]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 9 00:17:02.577828 ignition[961]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 9 00:17:02.579888 ignition[961]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
May 9 00:17:02.579888 ignition[961]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
May 9 00:17:02.579888 ignition[961]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
May 9 00:17:02.579888 ignition[961]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
May 9 00:17:02.579888 ignition[961]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 9 00:17:02.579888 ignition[961]: INFO : files: files passed
May 9 00:17:02.579888 ignition[961]: INFO : Ignition finished successfully
May 9 00:17:02.640156 systemd[1]: Finished ignition-files.service - Ignition (files).
May 9 00:17:02.647639 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 9 00:17:02.650615 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 9 00:17:02.653643 systemd[1]: ignition-quench.service: Deactivated successfully.
May 9 00:17:02.654853 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 9 00:17:02.661725 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory
May 9 00:17:02.666454 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 00:17:02.666454 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 9 00:17:02.669774 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 00:17:02.673294 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 00:17:02.675077 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 9 00:17:02.688633 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 9 00:17:02.717733 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 9 00:17:02.717869 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 9 00:17:02.755022 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 9 00:17:02.758686 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 9 00:17:02.759951 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 9 00:17:02.761228 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 9 00:17:02.780274 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 00:17:02.794697 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 9 00:17:02.805614 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 9 00:17:02.808112 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 00:17:02.810635 systemd[1]: Stopped target timers.target - Timer Units.
May 9 00:17:02.812641 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 9 00:17:02.813759 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 00:17:02.816506 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 9 00:17:02.818732 systemd[1]: Stopped target basic.target - Basic System.
May 9 00:17:02.820712 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 9 00:17:02.823042 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 00:17:02.825516 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 9 00:17:02.827904 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 9 00:17:02.830119 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 00:17:02.832763 systemd[1]: Stopped target sysinit.target - System Initialization.
May 9 00:17:02.834894 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 9 00:17:02.836964 systemd[1]: Stopped target swap.target - Swaps.
May 9 00:17:02.838605 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 9 00:17:02.839660 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 9 00:17:02.842109 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 9 00:17:02.844366 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:17:02.847074 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 9 00:17:02.848189 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 00:17:02.851301 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 9 00:17:02.852566 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 9 00:17:02.855468 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 9 00:17:02.856786 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 00:17:02.859694 systemd[1]: Stopped target paths.target - Path Units.
May 9 00:17:02.861885 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 9 00:17:02.866509 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 00:17:02.869835 systemd[1]: Stopped target slices.target - Slice Units.
May 9 00:17:02.872049 systemd[1]: Stopped target sockets.target - Socket Units.
May 9 00:17:02.874267 systemd[1]: iscsid.socket: Deactivated successfully.
May 9 00:17:02.875302 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 9 00:17:02.877744 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 9 00:17:02.878867 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 00:17:02.881148 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 9 00:17:02.882342 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 00:17:02.884910 systemd[1]: ignition-files.service: Deactivated successfully.
May 9 00:17:02.885930 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 9 00:17:02.906704 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 9 00:17:02.922943 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 9 00:17:02.924067 ignition[1015]: INFO : Ignition 2.20.0
May 9 00:17:02.924067 ignition[1015]: INFO : Stage: umount
May 9 00:17:02.924067 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 00:17:02.924067 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:17:02.924067 ignition[1015]: INFO : umount: umount passed
May 9 00:17:02.924067 ignition[1015]: INFO : Ignition finished successfully
May 9 00:17:02.924129 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:17:02.943856 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 9 00:17:03.008990 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 9 00:17:03.009224 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 00:17:03.013029 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 9 00:17:03.014271 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 00:17:03.018755 systemd[1]: ignition-mount.service: Deactivated successfully.
May 9 00:17:03.018894 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 9 00:17:03.024384 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 9 00:17:03.026139 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 9 00:17:03.027361 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 9 00:17:03.030213 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 9 00:17:03.031229 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 9 00:17:03.034464 systemd[1]: Stopped target network.target - Network.
May 9 00:17:03.036291 systemd[1]: ignition-disks.service: Deactivated successfully.
May 9 00:17:03.037250 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 9 00:17:03.039242 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 9 00:17:03.039299 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 9 00:17:03.042195 systemd[1]: ignition-setup.service: Deactivated successfully.
May 9 00:17:03.043148 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 9 00:17:03.045257 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 9 00:17:03.045315 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 9 00:17:03.048385 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 9 00:17:03.048461 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 9 00:17:03.051673 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 9 00:17:03.054218 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 9 00:17:03.060501 systemd-networkd[786]: eth0: DHCPv6 lease lost
May 9 00:17:03.062474 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 9 00:17:03.062635 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 9 00:17:03.063135 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 9 00:17:03.063180 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:17:03.070602 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 9 00:17:03.072748 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 9 00:17:03.074009 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 00:17:03.076756 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 00:17:03.079943 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 9 00:17:03.081214 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 9 00:17:03.097863 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 9 00:17:03.142141 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 00:17:03.145240 systemd[1]: network-cleanup.service: Deactivated successfully.
May 9 00:17:03.146250 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 9 00:17:03.150119 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 9 00:17:03.151149 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 9 00:17:03.153274 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 9 00:17:03.154246 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:17:03.156427 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 9 00:17:03.157360 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 9 00:17:03.159591 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 9 00:17:03.160544 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 9 00:17:03.162641 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 00:17:03.163618 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:17:03.177643 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 9 00:17:03.178819 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 9 00:17:03.178893 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 9 00:17:03.181191 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 9 00:17:03.181257 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 9 00:17:03.183301 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 9 00:17:03.183365 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 00:17:03.185683 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 9 00:17:03.185746 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 00:17:03.186750 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 00:17:03.186811 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:17:03.187739 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 9 00:17:03.187877 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 9 00:17:03.193594 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 9 00:17:03.204593 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 9 00:17:03.214457 systemd[1]: Switching root.
May 9 00:17:03.255245 systemd-journald[195]: Journal stopped
May 9 00:17:06.774479 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
May 9 00:17:06.774580 kernel: SELinux: policy capability network_peer_controls=1
May 9 00:17:06.774597 kernel: SELinux: policy capability open_perms=1
May 9 00:17:06.774611 kernel: SELinux: policy capability extended_socket_class=1
May 9 00:17:06.774623 kernel: SELinux: policy capability always_check_network=0
May 9 00:17:06.774639 kernel: SELinux: policy capability cgroup_seclabel=1
May 9 00:17:06.774651 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 9 00:17:06.774662 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 9 00:17:06.774674 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 9 00:17:06.774690 kernel: audit: type=1403 audit(1746749825.776:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 9 00:17:06.774715 systemd[1]: Successfully loaded SELinux policy in 46.964ms.
May 9 00:17:06.774743 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.868ms.
May 9 00:17:06.774762 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 00:17:06.774789 systemd[1]: Detected virtualization kvm.
May 9 00:17:06.774819 systemd[1]: Detected architecture x86-64.
May 9 00:17:06.774834 systemd[1]: Detected first boot.
May 9 00:17:06.774850 systemd[1]: Initializing machine ID from VM UUID.
May 9 00:17:06.774866 zram_generator::config[1077]: No configuration found.
May 9 00:17:06.774883 systemd[1]: Populated /etc with preset unit settings.
May 9 00:17:06.774907 systemd[1]: Queued start job for default target multi-user.target.
May 9 00:17:06.774924 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 9 00:17:06.774940 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 9 00:17:06.774960 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 9 00:17:06.774977 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 9 00:17:06.774992 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 9 00:17:06.775008 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 9 00:17:06.775025 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 9 00:17:06.775041 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 9 00:17:06.775056 systemd[1]: Created slice user.slice - User and Session Slice.
May 9 00:17:06.775073 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 00:17:06.775094 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 00:17:06.775110 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 9 00:17:06.775125 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 9 00:17:06.775141 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 9 00:17:06.775157 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 00:17:06.775179 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 9 00:17:06.775195 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:17:06.775210 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 9 00:17:06.775226 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 00:17:06.775254 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 00:17:06.775271 systemd[1]: Reached target slices.target - Slice Units.
May 9 00:17:06.775285 systemd[1]: Reached target swap.target - Swaps.
May 9 00:17:06.775297 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 9 00:17:06.775310 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 9 00:17:06.775323 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 00:17:06.775335 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 9 00:17:06.775347 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:17:06.775360 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 00:17:06.775375 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:17:06.775387 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 9 00:17:06.775401 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 9 00:17:06.775415 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 9 00:17:06.775427 systemd[1]: Mounting media.mount - External Media Directory...
May 9 00:17:06.776467 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:17:06.776641 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 9 00:17:06.776666 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 9 00:17:06.776688 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 9 00:17:06.779390 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 9 00:17:06.779448 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:17:06.779468 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 00:17:06.779485 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 9 00:17:06.779499 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:17:06.779511 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 00:17:06.779524 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:17:06.779537 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 9 00:17:06.779559 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:17:06.779572 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 9 00:17:06.779585 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 9 00:17:06.779599 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
May 9 00:17:06.779611 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 00:17:06.779624 kernel: fuse: init (API version 7.39)
May 9 00:17:06.779638 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 00:17:06.779650 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 9 00:17:06.779665 kernel: loop: module loaded
May 9 00:17:06.779678 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 9 00:17:06.779690 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 00:17:06.779703 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:17:06.779716 kernel: ACPI: bus type drm_connector registered
May 9 00:17:06.779728 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 9 00:17:06.779781 systemd-journald[1155]: Collecting audit messages is disabled.
May 9 00:17:06.779819 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 9 00:17:06.779836 systemd[1]: Mounted media.mount - External Media Directory.
May 9 00:17:06.779848 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 9 00:17:06.779861 systemd-journald[1155]: Journal started
May 9 00:17:06.779885 systemd-journald[1155]: Runtime Journal (/run/log/journal/684d729d9cee48b894950ddf8415e62c) is 6.0M, max 48.3M, 42.2M free.
May 9 00:17:06.786230 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 00:17:06.788203 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 9 00:17:06.789778 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 9 00:17:06.791561 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:17:06.793482 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 9 00:17:06.793761 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 9 00:17:06.795773 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:17:06.796070 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:17:06.798178 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 00:17:06.798501 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 00:17:06.800228 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:17:06.800547 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:17:06.802672 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 9 00:17:06.802949 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 9 00:17:06.804737 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:17:06.805036 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:17:06.807143 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 00:17:06.828248 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 9 00:17:06.830178 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 9 00:17:06.845596 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 9 00:17:06.860543 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 9 00:17:06.863665 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 9 00:17:06.865214 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 9 00:17:06.871949 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 9 00:17:06.888882 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 9 00:17:06.890361 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 00:17:06.897197 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 9 00:17:06.898881 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 00:17:06.900331 systemd-journald[1155]: Time spent on flushing to /var/log/journal/684d729d9cee48b894950ddf8415e62c is 14.428ms for 1031 entries.
May 9 00:17:06.900331 systemd-journald[1155]: System Journal (/var/log/journal/684d729d9cee48b894950ddf8415e62c) is 8.0M, max 195.6M, 187.6M free.
May 9 00:17:07.203867 systemd-journald[1155]: Received client request to flush runtime journal.
May 9 00:17:06.901384 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 00:17:06.906610 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 00:17:06.912379 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 00:17:06.916841 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 9 00:17:06.918280 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 9 00:17:06.932647 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 9 00:17:06.941957 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 00:17:06.946212 udevadm[1211]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 9 00:17:06.950751 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
May 9 00:17:06.950766 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
May 9 00:17:06.956696 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 00:17:07.104918 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 9 00:17:07.106325 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 9 00:17:07.206222 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 9 00:17:07.211526 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 9 00:17:07.229721 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 9 00:17:07.258089 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 9 00:17:07.274701 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 00:17:07.307372 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
May 9 00:17:07.307395 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
May 9 00:17:07.313789 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 00:17:07.825993 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 9 00:17:07.838592 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 00:17:07.866558 systemd-udevd[1242]: Using default interface naming scheme 'v255'.
May 9 00:17:07.884249 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 00:17:07.892945 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 00:17:07.911682 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 9 00:17:07.941953 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1254)
May 9 00:17:07.941704 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
May 9 00:17:07.987571 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 9 00:17:07.995311 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 9 00:17:07.998458 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 9 00:17:08.004492 kernel: ACPI: button: Power Button [PWRF]
May 9 00:17:08.022534 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 9 00:17:08.030529 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 9 00:17:08.041012 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 9 00:17:08.046720 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 9 00:17:08.048392 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 9 00:17:08.052464 kernel: mousedev: PS/2 mouse device common for all mice
May 9 00:17:08.055589 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:17:08.067014 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 00:17:08.067579 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:17:08.068424 systemd-networkd[1247]: lo: Link UP
May 9 00:17:08.068453 systemd-networkd[1247]: lo: Gained carrier
May 9 00:17:08.072579 systemd-networkd[1247]: Enumeration completed
May 9 00:17:08.073084 systemd-networkd[1247]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:17:08.073089 systemd-networkd[1247]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 00:17:08.075635 systemd-networkd[1247]: eth0: Link UP
May 9 00:17:08.075649 systemd-networkd[1247]: eth0: Gained carrier
May 9 00:17:08.075665 systemd-networkd[1247]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:17:08.080897 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:17:08.085896 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 00:17:08.101644 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 9 00:17:08.105350 systemd-networkd[1247]: eth0: DHCPv4 address 10.0.0.125/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 9 00:17:08.143272 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:17:08.174941 kernel: kvm_amd: TSC scaling supported
May 9 00:17:08.175053 kernel: kvm_amd: Nested Virtualization enabled
May 9 00:17:08.175140 kernel: kvm_amd: Nested Paging enabled
May 9 00:17:08.175955 kernel: kvm_amd: LBR virtualization supported
May 9 00:17:08.175979 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 9 00:17:08.176547 kernel: kvm_amd: Virtual GIF supported
May 9 00:17:08.197534 kernel: EDAC MC: Ver: 3.0.0
May 9 00:17:08.230352 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 9 00:17:08.248902 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 9 00:17:08.259685 lvm[1292]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 00:17:08.290950 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 9 00:17:08.292581 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 00:17:08.302576 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 9 00:17:08.308619 lvm[1295]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 00:17:08.350885 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 9 00:17:08.352502 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 9 00:17:08.353806 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 9 00:17:08.353827 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 00:17:08.354916 systemd[1]: Reached target machines.target - Containers.
May 9 00:17:08.357310 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 9 00:17:08.370609 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 9 00:17:08.373727 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 9 00:17:08.375064 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:17:08.376308 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 9 00:17:08.379361 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 9 00:17:08.382475 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 9 00:17:08.385158 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 9 00:17:08.454674 kernel: loop0: detected capacity change from 0 to 140992
May 9 00:17:08.458573 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 9 00:17:08.476465 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 9 00:17:08.507491 kernel: loop1: detected capacity change from 0 to 138184
May 9 00:17:08.559479 kernel: loop2: detected capacity change from 0 to 210664
May 9 00:17:08.641473 kernel: loop3: detected capacity change from 0 to 140992
May 9 00:17:08.703463 kernel: loop4: detected capacity change from 0 to 138184
May 9 00:17:08.726478 kernel: loop5: detected capacity change from 0 to 210664
May 9 00:17:08.780468 (sd-merge)[1314]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 9 00:17:08.781152 (sd-merge)[1314]: Merged extensions into '/usr'.
May 9 00:17:08.802704 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 9 00:17:08.803837 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 9 00:17:08.806625 systemd[1]: Reloading requested from client PID 1303 ('systemd-sysext') (unit systemd-sysext.service)...
May 9 00:17:08.806655 systemd[1]: Reloading...
May 9 00:17:08.866478 zram_generator::config[1347]: No configuration found.
May 9 00:17:08.921508 ldconfig[1300]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 9 00:17:08.993482 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 00:17:09.063651 systemd[1]: Reloading finished in 256 ms.
May 9 00:17:09.081805 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 9 00:17:09.083945 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 9 00:17:09.098620 systemd[1]: Starting ensure-sysext.service...
May 9 00:17:09.101272 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 00:17:09.108218 systemd[1]: Reloading requested from client PID 1388 ('systemctl') (unit ensure-sysext.service)...
May 9 00:17:09.108234 systemd[1]: Reloading...
May 9 00:17:09.126753 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 9 00:17:09.127117 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 9 00:17:09.128132 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 9 00:17:09.128462 systemd-tmpfiles[1389]: ACLs are not supported, ignoring.
May 9 00:17:09.128543 systemd-tmpfiles[1389]: ACLs are not supported, ignoring.
May 9 00:17:09.132565 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot.
May 9 00:17:09.132579 systemd-tmpfiles[1389]: Skipping /boot
May 9 00:17:09.149586 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot.
May 9 00:17:09.149607 systemd-tmpfiles[1389]: Skipping /boot
May 9 00:17:09.182919 zram_generator::config[1421]: No configuration found.
May 9 00:17:09.315673 systemd-networkd[1247]: eth0: Gained IPv6LL
May 9 00:17:09.326196 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 00:17:09.400450 systemd[1]: Reloading finished in 291 ms.
May 9 00:17:09.419570 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 9 00:17:09.432072 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 00:17:09.444059 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 9 00:17:09.448138 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 9 00:17:09.451791 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 9 00:17:09.456775 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 00:17:09.471965 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 9 00:17:09.480701 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:17:09.480936 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:17:09.482727 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:17:09.489238 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:17:09.496942 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:17:09.500363 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:17:09.500541 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:17:09.501974 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 9 00:17:09.504899 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:17:09.505274 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:17:09.513944 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:17:09.514184 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:17:09.517383 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:17:09.517655 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:17:09.523183 augenrules[1496]: No rules
May 9 00:17:09.525754 systemd[1]: audit-rules.service: Deactivated successfully.
May 9 00:17:09.526125 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 9 00:17:09.533875 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:17:09.534313 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:17:09.543860 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:17:09.547573 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:17:09.553710 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:17:09.555017 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:17:09.560910 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 9 00:17:09.562193 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:17:09.564167 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 9 00:17:09.566552 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 9 00:17:09.568922 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:17:09.569263 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:17:09.571526 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:17:09.571771 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:17:09.573665 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:17:09.573928 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:17:09.577426 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 9 00:17:09.578015 systemd-resolved[1469]: Positive Trust Anchors:
May 9 00:17:09.579615 systemd-resolved[1469]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 00:17:09.579669 systemd-resolved[1469]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 00:17:09.586535 systemd-resolved[1469]: Defaulting to hostname 'linux'.
May 9 00:17:09.592902 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 00:17:09.594768 systemd[1]: Reached target network.target - Network.
May 9 00:17:09.595901 systemd[1]: Reached target network-online.target - Network is Online.
May 9 00:17:09.597579 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 00:17:09.599344 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:17:09.621710 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 9 00:17:09.622843 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:17:09.624395 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:17:09.626918 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 00:17:09.632197 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:17:09.637571 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:17:09.639041 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:17:09.639213 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 9 00:17:09.639363 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:17:09.641069 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:17:09.641382 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:17:09.643383 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 00:17:09.643777 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 00:17:09.646252 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:17:09.646647 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:17:09.648733 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:17:09.649091 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:17:09.649406 augenrules[1526]: /sbin/augenrules: No change
May 9 00:17:09.652769 systemd[1]: Finished ensure-sysext.service.
May 9 00:17:09.658871 augenrules[1555]: No rules
May 9 00:17:09.660746 systemd[1]: audit-rules.service: Deactivated successfully.
May 9 00:17:09.661131 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 9 00:17:09.665370 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 00:17:09.665460 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 00:17:09.680839 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 9 00:17:09.751477 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 9 00:17:09.752969 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 00:17:09.754250 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 9 00:17:10.735537 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 9 00:17:10.735581 systemd-resolved[1469]: Clock change detected. Flushing caches.
May 9 00:17:10.736847 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 9 00:17:10.736879 systemd-timesyncd[1565]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 9 00:17:10.738177 systemd-timesyncd[1565]: Initial clock synchronization to Fri 2025-05-09 00:17:10.735492 UTC.
May 9 00:17:10.738201 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 9 00:17:10.738245 systemd[1]: Reached target paths.target - Path Units.
May 9 00:17:10.739369 systemd[1]: Reached target time-set.target - System Time Set.
May 9 00:17:10.740614 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 9 00:17:10.741982 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 9 00:17:10.743374 systemd[1]: Reached target timers.target - Timer Units.
May 9 00:17:10.745430 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 9 00:17:10.749008 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 9 00:17:10.751608 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 9 00:17:10.757663 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 9 00:17:10.758923 systemd[1]: Reached target sockets.target - Socket Units.
May 9 00:17:10.760077 systemd[1]: Reached target basic.target - Basic System.
May 9 00:17:10.761437 systemd[1]: System is tainted: cgroupsv1
May 9 00:17:10.761486 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 9 00:17:10.761518 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 9 00:17:10.763380 systemd[1]: Starting containerd.service - containerd container runtime...
May 9 00:17:10.766124 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 9 00:17:10.769227 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 9 00:17:10.774977 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 9 00:17:10.779622 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 9 00:17:10.781825 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 9 00:17:10.783843 jq[1572]: false
May 9 00:17:10.786713 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 00:17:10.791016 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 9 00:17:10.795965 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 9 00:17:10.802006 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 9 00:17:10.803441 extend-filesystems[1575]: Found loop3
May 9 00:17:10.803441 extend-filesystems[1575]: Found loop4
May 9 00:17:10.803441 extend-filesystems[1575]: Found loop5
May 9 00:17:10.803441 extend-filesystems[1575]: Found sr0
May 9 00:17:10.803441 extend-filesystems[1575]: Found vda
May 9 00:17:10.803441 extend-filesystems[1575]: Found vda1
May 9 00:17:10.803441 extend-filesystems[1575]: Found vda2
May 9 00:17:10.803441 extend-filesystems[1575]: Found vda3
May 9 00:17:10.803441 extend-filesystems[1575]: Found usr
May 9 00:17:10.803441 extend-filesystems[1575]: Found vda4
May 9 00:17:10.803441 extend-filesystems[1575]: Found vda6
May 9 00:17:10.803441 extend-filesystems[1575]: Found vda7
May 9 00:17:10.803441 extend-filesystems[1575]: Found vda9
May 9 00:17:10.803441 extend-filesystems[1575]: Checking size of /dev/vda9
May 9 00:17:10.817535 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 9 00:17:10.829540 extend-filesystems[1575]: Resized partition /dev/vda9
May 9 00:17:10.836151 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 9 00:17:10.815394 dbus-daemon[1571]: [system] SELinux support is enabled
May 9 00:17:10.823124 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 9 00:17:10.838560 extend-filesystems[1594]: resize2fs 1.47.1 (20-May-2024)
May 9 00:17:10.831447 systemd[1]: Starting systemd-logind.service - User Login Management...
May 9 00:17:10.833393 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 9 00:17:10.840457 systemd[1]: Starting update-engine.service - Update Engine...
May 9 00:17:10.881307 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 9 00:17:10.893833 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1243)
May 9 00:17:10.892484 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 9 00:17:10.966162 extend-filesystems[1594]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 9 00:17:10.966162 extend-filesystems[1594]: old_desc_blocks = 1, new_desc_blocks = 1
May 9 00:17:10.966162 extend-filesystems[1594]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 9 00:17:10.895845 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 9 00:17:10.992953 extend-filesystems[1575]: Resized filesystem in /dev/vda9
May 9 00:17:10.994201 update_engine[1599]: I20250509 00:17:10.963814 1599 main.cc:92] Flatcar Update Engine starting
May 9 00:17:10.994201 update_engine[1599]: I20250509 00:17:10.965713 1599 update_check_scheduler.cc:74] Next update check in 8m42s
May 9 00:17:10.939020 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 9 00:17:11.003984 jq[1603]: true
May 9 00:17:10.939609 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 9 00:17:10.974041 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 9 00:17:10.977662 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 9 00:17:10.988482 systemd[1]: motdgen.service: Deactivated successfully.
May 9 00:17:10.988808 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 9 00:17:10.990641 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 9 00:17:11.000565 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 9 00:17:11.001002 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 9 00:17:11.035796 (ntainerd)[1621]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 9 00:17:11.041458 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 9 00:17:11.041836 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 9 00:17:11.050309 jq[1620]: true
May 9 00:17:11.074815 sshd_keygen[1605]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 9 00:17:11.079264 tar[1619]: linux-amd64/helm
May 9 00:17:11.090331 systemd[1]: Started update-engine.service - Update Engine.
May 9 00:17:11.092049 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 9 00:17:11.092164 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 9 00:17:11.092203 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 9 00:17:11.093463 systemd-logind[1597]: Watching system buttons on /dev/input/event1 (Power Button)
May 9 00:17:11.093489 systemd-logind[1597]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 9 00:17:11.096682 systemd-logind[1597]: New seat seat0.
May 9 00:17:11.097582 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 9 00:17:11.097611 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 9 00:17:11.102797 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 9 00:17:11.109477 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 9 00:17:11.114987 systemd[1]: Started systemd-logind.service - User Login Management.
May 9 00:17:11.127640 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 9 00:17:11.138641 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 9 00:17:11.157025 systemd[1]: issuegen.service: Deactivated successfully.
May 9 00:17:11.157885 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 9 00:17:11.167623 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 9 00:17:11.175759 locksmithd[1660]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 9 00:17:11.193405 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 9 00:17:11.203624 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 9 00:17:11.243724 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 9 00:17:11.245876 systemd[1]: Reached target getty.target - Login Prompts.
May 9 00:17:11.376316 bash[1658]: Updated "/home/core/.ssh/authorized_keys"
May 9 00:17:11.379307 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 9 00:17:11.382868 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 9 00:17:11.425303 containerd[1621]: time="2025-05-09T00:17:11.422640536Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 9 00:17:11.451358 containerd[1621]: time="2025-05-09T00:17:11.451276579Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 9 00:17:11.453736 containerd[1621]: time="2025-05-09T00:17:11.453617038Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 9 00:17:11.453736 containerd[1621]: time="2025-05-09T00:17:11.453646333Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 9 00:17:11.453736 containerd[1621]: time="2025-05-09T00:17:11.453664067Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 9 00:17:11.453943 containerd[1621]: time="2025-05-09T00:17:11.453870764Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 9 00:17:11.453943 containerd[1621]: time="2025-05-09T00:17:11.453890682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 9 00:17:11.453999 containerd[1621]: time="2025-05-09T00:17:11.453971102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 9 00:17:11.453999 containerd[1621]: time="2025-05-09T00:17:11.453984247Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 9 00:17:11.454315 containerd[1621]: time="2025-05-09T00:17:11.454274712Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 9 00:17:11.454315 containerd[1621]: time="2025-05-09T00:17:11.454306181Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 9 00:17:11.454382 containerd[1621]: time="2025-05-09T00:17:11.454321029Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 9 00:17:11.454382 containerd[1621]: time="2025-05-09T00:17:11.454330777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 9 00:17:11.455326 containerd[1621]: time="2025-05-09T00:17:11.454430334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 9 00:17:11.455326 containerd[1621]: time="2025-05-09T00:17:11.454687326Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 9 00:17:11.455326 containerd[1621]: time="2025-05-09T00:17:11.454864167Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 9 00:17:11.455326 containerd[1621]: time="2025-05-09T00:17:11.454877442Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 9 00:17:11.455326 containerd[1621]: time="2025-05-09T00:17:11.454985224Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 9 00:17:11.455326 containerd[1621]: time="2025-05-09T00:17:11.455049254Z" level=info msg="metadata content store policy set" policy=shared
May 9 00:17:11.464342 containerd[1621]: time="2025-05-09T00:17:11.464135855Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 9 00:17:11.464342 containerd[1621]: time="2025-05-09T00:17:11.464241253Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 9 00:17:11.464342 containerd[1621]: time="2025-05-09T00:17:11.464262452Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 9 00:17:11.464342 containerd[1621]: time="2025-05-09T00:17:11.464295975Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 9 00:17:11.464342 containerd[1621]: time="2025-05-09T00:17:11.464314931Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 9 00:17:11.464615 containerd[1621]: time="2025-05-09T00:17:11.464563276Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 9 00:17:11.464959 containerd[1621]: time="2025-05-09T00:17:11.464936526Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 9 00:17:11.465103 containerd[1621]: time="2025-05-09T00:17:11.465073533Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 9 00:17:11.465136 containerd[1621]: time="2025-05-09T00:17:11.465111044Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 9 00:17:11.465136 containerd[1621]: time="2025-05-09T00:17:11.465129829Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 9 00:17:11.465189 containerd[1621]: time="2025-05-09T00:17:11.465145408Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 9 00:17:11.465213 containerd[1621]: time="2025-05-09T00:17:11.465184562Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..."
type=io.containerd.service.v1 May 9 00:17:11.465213 containerd[1621]: time="2025-05-09T00:17:11.465203828Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 9 00:17:11.465255 containerd[1621]: time="2025-05-09T00:17:11.465221541Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 9 00:17:11.465255 containerd[1621]: time="2025-05-09T00:17:11.465239605Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 9 00:17:11.465318 containerd[1621]: time="2025-05-09T00:17:11.465256366Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 9 00:17:11.465318 containerd[1621]: time="2025-05-09T00:17:11.465271264Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 9 00:17:11.465386 containerd[1621]: time="2025-05-09T00:17:11.465366162Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 9 00:17:11.465413 containerd[1621]: time="2025-05-09T00:17:11.465396639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 9 00:17:11.465436 containerd[1621]: time="2025-05-09T00:17:11.465415555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 9 00:17:11.465470 containerd[1621]: time="2025-05-09T00:17:11.465445030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 9 00:17:11.465470 containerd[1621]: time="2025-05-09T00:17:11.465461711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 May 9 00:17:11.465520 containerd[1621]: time="2025-05-09T00:17:11.465477140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 9 00:17:11.465520 containerd[1621]: time="2025-05-09T00:17:11.465493351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 9 00:17:11.465520 containerd[1621]: time="2025-05-09T00:17:11.465517817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 9 00:17:11.465588 containerd[1621]: time="2025-05-09T00:17:11.465533907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 9 00:17:11.465588 containerd[1621]: time="2025-05-09T00:17:11.465550047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 9 00:17:11.465588 containerd[1621]: time="2025-05-09T00:17:11.465567440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 9 00:17:11.465588 containerd[1621]: time="2025-05-09T00:17:11.465581205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 9 00:17:11.465679 containerd[1621]: time="2025-05-09T00:17:11.465594981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 9 00:17:11.465679 containerd[1621]: time="2025-05-09T00:17:11.465609859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 9 00:17:11.465679 containerd[1621]: time="2025-05-09T00:17:11.465625899Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 9 00:17:11.465679 containerd[1621]: time="2025-05-09T00:17:11.465649313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 May 9 00:17:11.465679 containerd[1621]: time="2025-05-09T00:17:11.465664662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 9 00:17:11.465679 containerd[1621]: time="2025-05-09T00:17:11.465677085Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 9 00:17:11.465822 containerd[1621]: time="2025-05-09T00:17:11.465739773Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 9 00:17:11.465822 containerd[1621]: time="2025-05-09T00:17:11.465761894Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 9 00:17:11.465869 containerd[1621]: time="2025-05-09T00:17:11.465774899Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 9 00:17:11.465893 containerd[1621]: time="2025-05-09T00:17:11.465879906Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 9 00:17:11.465917 containerd[1621]: time="2025-05-09T00:17:11.465892820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 9 00:17:11.465940 containerd[1621]: time="2025-05-09T00:17:11.465919450Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 9 00:17:11.465940 containerd[1621]: time="2025-05-09T00:17:11.465932895Z" level=info msg="NRI interface is disabled by configuration." May 9 00:17:11.465989 containerd[1621]: time="2025-05-09T00:17:11.465955678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 9 00:17:11.466352 containerd[1621]: time="2025-05-09T00:17:11.466276980Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 9 00:17:11.466352 containerd[1621]: time="2025-05-09T00:17:11.466349476Z" level=info msg="Connect containerd service" May 9 00:17:11.466615 containerd[1621]: time="2025-05-09T00:17:11.466414037Z" level=info msg="using legacy CRI server" May 9 00:17:11.466615 containerd[1621]: time="2025-05-09T00:17:11.466424246Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 9 00:17:11.466615 containerd[1621]: time="2025-05-09T00:17:11.466566523Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 9 00:17:11.467504 containerd[1621]: time="2025-05-09T00:17:11.467467303Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 00:17:11.467659 containerd[1621]: time="2025-05-09T00:17:11.467618075Z" level=info msg="Start subscribing containerd event" May 9 00:17:11.467690 containerd[1621]: time="2025-05-09T00:17:11.467675293Z" level=info msg="Start recovering state" May 9 00:17:11.467763 containerd[1621]: time="2025-05-09T00:17:11.467747398Z" level=info msg="Start event monitor" May 9 00:17:11.467794 containerd[1621]: time="2025-05-09T00:17:11.467766624Z" level=info msg="Start snapshots 
syncer" May 9 00:17:11.467794 containerd[1621]: time="2025-05-09T00:17:11.467778446Z" level=info msg="Start cni network conf syncer for default" May 9 00:17:11.467794 containerd[1621]: time="2025-05-09T00:17:11.467787613Z" level=info msg="Start streaming server" May 9 00:17:11.468353 containerd[1621]: time="2025-05-09T00:17:11.468319320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 9 00:17:11.468390 containerd[1621]: time="2025-05-09T00:17:11.468382479Z" level=info msg=serving... address=/run/containerd/containerd.sock May 9 00:17:11.468598 systemd[1]: Started containerd.service - containerd container runtime. May 9 00:17:11.470004 containerd[1621]: time="2025-05-09T00:17:11.468750820Z" level=info msg="containerd successfully booted in 0.054127s" May 9 00:17:11.618078 tar[1619]: linux-amd64/LICENSE May 9 00:17:11.618232 tar[1619]: linux-amd64/README.md May 9 00:17:11.638300 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 9 00:17:12.232200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:17:12.234362 systemd[1]: Reached target multi-user.target - Multi-User System. May 9 00:17:12.235985 systemd[1]: Startup finished in 11.596s (kernel) + 5.520s (userspace) = 17.116s. 
May 9 00:17:12.238856 (kubelet)[1704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:17:12.726836 kubelet[1704]: E0509 00:17:12.726597 1704 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:17:12.731474 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:17:12.731841 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:17:19.832595 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 9 00:17:19.840524 systemd[1]: Started sshd@0-10.0.0.125:22-10.0.0.1:49600.service - OpenSSH per-connection server daemon (10.0.0.1:49600). May 9 00:17:19.882814 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 49600 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:17:19.884827 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:17:19.894708 systemd-logind[1597]: New session 1 of user core. May 9 00:17:19.896108 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 9 00:17:19.902620 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 9 00:17:19.917840 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 9 00:17:19.929608 systemd[1]: Starting user@500.service - User Manager for UID 500... May 9 00:17:19.933476 (systemd)[1724]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 9 00:17:20.054984 systemd[1724]: Queued start job for default target default.target. 
May 9 00:17:20.055462 systemd[1724]: Created slice app.slice - User Application Slice. May 9 00:17:20.055486 systemd[1724]: Reached target paths.target - Paths. May 9 00:17:20.055501 systemd[1724]: Reached target timers.target - Timers. May 9 00:17:20.067379 systemd[1724]: Starting dbus.socket - D-Bus User Message Bus Socket... May 9 00:17:20.074149 systemd[1724]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 9 00:17:20.074211 systemd[1724]: Reached target sockets.target - Sockets. May 9 00:17:20.074224 systemd[1724]: Reached target basic.target - Basic System. May 9 00:17:20.074263 systemd[1724]: Reached target default.target - Main User Target. May 9 00:17:20.074315 systemd[1724]: Startup finished in 133ms. May 9 00:17:20.075123 systemd[1]: Started user@500.service - User Manager for UID 500. May 9 00:17:20.077326 systemd[1]: Started session-1.scope - Session 1 of User core. May 9 00:17:20.136651 systemd[1]: Started sshd@1-10.0.0.125:22-10.0.0.1:49606.service - OpenSSH per-connection server daemon (10.0.0.1:49606). May 9 00:17:20.169758 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 49606 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:17:20.171356 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:17:20.175749 systemd-logind[1597]: New session 2 of user core. May 9 00:17:20.194658 systemd[1]: Started session-2.scope - Session 2 of User core. May 9 00:17:20.250798 sshd[1739]: Connection closed by 10.0.0.1 port 49606 May 9 00:17:20.251234 sshd-session[1736]: pam_unix(sshd:session): session closed for user core May 9 00:17:20.265827 systemd[1]: Started sshd@2-10.0.0.125:22-10.0.0.1:49612.service - OpenSSH per-connection server daemon (10.0.0.1:49612). May 9 00:17:20.266774 systemd[1]: sshd@1-10.0.0.125:22-10.0.0.1:49606.service: Deactivated successfully. May 9 00:17:20.269035 systemd[1]: session-2.scope: Deactivated successfully. 
May 9 00:17:20.269956 systemd-logind[1597]: Session 2 logged out. Waiting for processes to exit. May 9 00:17:20.271633 systemd-logind[1597]: Removed session 2. May 9 00:17:20.300142 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 49612 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:17:20.301936 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:17:20.306312 systemd-logind[1597]: New session 3 of user core. May 9 00:17:20.323674 systemd[1]: Started session-3.scope - Session 3 of User core. May 9 00:17:20.376398 sshd[1747]: Connection closed by 10.0.0.1 port 49612 May 9 00:17:20.376858 sshd-session[1742]: pam_unix(sshd:session): session closed for user core May 9 00:17:20.393582 systemd[1]: Started sshd@3-10.0.0.125:22-10.0.0.1:49616.service - OpenSSH per-connection server daemon (10.0.0.1:49616). May 9 00:17:20.394249 systemd[1]: sshd@2-10.0.0.125:22-10.0.0.1:49612.service: Deactivated successfully. May 9 00:17:20.397395 systemd-logind[1597]: Session 3 logged out. Waiting for processes to exit. May 9 00:17:20.399140 systemd[1]: session-3.scope: Deactivated successfully. May 9 00:17:20.400240 systemd-logind[1597]: Removed session 3. May 9 00:17:20.425314 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 49616 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:17:20.426995 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:17:20.430958 systemd-logind[1597]: New session 4 of user core. May 9 00:17:20.440593 systemd[1]: Started session-4.scope - Session 4 of User core. May 9 00:17:20.496229 sshd[1755]: Connection closed by 10.0.0.1 port 49616 May 9 00:17:20.496680 sshd-session[1749]: pam_unix(sshd:session): session closed for user core May 9 00:17:20.511600 systemd[1]: Started sshd@4-10.0.0.125:22-10.0.0.1:49630.service - OpenSSH per-connection server daemon (10.0.0.1:49630). 
May 9 00:17:20.512124 systemd[1]: sshd@3-10.0.0.125:22-10.0.0.1:49616.service: Deactivated successfully. May 9 00:17:20.514486 systemd-logind[1597]: Session 4 logged out. Waiting for processes to exit. May 9 00:17:20.515132 systemd[1]: session-4.scope: Deactivated successfully. May 9 00:17:20.516669 systemd-logind[1597]: Removed session 4. May 9 00:17:20.545048 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 49630 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:17:20.546774 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:17:20.551344 systemd-logind[1597]: New session 5 of user core. May 9 00:17:20.561547 systemd[1]: Started session-5.scope - Session 5 of User core. May 9 00:17:20.623496 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 9 00:17:20.623979 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:17:20.642021 sudo[1764]: pam_unix(sudo:session): session closed for user root May 9 00:17:20.644365 sshd[1763]: Connection closed by 10.0.0.1 port 49630 May 9 00:17:20.644833 sshd-session[1757]: pam_unix(sshd:session): session closed for user core May 9 00:17:20.655514 systemd[1]: Started sshd@5-10.0.0.125:22-10.0.0.1:49642.service - OpenSSH per-connection server daemon (10.0.0.1:49642). May 9 00:17:20.655989 systemd[1]: sshd@4-10.0.0.125:22-10.0.0.1:49630.service: Deactivated successfully. May 9 00:17:20.658268 systemd-logind[1597]: Session 5 logged out. Waiting for processes to exit. May 9 00:17:20.659082 systemd[1]: session-5.scope: Deactivated successfully. May 9 00:17:20.660469 systemd-logind[1597]: Removed session 5. 
May 9 00:17:20.694936 sshd[1766]: Accepted publickey for core from 10.0.0.1 port 49642 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:17:20.696751 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:17:20.701067 systemd-logind[1597]: New session 6 of user core. May 9 00:17:20.714547 systemd[1]: Started session-6.scope - Session 6 of User core. May 9 00:17:20.769434 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 9 00:17:20.769779 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:17:20.773157 sudo[1774]: pam_unix(sudo:session): session closed for user root May 9 00:17:20.780241 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 9 00:17:20.780614 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:17:20.799709 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 9 00:17:20.830143 augenrules[1796]: No rules May 9 00:17:20.831343 systemd[1]: audit-rules.service: Deactivated successfully. May 9 00:17:20.831805 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 9 00:17:20.833167 sudo[1773]: pam_unix(sudo:session): session closed for user root May 9 00:17:20.835137 sshd[1772]: Connection closed by 10.0.0.1 port 49642 May 9 00:17:20.835403 sshd-session[1766]: pam_unix(sshd:session): session closed for user core May 9 00:17:20.851683 systemd[1]: Started sshd@6-10.0.0.125:22-10.0.0.1:49658.service - OpenSSH per-connection server daemon (10.0.0.1:49658). May 9 00:17:20.852423 systemd[1]: sshd@5-10.0.0.125:22-10.0.0.1:49642.service: Deactivated successfully. May 9 00:17:20.854497 systemd[1]: session-6.scope: Deactivated successfully. May 9 00:17:20.855182 systemd-logind[1597]: Session 6 logged out. Waiting for processes to exit. 
May 9 00:17:20.856539 systemd-logind[1597]: Removed session 6. May 9 00:17:20.883548 sshd[1802]: Accepted publickey for core from 10.0.0.1 port 49658 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:17:20.885094 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:17:20.889648 systemd-logind[1597]: New session 7 of user core. May 9 00:17:20.899777 systemd[1]: Started session-7.scope - Session 7 of User core. May 9 00:17:20.957084 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 9 00:17:20.957573 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:17:21.607556 systemd[1]: Starting docker.service - Docker Application Container Engine... May 9 00:17:21.607817 (dockerd)[1829]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 9 00:17:22.257194 dockerd[1829]: time="2025-05-09T00:17:22.257105045Z" level=info msg="Starting up" May 9 00:17:22.981961 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 9 00:17:22.992461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:17:23.670086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:17:23.681964 (kubelet)[1865]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:17:23.709357 dockerd[1829]: time="2025-05-09T00:17:23.709268800Z" level=info msg="Loading containers: start." 
May 9 00:17:23.827095 kubelet[1865]: E0509 00:17:23.827010 1865 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:17:23.835149 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:17:23.835542 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:17:23.954332 kernel: Initializing XFRM netlink socket May 9 00:17:24.062840 systemd-networkd[1247]: docker0: Link UP May 9 00:17:24.119750 dockerd[1829]: time="2025-05-09T00:17:24.119663988Z" level=info msg="Loading containers: done." May 9 00:17:24.140657 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2984501264-merged.mount: Deactivated successfully. May 9 00:17:24.141621 dockerd[1829]: time="2025-05-09T00:17:24.141548549Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 9 00:17:24.141724 dockerd[1829]: time="2025-05-09T00:17:24.141699312Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 May 9 00:17:24.141899 dockerd[1829]: time="2025-05-09T00:17:24.141868238Z" level=info msg="Daemon has completed initialization" May 9 00:17:24.196055 dockerd[1829]: time="2025-05-09T00:17:24.195809107Z" level=info msg="API listen on /run/docker.sock" May 9 00:17:24.196275 systemd[1]: Started docker.service - Docker Application Container Engine. 
May 9 00:17:25.648390 containerd[1621]: time="2025-05-09T00:17:25.648332467Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 9 00:17:28.240069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4167985712.mount: Deactivated successfully. May 9 00:17:30.172375 containerd[1621]: time="2025-05-09T00:17:30.172274984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:17:30.173134 containerd[1621]: time="2025-05-09T00:17:30.173044817Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" May 9 00:17:30.174438 containerd[1621]: time="2025-05-09T00:17:30.174397614Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:17:30.178845 containerd[1621]: time="2025-05-09T00:17:30.178787317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:17:30.180220 containerd[1621]: time="2025-05-09T00:17:30.180164910Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 4.531779084s" May 9 00:17:30.180331 containerd[1621]: time="2025-05-09T00:17:30.180218861Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 9 00:17:30.213006 containerd[1621]: 
time="2025-05-09T00:17:30.212945275Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 9 00:17:32.599009 containerd[1621]: time="2025-05-09T00:17:32.598918827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:17:32.725118 containerd[1621]: time="2025-05-09T00:17:32.725011145Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" May 9 00:17:32.808195 containerd[1621]: time="2025-05-09T00:17:32.808117199Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:17:32.869436 containerd[1621]: time="2025-05-09T00:17:32.869242400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:17:32.870559 containerd[1621]: time="2025-05-09T00:17:32.870481664Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.657468472s" May 9 00:17:32.870559 containerd[1621]: time="2025-05-09T00:17:32.870547548Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 9 00:17:32.899129 containerd[1621]: time="2025-05-09T00:17:32.899078994Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 9 
00:17:34.085675 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 9 00:17:34.101429 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:17:34.401568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:17:34.402667 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:17:34.533842 kubelet[2142]: E0509 00:17:34.533778 2142 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:17:34.540003 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:17:34.540574 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 9 00:17:35.158234 containerd[1621]: time="2025-05-09T00:17:35.158140912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:17:35.162477 containerd[1621]: time="2025-05-09T00:17:35.162405229Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" May 9 00:17:35.164927 containerd[1621]: time="2025-05-09T00:17:35.164859251Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:17:35.173319 containerd[1621]: time="2025-05-09T00:17:35.173229989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:17:35.174542 containerd[1621]: time="2025-05-09T00:17:35.174467600Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 2.275341247s" May 9 00:17:35.174588 containerd[1621]: time="2025-05-09T00:17:35.174545386Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 9 00:17:35.208390 containerd[1621]: time="2025-05-09T00:17:35.208340805Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 9 00:17:38.618669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount85405435.mount: Deactivated successfully. 
May 9 00:17:39.046416 containerd[1621]: time="2025-05-09T00:17:39.046184141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:17:39.047140 containerd[1621]: time="2025-05-09T00:17:39.047086954Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 9 00:17:39.048037 containerd[1621]: time="2025-05-09T00:17:39.047971934Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:17:39.050191 containerd[1621]: time="2025-05-09T00:17:39.050147964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:17:39.050733 containerd[1621]: time="2025-05-09T00:17:39.050691233Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 3.842109066s" May 9 00:17:39.050776 containerd[1621]: time="2025-05-09T00:17:39.050734254Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 9 00:17:39.080515 containerd[1621]: time="2025-05-09T00:17:39.080458007Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 9 00:17:39.621750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount224378394.mount: Deactivated successfully. 
May 9 00:17:40.416768 containerd[1621]: time="2025-05-09T00:17:40.416710908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:17:40.417627 containerd[1621]: time="2025-05-09T00:17:40.417575169Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 9 00:17:40.418818 containerd[1621]: time="2025-05-09T00:17:40.418785609Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:17:40.422534 containerd[1621]: time="2025-05-09T00:17:40.422503541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:17:40.423398 containerd[1621]: time="2025-05-09T00:17:40.423364145Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.342864901s" May 9 00:17:40.423398 containerd[1621]: time="2025-05-09T00:17:40.423398199Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 9 00:17:40.445956 containerd[1621]: time="2025-05-09T00:17:40.445896110Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 9 00:17:40.895094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount247523652.mount: Deactivated successfully. 
May 9 00:17:40.899499 containerd[1621]: time="2025-05-09T00:17:40.899451669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:17:40.900254 containerd[1621]: time="2025-05-09T00:17:40.900205242Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 9 00:17:40.901394 containerd[1621]: time="2025-05-09T00:17:40.901345170Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:17:40.903539 containerd[1621]: time="2025-05-09T00:17:40.903508196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:17:40.904168 containerd[1621]: time="2025-05-09T00:17:40.904128129Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 458.193436ms" May 9 00:17:40.904168 containerd[1621]: time="2025-05-09T00:17:40.904160490Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 9 00:17:40.928515 containerd[1621]: time="2025-05-09T00:17:40.928466973Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 9 00:17:41.430019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount264195962.mount: Deactivated successfully. 
May 9 00:17:43.876919 containerd[1621]: time="2025-05-09T00:17:43.876847485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:17:43.877859 containerd[1621]: time="2025-05-09T00:17:43.877808081Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 9 00:17:43.879105 containerd[1621]: time="2025-05-09T00:17:43.879066304Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:17:43.883062 containerd[1621]: time="2025-05-09T00:17:43.883012843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:17:43.884178 containerd[1621]: time="2025-05-09T00:17:43.884115034Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.955604369s" May 9 00:17:43.884178 containerd[1621]: time="2025-05-09T00:17:43.884162396Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 9 00:17:44.668010 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 9 00:17:44.675446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:17:44.826927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 9 00:17:44.834716 (kubelet)[2363]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:17:44.882756 kubelet[2363]: E0509 00:17:44.882678 2363 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:17:44.888049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:17:44.888456 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:17:47.501502 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:17:47.511610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:17:47.529398 systemd[1]: Reloading requested from client PID 2380 ('systemctl') (unit session-7.scope)... May 9 00:17:47.529420 systemd[1]: Reloading... May 9 00:17:47.688382 zram_generator::config[2419]: No configuration found. May 9 00:17:48.798588 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:17:48.878357 systemd[1]: Reloading finished in 1348 ms. May 9 00:17:48.928421 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 9 00:17:48.928572 systemd[1]: kubelet.service: Failed with result 'signal'. May 9 00:17:48.929058 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:17:48.931865 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:17:49.460492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 9 00:17:49.466415 (kubelet)[2479]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:17:49.528766 kubelet[2479]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:17:49.528766 kubelet[2479]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 00:17:49.528766 kubelet[2479]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:17:49.535624 kubelet[2479]: I0509 00:17:49.535557 2479 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:17:49.905732 kubelet[2479]: I0509 00:17:49.905577 2479 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 9 00:17:49.905732 kubelet[2479]: I0509 00:17:49.905613 2479 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:17:49.905898 kubelet[2479]: I0509 00:17:49.905876 2479 server.go:927] "Client rotation is on, will bootstrap in background" May 9 00:17:49.928391 kubelet[2479]: I0509 00:17:49.928338 2479 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:17:49.931781 kubelet[2479]: E0509 00:17:49.931657 2479 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://10.0.0.125:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.125:6443: connect: connection refused May 9 00:17:49.997701 kubelet[2479]: I0509 00:17:49.997649 2479 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 9 00:17:50.015198 kubelet[2479]: I0509 00:17:50.015046 2479 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:17:50.015484 kubelet[2479]: I0509 00:17:50.015144 2479 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":
-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 9 00:17:50.015690 kubelet[2479]: I0509 00:17:50.015502 2479 topology_manager.go:138] "Creating topology manager with none policy" May 9 00:17:50.015690 kubelet[2479]: I0509 00:17:50.015521 2479 container_manager_linux.go:301] "Creating device plugin manager" May 9 00:17:50.022239 kubelet[2479]: I0509 00:17:50.022158 2479 state_mem.go:36] "Initialized new in-memory state store" May 9 00:17:50.038127 kubelet[2479]: I0509 00:17:50.037940 2479 kubelet.go:400] "Attempting to sync node with API server" May 9 00:17:50.038127 kubelet[2479]: I0509 00:17:50.037999 2479 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:17:50.038127 kubelet[2479]: I0509 00:17:50.038037 2479 kubelet.go:312] "Adding apiserver pod source" May 9 00:17:50.038127 kubelet[2479]: I0509 00:17:50.038061 2479 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 00:17:50.058560 kubelet[2479]: W0509 00:17:50.058416 2479 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.125:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused May 9 00:17:50.058560 kubelet[2479]: E0509 00:17:50.058565 2479 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.125:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused May 9 00:17:50.062618 kubelet[2479]: W0509 00:17:50.062488 2479 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused May 9 00:17:50.062618 kubelet[2479]: E0509 00:17:50.062600 2479 
reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused May 9 00:17:50.063710 kubelet[2479]: I0509 00:17:50.063663 2479 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 9 00:17:50.072234 kubelet[2479]: I0509 00:17:50.072157 2479 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:17:50.072465 kubelet[2479]: W0509 00:17:50.072305 2479 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 9 00:17:50.074111 kubelet[2479]: I0509 00:17:50.073873 2479 server.go:1264] "Started kubelet" May 9 00:17:50.075926 kubelet[2479]: I0509 00:17:50.075908 2479 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:17:50.076377 kubelet[2479]: I0509 00:17:50.076302 2479 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:17:50.076783 kubelet[2479]: I0509 00:17:50.076760 2479 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:17:50.076836 kubelet[2479]: I0509 00:17:50.076811 2479 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:17:50.078133 kubelet[2479]: I0509 00:17:50.078097 2479 server.go:455] "Adding debug handlers to kubelet server" May 9 00:17:50.079314 kubelet[2479]: E0509 00:17:50.079292 2479 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:17:50.080560 kubelet[2479]: I0509 00:17:50.079402 2479 volume_manager.go:291] "Starting Kubelet Volume Manager" May 9 00:17:50.080560 kubelet[2479]: I0509 00:17:50.079530 2479 
desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 00:17:50.080560 kubelet[2479]: I0509 00:17:50.079595 2479 reconciler.go:26] "Reconciler: start to sync state" May 9 00:17:50.080560 kubelet[2479]: W0509 00:17:50.079984 2479 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused May 9 00:17:50.080560 kubelet[2479]: E0509 00:17:50.080028 2479 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused May 9 00:17:50.080560 kubelet[2479]: E0509 00:17:50.080433 2479 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="200ms" May 9 00:17:50.081227 kubelet[2479]: I0509 00:17:50.081209 2479 factory.go:221] Registration of the systemd container factory successfully May 9 00:17:50.081424 kubelet[2479]: I0509 00:17:50.081379 2479 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:17:50.082110 kubelet[2479]: E0509 00:17:50.081871 2479 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 00:17:50.082590 kubelet[2479]: I0509 00:17:50.082571 2479 factory.go:221] Registration of the containerd container factory successfully May 9 00:17:50.087476 kubelet[2479]: E0509 00:17:50.087250 2479 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.125:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.125:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183db3c08d35cfd2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-09 00:17:50.073827282 +0000 UTC m=+0.602696691,LastTimestamp:2025-05-09 00:17:50.073827282 +0000 UTC m=+0.602696691,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 9 00:17:50.104418 kubelet[2479]: I0509 00:17:50.104156 2479 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:17:50.106819 kubelet[2479]: I0509 00:17:50.106793 2479 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 9 00:17:50.107666 kubelet[2479]: I0509 00:17:50.106921 2479 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 00:17:50.107666 kubelet[2479]: I0509 00:17:50.106959 2479 kubelet.go:2337] "Starting kubelet main sync loop" May 9 00:17:50.107666 kubelet[2479]: E0509 00:17:50.107018 2479 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:17:50.107666 kubelet[2479]: W0509 00:17:50.107603 2479 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused May 9 00:17:50.107666 kubelet[2479]: E0509 00:17:50.107639 2479 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused May 9 00:17:50.114479 kubelet[2479]: I0509 00:17:50.113922 2479 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 00:17:50.114479 kubelet[2479]: I0509 00:17:50.114017 2479 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 00:17:50.114479 kubelet[2479]: I0509 00:17:50.114043 2479 state_mem.go:36] "Initialized new in-memory state store" May 9 00:17:50.181788 kubelet[2479]: I0509 00:17:50.181709 2479 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 00:17:50.182363 kubelet[2479]: E0509 00:17:50.182262 2479 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost" May 9 00:17:50.207788 kubelet[2479]: E0509 00:17:50.207692 2479 kubelet.go:2361] "Skipping pod 
synchronization" err="container runtime status check may not have completed yet" May 9 00:17:50.281952 kubelet[2479]: E0509 00:17:50.281875 2479 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="400ms" May 9 00:17:50.391150 kubelet[2479]: I0509 00:17:50.390720 2479 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 00:17:50.391771 kubelet[2479]: E0509 00:17:50.391727 2479 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost" May 9 00:17:50.408477 kubelet[2479]: E0509 00:17:50.408356 2479 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 9 00:17:50.417569 kubelet[2479]: I0509 00:17:50.417003 2479 policy_none.go:49] "None policy: Start" May 9 00:17:50.419664 kubelet[2479]: I0509 00:17:50.419640 2479 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 00:17:50.420191 kubelet[2479]: I0509 00:17:50.419789 2479 state_mem.go:35] "Initializing new in-memory state store" May 9 00:17:50.444174 kubelet[2479]: I0509 00:17:50.443192 2479 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 00:17:50.444174 kubelet[2479]: I0509 00:17:50.443566 2479 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 00:17:50.444174 kubelet[2479]: I0509 00:17:50.443727 2479 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:17:50.449669 kubelet[2479]: E0509 00:17:50.449599 2479 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" 
May 9 00:17:50.683441 kubelet[2479]: E0509 00:17:50.683344 2479 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="800ms" May 9 00:17:50.793529 kubelet[2479]: I0509 00:17:50.793369 2479 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 00:17:50.793850 kubelet[2479]: E0509 00:17:50.793820 2479 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost" May 9 00:17:50.809033 kubelet[2479]: I0509 00:17:50.808967 2479 topology_manager.go:215] "Topology Admit Handler" podUID="aa99550779d530812f4d3937e6f241ae" podNamespace="kube-system" podName="kube-apiserver-localhost" May 9 00:17:50.810170 kubelet[2479]: I0509 00:17:50.810148 2479 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 9 00:17:50.811055 kubelet[2479]: I0509 00:17:50.811022 2479 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 9 00:17:50.884773 kubelet[2479]: I0509 00:17:50.884722 2479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa99550779d530812f4d3937e6f241ae-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aa99550779d530812f4d3937e6f241ae\") " pod="kube-system/kube-apiserver-localhost" May 9 00:17:50.884773 kubelet[2479]: I0509 00:17:50.884769 2479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:17:50.884934 kubelet[2479]: I0509 00:17:50.884867 2479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:17:50.884934 kubelet[2479]: I0509 00:17:50.884915 2479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:17:50.884986 kubelet[2479]: I0509 00:17:50.884937 2479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:17:50.884986 kubelet[2479]: I0509 00:17:50.884957 2479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa99550779d530812f4d3937e6f241ae-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa99550779d530812f4d3937e6f241ae\") " pod="kube-system/kube-apiserver-localhost" May 9 00:17:50.884986 kubelet[2479]: I0509 00:17:50.884973 2479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/aa99550779d530812f4d3937e6f241ae-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa99550779d530812f4d3937e6f241ae\") " pod="kube-system/kube-apiserver-localhost" May 9 00:17:50.885058 kubelet[2479]: I0509 00:17:50.885020 2479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:17:50.885082 kubelet[2479]: I0509 00:17:50.885067 2479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 9 00:17:51.116259 kubelet[2479]: E0509 00:17:51.116092 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:17:51.116937 kubelet[2479]: E0509 00:17:51.116780 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:17:51.117140 containerd[1621]: time="2025-05-09T00:17:51.116891725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aa99550779d530812f4d3937e6f241ae,Namespace:kube-system,Attempt:0,}" May 9 00:17:51.117140 containerd[1621]: time="2025-05-09T00:17:51.117089983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 9 00:17:51.118662 kubelet[2479]: E0509 
00:17:51.118527 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:17:51.118959 containerd[1621]: time="2025-05-09T00:17:51.118918866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 9 00:17:51.205329 kubelet[2479]: W0509 00:17:51.205210 2479 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused May 9 00:17:51.205329 kubelet[2479]: E0509 00:17:51.205346 2479 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused May 9 00:17:51.252728 kubelet[2479]: W0509 00:17:51.252654 2479 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused May 9 00:17:51.252728 kubelet[2479]: E0509 00:17:51.252737 2479 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused May 9 00:17:51.399466 kubelet[2479]: W0509 00:17:51.399236 2479 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: 
connect: connection refused May 9 00:17:51.399466 kubelet[2479]: E0509 00:17:51.399366 2479 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused May 9 00:17:51.484427 kubelet[2479]: E0509 00:17:51.484355 2479 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="1.6s" May 9 00:17:51.501044 kubelet[2479]: W0509 00:17:51.500953 2479 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.125:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused May 9 00:17:51.501044 kubelet[2479]: E0509 00:17:51.501031 2479 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.125:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused May 9 00:17:51.570932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2856612267.mount: Deactivated successfully. 
May 9 00:17:51.580762 containerd[1621]: time="2025-05-09T00:17:51.580684379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:17:51.583821 containerd[1621]: time="2025-05-09T00:17:51.583745045Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 9 00:17:51.584816 containerd[1621]: time="2025-05-09T00:17:51.584757468Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:17:51.586849 containerd[1621]: time="2025-05-09T00:17:51.586795982Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:17:51.587532 containerd[1621]: time="2025-05-09T00:17:51.587484356Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 00:17:51.588671 containerd[1621]: time="2025-05-09T00:17:51.588622611Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:17:51.590401 containerd[1621]: time="2025-05-09T00:17:51.590368695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:17:51.590674 containerd[1621]: time="2025-05-09T00:17:51.590638561Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 00:17:51.591249 
containerd[1621]: time="2025-05-09T00:17:51.591206976Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 474.04721ms" May 9 00:17:51.595392 containerd[1621]: time="2025-05-09T00:17:51.595142423Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 478.143042ms" May 9 00:17:51.595925 kubelet[2479]: I0509 00:17:51.595852 2479 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 00:17:51.596224 kubelet[2479]: E0509 00:17:51.596193 2479 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost" May 9 00:17:51.598998 containerd[1621]: time="2025-05-09T00:17:51.598950245Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 479.915518ms" May 9 00:17:51.725140 containerd[1621]: time="2025-05-09T00:17:51.724837469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:17:51.725140 containerd[1621]: time="2025-05-09T00:17:51.724920187Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:17:51.725140 containerd[1621]: time="2025-05-09T00:17:51.724934424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:17:51.725140 containerd[1621]: time="2025-05-09T00:17:51.725041659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:17:51.726179 containerd[1621]: time="2025-05-09T00:17:51.726090923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:17:51.726236 containerd[1621]: time="2025-05-09T00:17:51.726180325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:17:51.726236 containerd[1621]: time="2025-05-09T00:17:51.726211815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:17:51.726536 containerd[1621]: time="2025-05-09T00:17:51.726358204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:17:51.726787 containerd[1621]: time="2025-05-09T00:17:51.723546553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:17:51.726787 containerd[1621]: time="2025-05-09T00:17:51.726770311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:17:51.727089 containerd[1621]: time="2025-05-09T00:17:51.726788777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:17:51.727089 containerd[1621]: time="2025-05-09T00:17:51.726890861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:17:51.795129 containerd[1621]: time="2025-05-09T00:17:51.795075959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aa99550779d530812f4d3937e6f241ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"79400806ba56d3b23decd081375f03756d2edf6ad1732f7f62f4db8846b67acb\"" May 9 00:17:51.796353 kubelet[2479]: E0509 00:17:51.796262 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:17:51.800318 containerd[1621]: time="2025-05-09T00:17:51.800259318Z" level=info msg="CreateContainer within sandbox \"79400806ba56d3b23decd081375f03756d2edf6ad1732f7f62f4db8846b67acb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 9 00:17:51.800467 containerd[1621]: time="2025-05-09T00:17:51.800337578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"97281ef4efd25a88e678e288417caad4df72f51c2a175a83a30981b868fd281e\"" May 9 00:17:51.800996 kubelet[2479]: E0509 00:17:51.800957 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:17:51.803046 containerd[1621]: time="2025-05-09T00:17:51.803000965Z" level=info msg="CreateContainer within sandbox \"97281ef4efd25a88e678e288417caad4df72f51c2a175a83a30981b868fd281e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 9 00:17:51.804503 containerd[1621]: time="2025-05-09T00:17:51.804414154Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"d48d192fc463933d7028ef78c4f508d5dda7de573c14d1529c57b4d4de6faa56\"" May 9 00:17:51.805107 kubelet[2479]: E0509 00:17:51.805077 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:17:51.806920 containerd[1621]: time="2025-05-09T00:17:51.806864985Z" level=info msg="CreateContainer within sandbox \"d48d192fc463933d7028ef78c4f508d5dda7de573c14d1529c57b4d4de6faa56\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 9 00:17:51.888858 containerd[1621]: time="2025-05-09T00:17:51.888785406Z" level=info msg="CreateContainer within sandbox \"97281ef4efd25a88e678e288417caad4df72f51c2a175a83a30981b868fd281e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ee6fe4903b1b2c08f377d1d493a72584981d93b62bcea9725bc7e71b1eb44132\"" May 9 00:17:51.889546 containerd[1621]: time="2025-05-09T00:17:51.889510531Z" level=info msg="StartContainer for \"ee6fe4903b1b2c08f377d1d493a72584981d93b62bcea9725bc7e71b1eb44132\"" May 9 00:17:51.890448 containerd[1621]: time="2025-05-09T00:17:51.890394098Z" level=info msg="CreateContainer within sandbox \"79400806ba56d3b23decd081375f03756d2edf6ad1732f7f62f4db8846b67acb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2e7c28e27b98a24690fd430a533880b5c1a5468013b6c712925edfe0b6ebfd07\"" May 9 00:17:51.890931 containerd[1621]: time="2025-05-09T00:17:51.890881008Z" level=info msg="StartContainer for \"2e7c28e27b98a24690fd430a533880b5c1a5468013b6c712925edfe0b6ebfd07\"" May 9 00:17:51.894393 containerd[1621]: time="2025-05-09T00:17:51.894344453Z" level=info msg="CreateContainer within sandbox \"d48d192fc463933d7028ef78c4f508d5dda7de573c14d1529c57b4d4de6faa56\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d2a2b0a6ebfb27356bcf17b1a09c10faf82e16881f5eac263551635bfd108ce4\"" May 9 00:17:51.895507 containerd[1621]: time="2025-05-09T00:17:51.895361075Z" level=info msg="StartContainer for \"d2a2b0a6ebfb27356bcf17b1a09c10faf82e16881f5eac263551635bfd108ce4\"" May 9 00:17:51.993756 containerd[1621]: time="2025-05-09T00:17:51.993423916Z" level=info msg="StartContainer for \"2e7c28e27b98a24690fd430a533880b5c1a5468013b6c712925edfe0b6ebfd07\" returns successfully" May 9 00:17:51.999496 containerd[1621]: time="2025-05-09T00:17:51.999466648Z" level=info msg="StartContainer for \"ee6fe4903b1b2c08f377d1d493a72584981d93b62bcea9725bc7e71b1eb44132\" returns successfully" May 9 00:17:52.007020 containerd[1621]: time="2025-05-09T00:17:52.006948385Z" level=info msg="StartContainer for \"d2a2b0a6ebfb27356bcf17b1a09c10faf82e16881f5eac263551635bfd108ce4\" returns successfully" May 9 00:17:52.118548 kubelet[2479]: E0509 00:17:52.118514 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:17:52.121667 kubelet[2479]: E0509 00:17:52.121651 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:17:52.122183 kubelet[2479]: E0509 00:17:52.122170 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:17:53.127678 kubelet[2479]: E0509 00:17:53.127620 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:17:53.131130 kubelet[2479]: E0509 00:17:53.131098 2479 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:17:53.188499 kubelet[2479]: E0509 00:17:53.188434 2479 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 9 00:17:53.198344 kubelet[2479]: I0509 00:17:53.198267 2479 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 00:17:53.210521 kubelet[2479]: I0509 00:17:53.210454 2479 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 9 00:17:53.218611 kubelet[2479]: E0509 00:17:53.218548 2479 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:17:53.319438 kubelet[2479]: E0509 00:17:53.319342 2479 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:17:53.353714 kubelet[2479]: E0509 00:17:53.353662 2479 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 9 00:17:53.354000 kubelet[2479]: E0509 00:17:53.353973 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:17:54.041557 kubelet[2479]: I0509 00:17:54.041513 2479 apiserver.go:52] "Watching apiserver" May 9 00:17:54.080133 kubelet[2479]: I0509 00:17:54.080058 2479 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 00:17:55.455760 systemd[1]: Reloading requested from client PID 2762 ('systemctl') (unit session-7.scope)... May 9 00:17:55.455779 systemd[1]: Reloading... May 9 00:17:55.530446 zram_generator::config[2806]: No configuration found. 
May 9 00:17:55.647009 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:17:55.733460 systemd[1]: Reloading finished in 277 ms. May 9 00:17:55.770619 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:17:55.783015 systemd[1]: kubelet.service: Deactivated successfully. May 9 00:17:55.783556 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:17:55.800625 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:17:55.956543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:17:55.963673 (kubelet)[2856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:17:56.019013 kubelet[2856]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:17:56.019013 kubelet[2856]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 00:17:56.019013 kubelet[2856]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 9 00:17:56.019013 kubelet[2856]: I0509 00:17:56.018973 2856 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:17:56.024659 kubelet[2856]: I0509 00:17:56.024610 2856 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 9 00:17:56.024659 kubelet[2856]: I0509 00:17:56.024639 2856 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:17:56.024890 kubelet[2856]: I0509 00:17:56.024861 2856 server.go:927] "Client rotation is on, will bootstrap in background" May 9 00:17:56.026224 kubelet[2856]: I0509 00:17:56.026197 2856 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 9 00:17:56.027747 kubelet[2856]: I0509 00:17:56.027709 2856 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:17:56.039229 kubelet[2856]: I0509 00:17:56.039182 2856 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 00:17:56.040387 kubelet[2856]: I0509 00:17:56.040274 2856 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:17:56.040670 kubelet[2856]: I0509 00:17:56.040377 2856 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 9 00:17:56.040802 kubelet[2856]: I0509 00:17:56.040685 2856 topology_manager.go:138] "Creating topology manager with none policy" May 9 
00:17:56.040802 kubelet[2856]: I0509 00:17:56.040701 2856 container_manager_linux.go:301] "Creating device plugin manager" May 9 00:17:56.040802 kubelet[2856]: I0509 00:17:56.040758 2856 state_mem.go:36] "Initialized new in-memory state store" May 9 00:17:56.040905 kubelet[2856]: I0509 00:17:56.040888 2856 kubelet.go:400] "Attempting to sync node with API server" May 9 00:17:56.040938 kubelet[2856]: I0509 00:17:56.040907 2856 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:17:56.040938 kubelet[2856]: I0509 00:17:56.040935 2856 kubelet.go:312] "Adding apiserver pod source" May 9 00:17:56.040995 kubelet[2856]: I0509 00:17:56.040960 2856 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 00:17:56.043264 kubelet[2856]: I0509 00:17:56.043219 2856 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 9 00:17:56.050330 kubelet[2856]: I0509 00:17:56.043529 2856 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:17:56.050330 kubelet[2856]: I0509 00:17:56.044241 2856 server.go:1264] "Started kubelet" May 9 00:17:56.050330 kubelet[2856]: I0509 00:17:56.045655 2856 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:17:56.050330 kubelet[2856]: I0509 00:17:56.050066 2856 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:17:56.051007 kubelet[2856]: I0509 00:17:56.050912 2856 volume_manager.go:291] "Starting Kubelet Volume Manager" May 9 00:17:56.051358 kubelet[2856]: I0509 00:17:56.051322 2856 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:17:56.051601 kubelet[2856]: I0509 00:17:56.051564 2856 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 00:17:56.051810 kubelet[2856]: I0509 00:17:56.051790 2856 reconciler.go:26] "Reconciler: start to sync state" May 
9 00:17:56.054319 kubelet[2856]: I0509 00:17:56.052531 2856 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:17:56.054319 kubelet[2856]: I0509 00:17:56.053959 2856 server.go:455] "Adding debug handlers to kubelet server" May 9 00:17:56.061033 kubelet[2856]: I0509 00:17:56.060981 2856 factory.go:221] Registration of the systemd container factory successfully May 9 00:17:56.061187 kubelet[2856]: I0509 00:17:56.061105 2856 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:17:56.067351 kubelet[2856]: I0509 00:17:56.067189 2856 factory.go:221] Registration of the containerd container factory successfully May 9 00:17:56.068041 kubelet[2856]: E0509 00:17:56.068005 2856 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 00:17:56.070035 kubelet[2856]: I0509 00:17:56.069820 2856 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:17:56.072063 kubelet[2856]: I0509 00:17:56.072016 2856 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 9 00:17:56.072063 kubelet[2856]: I0509 00:17:56.072067 2856 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 00:17:56.072173 kubelet[2856]: I0509 00:17:56.072095 2856 kubelet.go:2337] "Starting kubelet main sync loop" May 9 00:17:56.072245 kubelet[2856]: E0509 00:17:56.072160 2856 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:17:56.131534 kubelet[2856]: I0509 00:17:56.131489 2856 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 00:17:56.131534 kubelet[2856]: I0509 00:17:56.131514 2856 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 00:17:56.131534 kubelet[2856]: I0509 00:17:56.131538 2856 state_mem.go:36] "Initialized new in-memory state store" May 9 00:17:56.131747 kubelet[2856]: I0509 00:17:56.131728 2856 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 9 00:17:56.131775 kubelet[2856]: I0509 00:17:56.131742 2856 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 9 00:17:56.131775 kubelet[2856]: I0509 00:17:56.131764 2856 policy_none.go:49] "None policy: Start" May 9 00:17:56.132386 kubelet[2856]: I0509 00:17:56.132370 2856 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 00:17:56.132435 kubelet[2856]: I0509 00:17:56.132396 2856 state_mem.go:35] "Initializing new in-memory state store" May 9 00:17:56.132618 kubelet[2856]: I0509 00:17:56.132595 2856 state_mem.go:75] "Updated machine memory state" May 9 00:17:56.135890 kubelet[2856]: I0509 00:17:56.134590 2856 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 00:17:56.135890 kubelet[2856]: I0509 00:17:56.134826 2856 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 00:17:56.135890 kubelet[2856]: I0509 00:17:56.134944 2856 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:17:56.152393 update_engine[1599]: I20250509 00:17:56.152324 1599 update_attempter.cc:509] Updating boot flags... May 9 00:17:56.156765 kubelet[2856]: I0509 00:17:56.156717 2856 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 00:17:56.172556 kubelet[2856]: I0509 00:17:56.172477 2856 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 9 00:17:56.172664 kubelet[2856]: I0509 00:17:56.172642 2856 topology_manager.go:215] "Topology Admit Handler" podUID="aa99550779d530812f4d3937e6f241ae" podNamespace="kube-system" podName="kube-apiserver-localhost" May 9 00:17:56.172749 kubelet[2856]: I0509 00:17:56.172719 2856 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 9 00:17:56.349052 kubelet[2856]: I0509 00:17:56.348913 2856 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 9 00:17:56.349052 kubelet[2856]: I0509 00:17:56.349024 2856 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 9 00:17:56.352663 kubelet[2856]: I0509 00:17:56.352591 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 9 00:17:56.352663 kubelet[2856]: I0509 00:17:56.352632 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa99550779d530812f4d3937e6f241ae-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa99550779d530812f4d3937e6f241ae\") " 
pod="kube-system/kube-apiserver-localhost" May 9 00:17:56.352663 kubelet[2856]: I0509 00:17:56.352657 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa99550779d530812f4d3937e6f241ae-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aa99550779d530812f4d3937e6f241ae\") " pod="kube-system/kube-apiserver-localhost" May 9 00:17:56.352823 kubelet[2856]: I0509 00:17:56.352675 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:17:56.352823 kubelet[2856]: I0509 00:17:56.352689 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:17:56.352823 kubelet[2856]: I0509 00:17:56.352704 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa99550779d530812f4d3937e6f241ae-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa99550779d530812f4d3937e6f241ae\") " pod="kube-system/kube-apiserver-localhost" May 9 00:17:56.352823 kubelet[2856]: I0509 00:17:56.352717 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " 
pod="kube-system/kube-controller-manager-localhost" May 9 00:17:56.352823 kubelet[2856]: I0509 00:17:56.352733 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:17:56.352943 kubelet[2856]: I0509 00:17:56.352747 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:17:56.485330 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2895) May 9 00:17:56.492956 sudo[2901]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 9 00:17:56.493506 sudo[2901]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 9 00:17:56.530329 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2894) May 9 00:17:56.571336 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2894) May 9 00:17:56.638862 kubelet[2856]: E0509 00:17:56.638125 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:17:56.639762 kubelet[2856]: E0509 00:17:56.639722 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 
00:17:56.640481 kubelet[2856]: E0509 00:17:56.640452 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:17:57.008226 sudo[2901]: pam_unix(sudo:session): session closed for user root May 9 00:17:57.042648 kubelet[2856]: I0509 00:17:57.042589 2856 apiserver.go:52] "Watching apiserver" May 9 00:17:57.052195 kubelet[2856]: I0509 00:17:57.052154 2856 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 00:17:57.091966 kubelet[2856]: E0509 00:17:57.091921 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:17:57.094448 kubelet[2856]: E0509 00:17:57.092873 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:17:57.102879 kubelet[2856]: E0509 00:17:57.101131 2856 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 9 00:17:57.102879 kubelet[2856]: E0509 00:17:57.101701 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:17:57.136850 kubelet[2856]: I0509 00:17:57.136713 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.136693929 podStartE2EDuration="1.136693929s" podCreationTimestamp="2025-05-09 00:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:17:57.136351529 +0000 UTC m=+1.167647499" 
watchObservedRunningTime="2025-05-09 00:17:57.136693929 +0000 UTC m=+1.167989879" May 9 00:17:57.154326 kubelet[2856]: I0509 00:17:57.154082 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.154049074 podStartE2EDuration="1.154049074s" podCreationTimestamp="2025-05-09 00:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:17:57.153685394 +0000 UTC m=+1.184981354" watchObservedRunningTime="2025-05-09 00:17:57.154049074 +0000 UTC m=+1.185345024" May 9 00:17:57.154326 kubelet[2856]: I0509 00:17:57.154155 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.154151339 podStartE2EDuration="1.154151339s" podCreationTimestamp="2025-05-09 00:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:17:57.146159073 +0000 UTC m=+1.177455023" watchObservedRunningTime="2025-05-09 00:17:57.154151339 +0000 UTC m=+1.185447289" May 9 00:17:58.092777 kubelet[2856]: E0509 00:17:58.092739 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:17:58.471220 sudo[1809]: pam_unix(sudo:session): session closed for user root May 9 00:17:58.473201 sshd[1808]: Connection closed by 10.0.0.1 port 49658 May 9 00:17:58.473750 sshd-session[1802]: pam_unix(sshd:session): session closed for user core May 9 00:17:58.478793 systemd[1]: sshd@6-10.0.0.125:22-10.0.0.1:49658.service: Deactivated successfully. May 9 00:17:58.481923 systemd-logind[1597]: Session 7 logged out. Waiting for processes to exit. May 9 00:17:58.482108 systemd[1]: session-7.scope: Deactivated successfully. 
May 9 00:17:58.483681 systemd-logind[1597]: Removed session 7. May 9 00:17:59.097166 kubelet[2856]: E0509 00:17:59.097055 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:02.646299 kubelet[2856]: E0509 00:18:02.646207 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:03.100945 kubelet[2856]: E0509 00:18:03.100908 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:04.963531 kubelet[2856]: E0509 00:18:04.963486 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:05.103565 kubelet[2856]: E0509 00:18:05.103528 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:06.103749 kubelet[2856]: E0509 00:18:06.103686 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:07.107377 kubelet[2856]: E0509 00:18:07.107275 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:08.107984 kubelet[2856]: E0509 00:18:08.107942 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:10.357800 kubelet[2856]: I0509 
00:18:10.357748 2856 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 9 00:18:10.358454 kubelet[2856]: I0509 00:18:10.358394 2856 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 9 00:18:10.358506 containerd[1621]: time="2025-05-09T00:18:10.358107304Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 9 00:18:11.272555 kubelet[2856]: I0509 00:18:11.272270 2856 topology_manager.go:215] "Topology Admit Handler" podUID="a7756c0d-f988-4f9d-9541-f0fdc67842c4" podNamespace="kube-system" podName="kube-proxy-cvwlc" May 9 00:18:11.304199 kubelet[2856]: I0509 00:18:11.304125 2856 topology_manager.go:215] "Topology Admit Handler" podUID="400348e2-f9bf-42cc-81a5-ec19aa5c53f7" podNamespace="kube-system" podName="cilium-hj268" May 9 00:18:11.339372 kubelet[2856]: I0509 00:18:11.335815 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-bpf-maps\") pod \"cilium-hj268\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " pod="kube-system/cilium-hj268" May 9 00:18:11.339372 kubelet[2856]: I0509 00:18:11.335878 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a7756c0d-f988-4f9d-9541-f0fdc67842c4-kube-proxy\") pod \"kube-proxy-cvwlc\" (UID: \"a7756c0d-f988-4f9d-9541-f0fdc67842c4\") " pod="kube-system/kube-proxy-cvwlc" May 9 00:18:11.339372 kubelet[2856]: I0509 00:18:11.335905 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-cilium-config-path\") pod \"cilium-hj268\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " 
pod="kube-system/cilium-hj268" May 9 00:18:11.339372 kubelet[2856]: I0509 00:18:11.335925 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7756c0d-f988-4f9d-9541-f0fdc67842c4-lib-modules\") pod \"kube-proxy-cvwlc\" (UID: \"a7756c0d-f988-4f9d-9541-f0fdc67842c4\") " pod="kube-system/kube-proxy-cvwlc" May 9 00:18:11.339372 kubelet[2856]: I0509 00:18:11.335944 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-hostproc\") pod \"cilium-hj268\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " pod="kube-system/cilium-hj268" May 9 00:18:11.339372 kubelet[2856]: I0509 00:18:11.335964 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-host-proc-sys-kernel\") pod \"cilium-hj268\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " pod="kube-system/cilium-hj268" May 9 00:18:11.339740 kubelet[2856]: I0509 00:18:11.335984 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-cilium-run\") pod \"cilium-hj268\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " pod="kube-system/cilium-hj268" May 9 00:18:11.339740 kubelet[2856]: I0509 00:18:11.336002 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-cni-path\") pod \"cilium-hj268\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " pod="kube-system/cilium-hj268" May 9 00:18:11.339740 kubelet[2856]: I0509 00:18:11.336020 2856 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-lib-modules\") pod \"cilium-hj268\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " pod="kube-system/cilium-hj268" May 9 00:18:11.339740 kubelet[2856]: I0509 00:18:11.336037 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-clustermesh-secrets\") pod \"cilium-hj268\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " pod="kube-system/cilium-hj268" May 9 00:18:11.339740 kubelet[2856]: I0509 00:18:11.336056 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp4nt\" (UniqueName: \"kubernetes.io/projected/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-kube-api-access-fp4nt\") pod \"cilium-hj268\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " pod="kube-system/cilium-hj268" May 9 00:18:11.339740 kubelet[2856]: I0509 00:18:11.336074 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7756c0d-f988-4f9d-9541-f0fdc67842c4-xtables-lock\") pod \"kube-proxy-cvwlc\" (UID: \"a7756c0d-f988-4f9d-9541-f0fdc67842c4\") " pod="kube-system/kube-proxy-cvwlc" May 9 00:18:11.339986 kubelet[2856]: I0509 00:18:11.336095 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llk8h\" (UniqueName: \"kubernetes.io/projected/a7756c0d-f988-4f9d-9541-f0fdc67842c4-kube-api-access-llk8h\") pod \"kube-proxy-cvwlc\" (UID: \"a7756c0d-f988-4f9d-9541-f0fdc67842c4\") " pod="kube-system/kube-proxy-cvwlc" May 9 00:18:11.339986 kubelet[2856]: I0509 00:18:11.336114 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-etc-cni-netd\") pod \"cilium-hj268\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " pod="kube-system/cilium-hj268" May 9 00:18:11.339986 kubelet[2856]: I0509 00:18:11.336134 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-xtables-lock\") pod \"cilium-hj268\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " pod="kube-system/cilium-hj268" May 9 00:18:11.339986 kubelet[2856]: I0509 00:18:11.336153 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-host-proc-sys-net\") pod \"cilium-hj268\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " pod="kube-system/cilium-hj268" May 9 00:18:11.339986 kubelet[2856]: I0509 00:18:11.336171 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-hubble-tls\") pod \"cilium-hj268\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " pod="kube-system/cilium-hj268" May 9 00:18:11.339986 kubelet[2856]: I0509 00:18:11.336194 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-cilium-cgroup\") pod \"cilium-hj268\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " pod="kube-system/cilium-hj268" May 9 00:18:11.350974 kubelet[2856]: I0509 00:18:11.350902 2856 topology_manager.go:215] "Topology Admit Handler" podUID="f1c5feda-33a4-43b1-803c-af7e33beef5d" podNamespace="kube-system" podName="cilium-operator-599987898-pmqr9" May 9 00:18:11.437340 kubelet[2856]: I0509 00:18:11.436985 2856 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1c5feda-33a4-43b1-803c-af7e33beef5d-cilium-config-path\") pod \"cilium-operator-599987898-pmqr9\" (UID: \"f1c5feda-33a4-43b1-803c-af7e33beef5d\") " pod="kube-system/cilium-operator-599987898-pmqr9" May 9 00:18:11.437340 kubelet[2856]: I0509 00:18:11.437124 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5x7h\" (UniqueName: \"kubernetes.io/projected/f1c5feda-33a4-43b1-803c-af7e33beef5d-kube-api-access-x5x7h\") pod \"cilium-operator-599987898-pmqr9\" (UID: \"f1c5feda-33a4-43b1-803c-af7e33beef5d\") " pod="kube-system/cilium-operator-599987898-pmqr9" May 9 00:18:11.578704 kubelet[2856]: E0509 00:18:11.578516 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:11.579438 containerd[1621]: time="2025-05-09T00:18:11.579211534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cvwlc,Uid:a7756c0d-f988-4f9d-9541-f0fdc67842c4,Namespace:kube-system,Attempt:0,}" May 9 00:18:11.614387 kubelet[2856]: E0509 00:18:11.614266 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:11.615070 containerd[1621]: time="2025-05-09T00:18:11.615003016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hj268,Uid:400348e2-f9bf-42cc-81a5-ec19aa5c53f7,Namespace:kube-system,Attempt:0,}" May 9 00:18:11.656541 kubelet[2856]: E0509 00:18:11.656480 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:11.657117 containerd[1621]: 
time="2025-05-09T00:18:11.657062562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-pmqr9,Uid:f1c5feda-33a4-43b1-803c-af7e33beef5d,Namespace:kube-system,Attempt:0,}" May 9 00:18:12.667854 containerd[1621]: time="2025-05-09T00:18:12.667686383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:18:12.667854 containerd[1621]: time="2025-05-09T00:18:12.667817420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:18:12.669782 containerd[1621]: time="2025-05-09T00:18:12.669400905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:18:12.669782 containerd[1621]: time="2025-05-09T00:18:12.669530930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:18:12.669782 containerd[1621]: time="2025-05-09T00:18:12.669543553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:18:12.669782 containerd[1621]: time="2025-05-09T00:18:12.669376187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:18:12.669782 containerd[1621]: time="2025-05-09T00:18:12.669514098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:18:12.670814 containerd[1621]: time="2025-05-09T00:18:12.670719158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:18:12.681348 containerd[1621]: time="2025-05-09T00:18:12.679600043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:18:12.681348 containerd[1621]: time="2025-05-09T00:18:12.679698076Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:18:12.681348 containerd[1621]: time="2025-05-09T00:18:12.679722784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:18:12.681348 containerd[1621]: time="2025-05-09T00:18:12.679832330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:18:12.753020 containerd[1621]: time="2025-05-09T00:18:12.752953960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hj268,Uid:400348e2-f9bf-42cc-81a5-ec19aa5c53f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff7fd6d7d69ed240ab96adce585197318a3563096c0dcbf7feb54cd6fd80c3c1\"" May 9 00:18:12.757438 kubelet[2856]: E0509 00:18:12.757403 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:12.760786 containerd[1621]: time="2025-05-09T00:18:12.760735774Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 9 00:18:12.761374 containerd[1621]: time="2025-05-09T00:18:12.761345703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cvwlc,Uid:a7756c0d-f988-4f9d-9541-f0fdc67842c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b6d313ba7a24bdf8c2791fd444b8224395577deabecead0a9c4826807cf418d\"" May 9 00:18:12.762707 
kubelet[2856]: E0509 00:18:12.762663 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:12.769203 containerd[1621]: time="2025-05-09T00:18:12.769046603Z" level=info msg="CreateContainer within sandbox \"3b6d313ba7a24bdf8c2791fd444b8224395577deabecead0a9c4826807cf418d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 00:18:12.791684 containerd[1621]: time="2025-05-09T00:18:12.791626330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-pmqr9,Uid:f1c5feda-33a4-43b1-803c-af7e33beef5d,Namespace:kube-system,Attempt:0,} returns sandbox id \"441ee725f709b9d936ca2db9c5ab74864cea258a0dc446def95359e98cd8a744\"" May 9 00:18:12.793223 kubelet[2856]: E0509 00:18:12.793195 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:12.803780 containerd[1621]: time="2025-05-09T00:18:12.803663801Z" level=info msg="CreateContainer within sandbox \"3b6d313ba7a24bdf8c2791fd444b8224395577deabecead0a9c4826807cf418d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"24a3a01610452512bc639b60bb300e192bf109f010d4998a64fd01fac7e0e933\"" May 9 00:18:12.804546 containerd[1621]: time="2025-05-09T00:18:12.804387586Z" level=info msg="StartContainer for \"24a3a01610452512bc639b60bb300e192bf109f010d4998a64fd01fac7e0e933\"" May 9 00:18:12.893188 containerd[1621]: time="2025-05-09T00:18:12.893124580Z" level=info msg="StartContainer for \"24a3a01610452512bc639b60bb300e192bf109f010d4998a64fd01fac7e0e933\" returns successfully" May 9 00:18:13.126809 kubelet[2856]: E0509 00:18:13.126436 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
May 9 00:18:13.138892 kubelet[2856]: I0509 00:18:13.138130 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cvwlc" podStartSLOduration=3.138099182 podStartE2EDuration="3.138099182s" podCreationTimestamp="2025-05-09 00:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:18:13.137509741 +0000 UTC m=+17.168805701" watchObservedRunningTime="2025-05-09 00:18:13.138099182 +0000 UTC m=+17.169395132" May 9 00:18:18.825060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1721437855.mount: Deactivated successfully. May 9 00:18:20.742628 systemd[1]: Started sshd@7-10.0.0.125:22-10.0.0.1:55454.service - OpenSSH per-connection server daemon (10.0.0.1:55454). May 9 00:18:20.777803 sshd[3248]: Accepted publickey for core from 10.0.0.1 port 55454 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:18:20.779687 sshd-session[3248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:18:20.784708 systemd-logind[1597]: New session 8 of user core. May 9 00:18:20.794583 systemd[1]: Started session-8.scope - Session 8 of User core. May 9 00:18:21.132109 sshd[3251]: Connection closed by 10.0.0.1 port 55454 May 9 00:18:21.132459 sshd-session[3248]: pam_unix(sshd:session): session closed for user core May 9 00:18:21.136962 systemd[1]: sshd@7-10.0.0.125:22-10.0.0.1:55454.service: Deactivated successfully. May 9 00:18:21.139653 systemd-logind[1597]: Session 8 logged out. Waiting for processes to exit. May 9 00:18:21.140383 systemd[1]: session-8.scope: Deactivated successfully. May 9 00:18:21.141328 systemd-logind[1597]: Removed session 8. 
May 9 00:18:24.598195 containerd[1621]: time="2025-05-09T00:18:24.598127392Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:18:24.598925 containerd[1621]: time="2025-05-09T00:18:24.598881980Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 9 00:18:24.600372 containerd[1621]: time="2025-05-09T00:18:24.600336123Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:18:24.602441 containerd[1621]: time="2025-05-09T00:18:24.602246123Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.841332354s" May 9 00:18:24.602441 containerd[1621]: time="2025-05-09T00:18:24.602313170Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 9 00:18:24.605651 containerd[1621]: time="2025-05-09T00:18:24.605612951Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 9 00:18:24.608590 containerd[1621]: time="2025-05-09T00:18:24.608542758Z" level=info msg="CreateContainer within sandbox \"ff7fd6d7d69ed240ab96adce585197318a3563096c0dcbf7feb54cd6fd80c3c1\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 9 00:18:24.627078 containerd[1621]: time="2025-05-09T00:18:24.627032969Z" level=info msg="CreateContainer within sandbox \"ff7fd6d7d69ed240ab96adce585197318a3563096c0dcbf7feb54cd6fd80c3c1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d4ce68bf2adebdbb23a3cf677a67c326ed588396bc1d324cd35f4958a90799c4\"" May 9 00:18:24.628614 containerd[1621]: time="2025-05-09T00:18:24.627702678Z" level=info msg="StartContainer for \"d4ce68bf2adebdbb23a3cf677a67c326ed588396bc1d324cd35f4958a90799c4\"" May 9 00:18:24.687710 containerd[1621]: time="2025-05-09T00:18:24.687657451Z" level=info msg="StartContainer for \"d4ce68bf2adebdbb23a3cf677a67c326ed588396bc1d324cd35f4958a90799c4\" returns successfully" May 9 00:18:25.248308 kubelet[2856]: E0509 00:18:25.248258 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:25.623920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4ce68bf2adebdbb23a3cf677a67c326ed588396bc1d324cd35f4958a90799c4-rootfs.mount: Deactivated successfully. May 9 00:18:25.945310 containerd[1621]: time="2025-05-09T00:18:25.945209543Z" level=info msg="shim disconnected" id=d4ce68bf2adebdbb23a3cf677a67c326ed588396bc1d324cd35f4958a90799c4 namespace=k8s.io May 9 00:18:25.945310 containerd[1621]: time="2025-05-09T00:18:25.945276348Z" level=warning msg="cleaning up after shim disconnected" id=d4ce68bf2adebdbb23a3cf677a67c326ed588396bc1d324cd35f4958a90799c4 namespace=k8s.io May 9 00:18:25.945310 containerd[1621]: time="2025-05-09T00:18:25.945305634Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:18:26.145603 systemd[1]: Started sshd@8-10.0.0.125:22-10.0.0.1:55458.service - OpenSSH per-connection server daemon (10.0.0.1:55458). 
May 9 00:18:26.205028 sshd[3343]: Accepted publickey for core from 10.0.0.1 port 55458 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:18:26.206889 sshd-session[3343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:18:26.211751 systemd-logind[1597]: New session 9 of user core. May 9 00:18:26.219567 systemd[1]: Started session-9.scope - Session 9 of User core. May 9 00:18:26.250632 kubelet[2856]: E0509 00:18:26.250601 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:26.252810 containerd[1621]: time="2025-05-09T00:18:26.252765895Z" level=info msg="CreateContainer within sandbox \"ff7fd6d7d69ed240ab96adce585197318a3563096c0dcbf7feb54cd6fd80c3c1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 9 00:18:26.443201 sshd[3346]: Connection closed by 10.0.0.1 port 55458 May 9 00:18:26.443686 sshd-session[3343]: pam_unix(sshd:session): session closed for user core May 9 00:18:26.449211 systemd[1]: sshd@8-10.0.0.125:22-10.0.0.1:55458.service: Deactivated successfully. May 9 00:18:26.452322 systemd-logind[1597]: Session 9 logged out. Waiting for processes to exit. May 9 00:18:26.452507 systemd[1]: session-9.scope: Deactivated successfully. May 9 00:18:26.461314 systemd-logind[1597]: Removed session 9. 
May 9 00:18:26.464635 containerd[1621]: time="2025-05-09T00:18:26.464579557Z" level=info msg="CreateContainer within sandbox \"ff7fd6d7d69ed240ab96adce585197318a3563096c0dcbf7feb54cd6fd80c3c1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5401644d2ab64e3d694deda1d0747864e689f5aa256ffe334ce7749f96674f3c\"" May 9 00:18:26.466321 containerd[1621]: time="2025-05-09T00:18:26.465219348Z" level=info msg="StartContainer for \"5401644d2ab64e3d694deda1d0747864e689f5aa256ffe334ce7749f96674f3c\"" May 9 00:18:26.527742 containerd[1621]: time="2025-05-09T00:18:26.527674830Z" level=info msg="StartContainer for \"5401644d2ab64e3d694deda1d0747864e689f5aa256ffe334ce7749f96674f3c\" returns successfully" May 9 00:18:26.539264 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 00:18:26.540118 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 00:18:26.540208 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 9 00:18:26.549058 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 00:18:26.570231 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 00:18:26.573720 containerd[1621]: time="2025-05-09T00:18:26.573656638Z" level=info msg="shim disconnected" id=5401644d2ab64e3d694deda1d0747864e689f5aa256ffe334ce7749f96674f3c namespace=k8s.io May 9 00:18:26.573860 containerd[1621]: time="2025-05-09T00:18:26.573724936Z" level=warning msg="cleaning up after shim disconnected" id=5401644d2ab64e3d694deda1d0747864e689f5aa256ffe334ce7749f96674f3c namespace=k8s.io May 9 00:18:26.573860 containerd[1621]: time="2025-05-09T00:18:26.573740375Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:18:26.624500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5401644d2ab64e3d694deda1d0747864e689f5aa256ffe334ce7749f96674f3c-rootfs.mount: Deactivated successfully. 
May 9 00:18:27.254802 kubelet[2856]: E0509 00:18:27.254759 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:27.257241 containerd[1621]: time="2025-05-09T00:18:27.257202852Z" level=info msg="CreateContainer within sandbox \"ff7fd6d7d69ed240ab96adce585197318a3563096c0dcbf7feb54cd6fd80c3c1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 9 00:18:27.281554 containerd[1621]: time="2025-05-09T00:18:27.281499108Z" level=info msg="CreateContainer within sandbox \"ff7fd6d7d69ed240ab96adce585197318a3563096c0dcbf7feb54cd6fd80c3c1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a9cd46cd47d435cebb387c8d85b5d7ef4548703bc8cce0d670dc2ca5a9b96786\"" May 9 00:18:27.282146 containerd[1621]: time="2025-05-09T00:18:27.282102932Z" level=info msg="StartContainer for \"a9cd46cd47d435cebb387c8d85b5d7ef4548703bc8cce0d670dc2ca5a9b96786\"" May 9 00:18:27.349670 containerd[1621]: time="2025-05-09T00:18:27.349551160Z" level=info msg="StartContainer for \"a9cd46cd47d435cebb387c8d85b5d7ef4548703bc8cce0d670dc2ca5a9b96786\" returns successfully" May 9 00:18:27.380214 containerd[1621]: time="2025-05-09T00:18:27.380150359Z" level=info msg="shim disconnected" id=a9cd46cd47d435cebb387c8d85b5d7ef4548703bc8cce0d670dc2ca5a9b96786 namespace=k8s.io May 9 00:18:27.380214 containerd[1621]: time="2025-05-09T00:18:27.380212886Z" level=warning msg="cleaning up after shim disconnected" id=a9cd46cd47d435cebb387c8d85b5d7ef4548703bc8cce0d670dc2ca5a9b96786 namespace=k8s.io May 9 00:18:27.380214 containerd[1621]: time="2025-05-09T00:18:27.380222374Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:18:27.395967 containerd[1621]: time="2025-05-09T00:18:27.395875492Z" level=warning msg="cleanup warnings time=\"2025-05-09T00:18:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate 
successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 9 00:18:27.623650 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9cd46cd47d435cebb387c8d85b5d7ef4548703bc8cce0d670dc2ca5a9b96786-rootfs.mount: Deactivated successfully. May 9 00:18:27.978633 containerd[1621]: time="2025-05-09T00:18:27.978578941Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:18:27.979636 containerd[1621]: time="2025-05-09T00:18:27.979593998Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 9 00:18:27.981205 containerd[1621]: time="2025-05-09T00:18:27.981162746Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:18:27.982555 containerd[1621]: time="2025-05-09T00:18:27.982513633Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.376860095s" May 9 00:18:27.982555 containerd[1621]: time="2025-05-09T00:18:27.982544631Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 9 00:18:27.985230 containerd[1621]: time="2025-05-09T00:18:27.985198036Z" level=info msg="CreateContainer within 
sandbox \"441ee725f709b9d936ca2db9c5ab74864cea258a0dc446def95359e98cd8a744\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 9 00:18:27.999563 containerd[1621]: time="2025-05-09T00:18:27.999512580Z" level=info msg="CreateContainer within sandbox \"441ee725f709b9d936ca2db9c5ab74864cea258a0dc446def95359e98cd8a744\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d7e51430aeec3590d77310294d9297d8c61eca27636d0bbe601153750b1988a7\"" May 9 00:18:28.000533 containerd[1621]: time="2025-05-09T00:18:28.000491449Z" level=info msg="StartContainer for \"d7e51430aeec3590d77310294d9297d8c61eca27636d0bbe601153750b1988a7\"" May 9 00:18:28.071181 containerd[1621]: time="2025-05-09T00:18:28.071125317Z" level=info msg="StartContainer for \"d7e51430aeec3590d77310294d9297d8c61eca27636d0bbe601153750b1988a7\" returns successfully" May 9 00:18:28.258603 kubelet[2856]: E0509 00:18:28.258206 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:28.264523 kubelet[2856]: E0509 00:18:28.264494 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:28.266705 containerd[1621]: time="2025-05-09T00:18:28.266654404Z" level=info msg="CreateContainer within sandbox \"ff7fd6d7d69ed240ab96adce585197318a3563096c0dcbf7feb54cd6fd80c3c1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 9 00:18:28.267966 kubelet[2856]: I0509 00:18:28.267638 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-pmqr9" podStartSLOduration=2.081925584 podStartE2EDuration="17.267619256s" podCreationTimestamp="2025-05-09 00:18:11 +0000 UTC" firstStartedPulling="2025-05-09 00:18:12.797958711 +0000 UTC m=+16.829254661" 
lastFinishedPulling="2025-05-09 00:18:27.983652393 +0000 UTC m=+32.014948333" observedRunningTime="2025-05-09 00:18:28.267377443 +0000 UTC m=+32.298673393" watchObservedRunningTime="2025-05-09 00:18:28.267619256 +0000 UTC m=+32.298915216" May 9 00:18:28.505326 containerd[1621]: time="2025-05-09T00:18:28.505004121Z" level=info msg="CreateContainer within sandbox \"ff7fd6d7d69ed240ab96adce585197318a3563096c0dcbf7feb54cd6fd80c3c1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f1e2ec99bfb00fc4785b059e178d583ce12e9ebf3aba5ac00d77f1988649635d\"" May 9 00:18:28.507316 containerd[1621]: time="2025-05-09T00:18:28.506557499Z" level=info msg="StartContainer for \"f1e2ec99bfb00fc4785b059e178d583ce12e9ebf3aba5ac00d77f1988649635d\"" May 9 00:18:28.590418 containerd[1621]: time="2025-05-09T00:18:28.590244077Z" level=info msg="StartContainer for \"f1e2ec99bfb00fc4785b059e178d583ce12e9ebf3aba5ac00d77f1988649635d\" returns successfully" May 9 00:18:28.645432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1e2ec99bfb00fc4785b059e178d583ce12e9ebf3aba5ac00d77f1988649635d-rootfs.mount: Deactivated successfully. 
May 9 00:18:28.656578 containerd[1621]: time="2025-05-09T00:18:28.656488900Z" level=info msg="shim disconnected" id=f1e2ec99bfb00fc4785b059e178d583ce12e9ebf3aba5ac00d77f1988649635d namespace=k8s.io May 9 00:18:28.656578 containerd[1621]: time="2025-05-09T00:18:28.656559513Z" level=warning msg="cleaning up after shim disconnected" id=f1e2ec99bfb00fc4785b059e178d583ce12e9ebf3aba5ac00d77f1988649635d namespace=k8s.io May 9 00:18:28.656578 containerd[1621]: time="2025-05-09T00:18:28.656567928Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:18:29.271685 kubelet[2856]: E0509 00:18:29.271366 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:29.271685 kubelet[2856]: E0509 00:18:29.271367 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:29.274405 containerd[1621]: time="2025-05-09T00:18:29.274275837Z" level=info msg="CreateContainer within sandbox \"ff7fd6d7d69ed240ab96adce585197318a3563096c0dcbf7feb54cd6fd80c3c1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 9 00:18:29.295361 containerd[1621]: time="2025-05-09T00:18:29.295311860Z" level=info msg="CreateContainer within sandbox \"ff7fd6d7d69ed240ab96adce585197318a3563096c0dcbf7feb54cd6fd80c3c1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3601a2898790acf0b6d6a06cd399e7ad338e7f18e4bac729f0bf6c133789c78f\"" May 9 00:18:29.296125 containerd[1621]: time="2025-05-09T00:18:29.295919681Z" level=info msg="StartContainer for \"3601a2898790acf0b6d6a06cd399e7ad338e7f18e4bac729f0bf6c133789c78f\"" May 9 00:18:29.366069 containerd[1621]: time="2025-05-09T00:18:29.366017709Z" level=info msg="StartContainer for \"3601a2898790acf0b6d6a06cd399e7ad338e7f18e4bac729f0bf6c133789c78f\" returns 
successfully" May 9 00:18:29.546423 kubelet[2856]: I0509 00:18:29.546306 2856 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 9 00:18:29.693893 kubelet[2856]: I0509 00:18:29.693830 2856 topology_manager.go:215] "Topology Admit Handler" podUID="1b83a217-cbe3-4a9f-bfbc-69ee3b1377e2" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fjsxc" May 9 00:18:29.695690 kubelet[2856]: I0509 00:18:29.695660 2856 topology_manager.go:215] "Topology Admit Handler" podUID="037ac6b9-60ef-4657-ac24-35a0ca89c8da" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xqhds" May 9 00:18:29.884891 kubelet[2856]: I0509 00:18:29.884611 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szqk4\" (UniqueName: \"kubernetes.io/projected/1b83a217-cbe3-4a9f-bfbc-69ee3b1377e2-kube-api-access-szqk4\") pod \"coredns-7db6d8ff4d-fjsxc\" (UID: \"1b83a217-cbe3-4a9f-bfbc-69ee3b1377e2\") " pod="kube-system/coredns-7db6d8ff4d-fjsxc" May 9 00:18:29.884891 kubelet[2856]: I0509 00:18:29.884654 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b83a217-cbe3-4a9f-bfbc-69ee3b1377e2-config-volume\") pod \"coredns-7db6d8ff4d-fjsxc\" (UID: \"1b83a217-cbe3-4a9f-bfbc-69ee3b1377e2\") " pod="kube-system/coredns-7db6d8ff4d-fjsxc" May 9 00:18:29.884891 kubelet[2856]: I0509 00:18:29.884710 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/037ac6b9-60ef-4657-ac24-35a0ca89c8da-config-volume\") pod \"coredns-7db6d8ff4d-xqhds\" (UID: \"037ac6b9-60ef-4657-ac24-35a0ca89c8da\") " pod="kube-system/coredns-7db6d8ff4d-xqhds" May 9 00:18:29.884891 kubelet[2856]: I0509 00:18:29.884738 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxrwx\" 
(UniqueName: \"kubernetes.io/projected/037ac6b9-60ef-4657-ac24-35a0ca89c8da-kube-api-access-mxrwx\") pod \"coredns-7db6d8ff4d-xqhds\" (UID: \"037ac6b9-60ef-4657-ac24-35a0ca89c8da\") " pod="kube-system/coredns-7db6d8ff4d-xqhds" May 9 00:18:30.005695 kubelet[2856]: E0509 00:18:30.005650 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:30.007964 containerd[1621]: time="2025-05-09T00:18:30.007910244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xqhds,Uid:037ac6b9-60ef-4657-ac24-35a0ca89c8da,Namespace:kube-system,Attempt:0,}" May 9 00:18:30.010748 kubelet[2856]: E0509 00:18:30.010705 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:30.011319 containerd[1621]: time="2025-05-09T00:18:30.011233877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fjsxc,Uid:1b83a217-cbe3-4a9f-bfbc-69ee3b1377e2,Namespace:kube-system,Attempt:0,}" May 9 00:18:30.293417 kubelet[2856]: E0509 00:18:30.293379 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:30.310643 kubelet[2856]: I0509 00:18:30.310561 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hj268" podStartSLOduration=7.46468181 podStartE2EDuration="19.310541511s" podCreationTimestamp="2025-05-09 00:18:11 +0000 UTC" firstStartedPulling="2025-05-09 00:18:12.759366895 +0000 UTC m=+16.790662845" lastFinishedPulling="2025-05-09 00:18:24.605226596 +0000 UTC m=+28.636522546" observedRunningTime="2025-05-09 00:18:30.309677888 +0000 UTC m=+34.340973848" watchObservedRunningTime="2025-05-09 00:18:30.310541511 +0000 UTC 
m=+34.341837461" May 9 00:18:31.294884 kubelet[2856]: E0509 00:18:31.294830 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:31.456506 systemd[1]: Started sshd@9-10.0.0.125:22-10.0.0.1:50436.service - OpenSSH per-connection server daemon (10.0.0.1:50436). May 9 00:18:31.492961 sshd[3732]: Accepted publickey for core from 10.0.0.1 port 50436 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:18:31.494760 sshd-session[3732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:18:31.498727 systemd-logind[1597]: New session 10 of user core. May 9 00:18:31.512596 systemd[1]: Started session-10.scope - Session 10 of User core. May 9 00:18:31.589608 systemd-networkd[1247]: cilium_host: Link UP May 9 00:18:31.590535 systemd-networkd[1247]: cilium_net: Link UP May 9 00:18:31.591233 systemd-networkd[1247]: cilium_net: Gained carrier May 9 00:18:31.592265 systemd-networkd[1247]: cilium_host: Gained carrier May 9 00:18:31.592497 systemd-networkd[1247]: cilium_net: Gained IPv6LL May 9 00:18:31.592687 systemd-networkd[1247]: cilium_host: Gained IPv6LL May 9 00:18:31.709171 systemd-networkd[1247]: cilium_vxlan: Link UP May 9 00:18:31.709181 systemd-networkd[1247]: cilium_vxlan: Gained carrier May 9 00:18:31.713145 sshd[3735]: Connection closed by 10.0.0.1 port 50436 May 9 00:18:31.713527 sshd-session[3732]: pam_unix(sshd:session): session closed for user core May 9 00:18:31.718585 systemd[1]: sshd@9-10.0.0.125:22-10.0.0.1:50436.service: Deactivated successfully. May 9 00:18:31.721420 systemd-logind[1597]: Session 10 logged out. Waiting for processes to exit. May 9 00:18:31.724611 systemd[1]: session-10.scope: Deactivated successfully. May 9 00:18:31.725812 systemd-logind[1597]: Removed session 10. 
May 9 00:18:31.953315 kernel: NET: Registered PF_ALG protocol family May 9 00:18:32.296869 kubelet[2856]: E0509 00:18:32.296743 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:32.658515 systemd-networkd[1247]: lxc_health: Link UP May 9 00:18:32.666406 systemd-networkd[1247]: lxc_health: Gained carrier May 9 00:18:33.110825 systemd-networkd[1247]: lxcd994a15cd6e8: Link UP May 9 00:18:33.120508 systemd-networkd[1247]: lxcd5193f123daf: Link UP May 9 00:18:33.133320 kernel: eth0: renamed from tmp13903 May 9 00:18:33.137901 systemd-networkd[1247]: lxcd994a15cd6e8: Gained carrier May 9 00:18:33.139455 kernel: eth0: renamed from tmp376ac May 9 00:18:33.148193 systemd-networkd[1247]: lxcd5193f123daf: Gained carrier May 9 00:18:33.175550 systemd-networkd[1247]: cilium_vxlan: Gained IPv6LL May 9 00:18:33.617683 kubelet[2856]: E0509 00:18:33.617646 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:34.647501 systemd-networkd[1247]: lxc_health: Gained IPv6LL May 9 00:18:34.777521 systemd-networkd[1247]: lxcd5193f123daf: Gained IPv6LL May 9 00:18:34.903471 systemd-networkd[1247]: lxcd994a15cd6e8: Gained IPv6LL May 9 00:18:36.716042 containerd[1621]: time="2025-05-09T00:18:36.715684266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:18:36.716042 containerd[1621]: time="2025-05-09T00:18:36.715787430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:18:36.716042 containerd[1621]: time="2025-05-09T00:18:36.715814150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:18:36.716042 containerd[1621]: time="2025-05-09T00:18:36.715932733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:18:36.720309 containerd[1621]: time="2025-05-09T00:18:36.718475296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:18:36.720309 containerd[1621]: time="2025-05-09T00:18:36.718584331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:18:36.720309 containerd[1621]: time="2025-05-09T00:18:36.718601373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:18:36.720309 containerd[1621]: time="2025-05-09T00:18:36.718731336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:18:36.724727 systemd[1]: Started sshd@10-10.0.0.125:22-10.0.0.1:50438.service - OpenSSH per-connection server daemon (10.0.0.1:50438). 
May 9 00:18:36.754609 systemd-resolved[1469]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:18:36.785727 containerd[1621]: time="2025-05-09T00:18:36.785595698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fjsxc,Uid:1b83a217-cbe3-4a9f-bfbc-69ee3b1377e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"376acf697e4db88b3f0b50f2ee357d42d0c47df82e523ec2e37358d19be9dc2f\"" May 9 00:18:36.787200 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 50438 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:18:36.788597 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:18:36.790034 kubelet[2856]: E0509 00:18:36.789996 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:36.793782 systemd-logind[1597]: New session 11 of user core. May 9 00:18:36.796731 containerd[1621]: time="2025-05-09T00:18:36.796690606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xqhds,Uid:037ac6b9-60ef-4657-ac24-35a0ca89c8da,Namespace:kube-system,Attempt:0,} returns sandbox id \"13903d1e8d18c1cff1d4e6353eb710d801d0266aa49258db14a9ddfdbc19427f\"" May 9 00:18:36.799844 kubelet[2856]: E0509 00:18:36.798944 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:36.800691 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 9 00:18:36.805419 containerd[1621]: time="2025-05-09T00:18:36.805377142Z" level=info msg="CreateContainer within sandbox \"376acf697e4db88b3f0b50f2ee357d42d0c47df82e523ec2e37358d19be9dc2f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 00:18:36.807205 containerd[1621]: time="2025-05-09T00:18:36.807132237Z" level=info msg="CreateContainer within sandbox \"13903d1e8d18c1cff1d4e6353eb710d801d0266aa49258db14a9ddfdbc19427f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 00:18:36.851451 containerd[1621]: time="2025-05-09T00:18:36.851265666Z" level=info msg="CreateContainer within sandbox \"13903d1e8d18c1cff1d4e6353eb710d801d0266aa49258db14a9ddfdbc19427f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6eac068d13d37551082ff16888ecd1273ddb714aad9ee6c6b46b438148acd26e\"" May 9 00:18:36.852167 containerd[1621]: time="2025-05-09T00:18:36.852053776Z" level=info msg="StartContainer for \"6eac068d13d37551082ff16888ecd1273ddb714aad9ee6c6b46b438148acd26e\"" May 9 00:18:36.860388 containerd[1621]: time="2025-05-09T00:18:36.860276463Z" level=info msg="CreateContainer within sandbox \"376acf697e4db88b3f0b50f2ee357d42d0c47df82e523ec2e37358d19be9dc2f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"41683546b8cd3dbd8ead1f7f420860212bb6b408936d3b8948d8129ddf0b05a1\"" May 9 00:18:36.862620 containerd[1621]: time="2025-05-09T00:18:36.862584355Z" level=info msg="StartContainer for \"41683546b8cd3dbd8ead1f7f420860212bb6b408936d3b8948d8129ddf0b05a1\"" May 9 00:18:36.943209 containerd[1621]: time="2025-05-09T00:18:36.943146150Z" level=info msg="StartContainer for \"6eac068d13d37551082ff16888ecd1273ddb714aad9ee6c6b46b438148acd26e\" returns successfully" May 9 00:18:36.943398 containerd[1621]: time="2025-05-09T00:18:36.943160086Z" level=info msg="StartContainer for \"41683546b8cd3dbd8ead1f7f420860212bb6b408936d3b8948d8129ddf0b05a1\" returns successfully" May 9 00:18:36.968355 sshd[4224]: Connection closed 
by 10.0.0.1 port 50438 May 9 00:18:36.969105 sshd-session[4171]: pam_unix(sshd:session): session closed for user core May 9 00:18:36.974189 systemd[1]: sshd@10-10.0.0.125:22-10.0.0.1:50438.service: Deactivated successfully. May 9 00:18:36.976826 systemd[1]: session-11.scope: Deactivated successfully. May 9 00:18:36.976861 systemd-logind[1597]: Session 11 logged out. Waiting for processes to exit. May 9 00:18:36.978145 systemd-logind[1597]: Removed session 11. May 9 00:18:37.308425 kubelet[2856]: E0509 00:18:37.308025 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:37.311130 kubelet[2856]: E0509 00:18:37.311015 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:37.324432 kubelet[2856]: I0509 00:18:37.324092 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fjsxc" podStartSLOduration=26.324073647 podStartE2EDuration="26.324073647s" podCreationTimestamp="2025-05-09 00:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:18:37.323630074 +0000 UTC m=+41.354926024" watchObservedRunningTime="2025-05-09 00:18:37.324073647 +0000 UTC m=+41.355369597" May 9 00:18:37.350976 kubelet[2856]: I0509 00:18:37.350911 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xqhds" podStartSLOduration=26.350891608 podStartE2EDuration="26.350891608s" podCreationTimestamp="2025-05-09 00:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:18:37.35073822 +0000 UTC m=+41.382034180" 
watchObservedRunningTime="2025-05-09 00:18:37.350891608 +0000 UTC m=+41.382187558" May 9 00:18:37.843148 kubelet[2856]: I0509 00:18:37.842825 2856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 00:18:37.846685 kubelet[2856]: E0509 00:18:37.845019 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:38.311791 kubelet[2856]: E0509 00:18:38.311742 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:38.311962 kubelet[2856]: E0509 00:18:38.311933 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:38.312012 kubelet[2856]: E0509 00:18:38.311995 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:39.313350 kubelet[2856]: E0509 00:18:39.313269 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:18:41.983628 systemd[1]: Started sshd@11-10.0.0.125:22-10.0.0.1:35574.service - OpenSSH per-connection server daemon (10.0.0.1:35574). May 9 00:18:42.061231 sshd[4321]: Accepted publickey for core from 10.0.0.1 port 35574 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:18:42.063105 sshd-session[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:18:42.067797 systemd-logind[1597]: New session 12 of user core. May 9 00:18:42.074667 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 9 00:18:42.229515 sshd[4324]: Connection closed by 10.0.0.1 port 35574 May 9 00:18:42.230374 sshd-session[4321]: pam_unix(sshd:session): session closed for user core May 9 00:18:42.240627 systemd[1]: Started sshd@12-10.0.0.125:22-10.0.0.1:35578.service - OpenSSH per-connection server daemon (10.0.0.1:35578). May 9 00:18:42.241147 systemd[1]: sshd@11-10.0.0.125:22-10.0.0.1:35574.service: Deactivated successfully. May 9 00:18:42.244647 systemd-logind[1597]: Session 12 logged out. Waiting for processes to exit. May 9 00:18:42.245626 systemd[1]: session-12.scope: Deactivated successfully. May 9 00:18:42.248382 systemd-logind[1597]: Removed session 12. May 9 00:18:42.280178 sshd[4336]: Accepted publickey for core from 10.0.0.1 port 35578 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:18:42.282106 sshd-session[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:18:42.287107 systemd-logind[1597]: New session 13 of user core. May 9 00:18:42.298730 systemd[1]: Started session-13.scope - Session 13 of User core. May 9 00:18:42.486887 sshd[4342]: Connection closed by 10.0.0.1 port 35578 May 9 00:18:42.487328 sshd-session[4336]: pam_unix(sshd:session): session closed for user core May 9 00:18:42.495591 systemd[1]: Started sshd@13-10.0.0.125:22-10.0.0.1:35584.service - OpenSSH per-connection server daemon (10.0.0.1:35584). May 9 00:18:42.496373 systemd[1]: sshd@12-10.0.0.125:22-10.0.0.1:35578.service: Deactivated successfully. May 9 00:18:42.500573 systemd[1]: session-13.scope: Deactivated successfully. May 9 00:18:42.503478 systemd-logind[1597]: Session 13 logged out. Waiting for processes to exit. May 9 00:18:42.504841 systemd-logind[1597]: Removed session 13. 
May 9 00:18:42.538796 sshd[4351]: Accepted publickey for core from 10.0.0.1 port 35584 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:18:42.540800 sshd-session[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:18:42.546365 systemd-logind[1597]: New session 14 of user core. May 9 00:18:42.560725 systemd[1]: Started session-14.scope - Session 14 of User core. May 9 00:18:42.687678 sshd[4356]: Connection closed by 10.0.0.1 port 35584 May 9 00:18:42.688046 sshd-session[4351]: pam_unix(sshd:session): session closed for user core May 9 00:18:42.691872 systemd[1]: sshd@13-10.0.0.125:22-10.0.0.1:35584.service: Deactivated successfully. May 9 00:18:42.694222 systemd-logind[1597]: Session 14 logged out. Waiting for processes to exit. May 9 00:18:42.694315 systemd[1]: session-14.scope: Deactivated successfully. May 9 00:18:42.695656 systemd-logind[1597]: Removed session 14. May 9 00:18:47.710854 systemd[1]: Started sshd@14-10.0.0.125:22-10.0.0.1:43982.service - OpenSSH per-connection server daemon (10.0.0.1:43982). May 9 00:18:47.748783 sshd[4370]: Accepted publickey for core from 10.0.0.1 port 43982 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:18:47.751081 sshd-session[4370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:18:47.755978 systemd-logind[1597]: New session 15 of user core. May 9 00:18:47.765606 systemd[1]: Started session-15.scope - Session 15 of User core. May 9 00:18:47.881031 sshd[4373]: Connection closed by 10.0.0.1 port 43982 May 9 00:18:47.881606 sshd-session[4370]: pam_unix(sshd:session): session closed for user core May 9 00:18:47.886129 systemd[1]: sshd@14-10.0.0.125:22-10.0.0.1:43982.service: Deactivated successfully. May 9 00:18:47.888836 systemd[1]: session-15.scope: Deactivated successfully. May 9 00:18:47.889632 systemd-logind[1597]: Session 15 logged out. Waiting for processes to exit. 
May 9 00:18:47.890600 systemd-logind[1597]: Removed session 15. May 9 00:18:52.896558 systemd[1]: Started sshd@15-10.0.0.125:22-10.0.0.1:43998.service - OpenSSH per-connection server daemon (10.0.0.1:43998). May 9 00:18:52.931615 sshd[4385]: Accepted publickey for core from 10.0.0.1 port 43998 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:18:52.933598 sshd-session[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:18:52.938441 systemd-logind[1597]: New session 16 of user core. May 9 00:18:52.946669 systemd[1]: Started session-16.scope - Session 16 of User core. May 9 00:18:53.057360 sshd[4388]: Connection closed by 10.0.0.1 port 43998 May 9 00:18:53.057824 sshd-session[4385]: pam_unix(sshd:session): session closed for user core May 9 00:18:53.062469 systemd[1]: sshd@15-10.0.0.125:22-10.0.0.1:43998.service: Deactivated successfully. May 9 00:18:53.065190 systemd-logind[1597]: Session 16 logged out. Waiting for processes to exit. May 9 00:18:53.065326 systemd[1]: session-16.scope: Deactivated successfully. May 9 00:18:53.066560 systemd-logind[1597]: Removed session 16. May 9 00:18:58.073854 systemd[1]: Started sshd@16-10.0.0.125:22-10.0.0.1:59264.service - OpenSSH per-connection server daemon (10.0.0.1:59264). May 9 00:18:58.112388 sshd[4403]: Accepted publickey for core from 10.0.0.1 port 59264 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:18:58.114406 sshd-session[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:18:58.120731 systemd-logind[1597]: New session 17 of user core. May 9 00:18:58.129761 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 9 00:18:58.241260 sshd[4406]: Connection closed by 10.0.0.1 port 59264 May 9 00:18:58.241773 sshd-session[4403]: pam_unix(sshd:session): session closed for user core May 9 00:18:58.250835 systemd[1]: Started sshd@17-10.0.0.125:22-10.0.0.1:59268.service - OpenSSH per-connection server daemon (10.0.0.1:59268). May 9 00:18:58.251934 systemd[1]: sshd@16-10.0.0.125:22-10.0.0.1:59264.service: Deactivated successfully. May 9 00:18:58.256256 systemd-logind[1597]: Session 17 logged out. Waiting for processes to exit. May 9 00:18:58.257687 systemd[1]: session-17.scope: Deactivated successfully. May 9 00:18:58.259191 systemd-logind[1597]: Removed session 17. May 9 00:18:58.292184 sshd[4415]: Accepted publickey for core from 10.0.0.1 port 59268 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:18:58.294202 sshd-session[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:18:58.299118 systemd-logind[1597]: New session 18 of user core. May 9 00:18:58.310615 systemd[1]: Started session-18.scope - Session 18 of User core. May 9 00:18:58.847779 sshd[4421]: Connection closed by 10.0.0.1 port 59268 May 9 00:18:58.848345 sshd-session[4415]: pam_unix(sshd:session): session closed for user core May 9 00:18:58.856611 systemd[1]: Started sshd@18-10.0.0.125:22-10.0.0.1:59274.service - OpenSSH per-connection server daemon (10.0.0.1:59274). May 9 00:18:58.857313 systemd[1]: sshd@17-10.0.0.125:22-10.0.0.1:59268.service: Deactivated successfully. May 9 00:18:58.860594 systemd[1]: session-18.scope: Deactivated successfully. May 9 00:18:58.860668 systemd-logind[1597]: Session 18 logged out. Waiting for processes to exit. May 9 00:18:58.863037 systemd-logind[1597]: Removed session 18. 
May 9 00:18:58.903971 sshd[4428]: Accepted publickey for core from 10.0.0.1 port 59274 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:18:58.906107 sshd-session[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:18:58.912923 systemd-logind[1597]: New session 19 of user core. May 9 00:18:58.919691 systemd[1]: Started session-19.scope - Session 19 of User core. May 9 00:19:00.453135 sshd[4434]: Connection closed by 10.0.0.1 port 59274 May 9 00:19:00.455477 sshd-session[4428]: pam_unix(sshd:session): session closed for user core May 9 00:19:00.461584 systemd[1]: Started sshd@19-10.0.0.125:22-10.0.0.1:59280.service - OpenSSH per-connection server daemon (10.0.0.1:59280). May 9 00:19:00.462123 systemd[1]: sshd@18-10.0.0.125:22-10.0.0.1:59274.service: Deactivated successfully. May 9 00:19:00.468516 systemd[1]: session-19.scope: Deactivated successfully. May 9 00:19:00.469923 systemd-logind[1597]: Session 19 logged out. Waiting for processes to exit. May 9 00:19:00.472947 systemd-logind[1597]: Removed session 19. May 9 00:19:00.507508 sshd[4452]: Accepted publickey for core from 10.0.0.1 port 59280 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:19:00.510083 sshd-session[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:19:00.515345 systemd-logind[1597]: New session 20 of user core. May 9 00:19:00.525771 systemd[1]: Started session-20.scope - Session 20 of User core. May 9 00:19:00.757969 sshd[4460]: Connection closed by 10.0.0.1 port 59280 May 9 00:19:00.758622 sshd-session[4452]: pam_unix(sshd:session): session closed for user core May 9 00:19:00.768870 systemd[1]: Started sshd@20-10.0.0.125:22-10.0.0.1:59286.service - OpenSSH per-connection server daemon (10.0.0.1:59286). May 9 00:19:00.769477 systemd[1]: sshd@19-10.0.0.125:22-10.0.0.1:59280.service: Deactivated successfully. 
May 9 00:19:00.772639 systemd[1]: session-20.scope: Deactivated successfully. May 9 00:19:00.774255 systemd-logind[1597]: Session 20 logged out. Waiting for processes to exit. May 9 00:19:00.775405 systemd-logind[1597]: Removed session 20. May 9 00:19:00.803094 sshd[4468]: Accepted publickey for core from 10.0.0.1 port 59286 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:19:00.805093 sshd-session[4468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:19:00.809735 systemd-logind[1597]: New session 21 of user core. May 9 00:19:00.818619 systemd[1]: Started session-21.scope - Session 21 of User core. May 9 00:19:00.935553 sshd[4473]: Connection closed by 10.0.0.1 port 59286 May 9 00:19:00.935949 sshd-session[4468]: pam_unix(sshd:session): session closed for user core May 9 00:19:00.939962 systemd[1]: sshd@20-10.0.0.125:22-10.0.0.1:59286.service: Deactivated successfully. May 9 00:19:00.942750 systemd-logind[1597]: Session 21 logged out. Waiting for processes to exit. May 9 00:19:00.942858 systemd[1]: session-21.scope: Deactivated successfully. May 9 00:19:00.943791 systemd-logind[1597]: Removed session 21. May 9 00:19:05.951606 systemd[1]: Started sshd@21-10.0.0.125:22-10.0.0.1:59292.service - OpenSSH per-connection server daemon (10.0.0.1:59292). May 9 00:19:05.985511 sshd[4486]: Accepted publickey for core from 10.0.0.1 port 59292 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:19:05.987409 sshd-session[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:19:05.992210 systemd-logind[1597]: New session 22 of user core. May 9 00:19:05.999575 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 9 00:19:06.118352 sshd[4489]: Connection closed by 10.0.0.1 port 59292 May 9 00:19:06.118769 sshd-session[4486]: pam_unix(sshd:session): session closed for user core May 9 00:19:06.123792 systemd[1]: sshd@21-10.0.0.125:22-10.0.0.1:59292.service: Deactivated successfully. May 9 00:19:06.126905 systemd[1]: session-22.scope: Deactivated successfully. May 9 00:19:06.127932 systemd-logind[1597]: Session 22 logged out. Waiting for processes to exit. May 9 00:19:06.129218 systemd-logind[1597]: Removed session 22. May 9 00:19:10.073119 kubelet[2856]: E0509 00:19:10.073046 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:19:11.128528 systemd[1]: Started sshd@22-10.0.0.125:22-10.0.0.1:44658.service - OpenSSH per-connection server daemon (10.0.0.1:44658). May 9 00:19:11.163357 sshd[4505]: Accepted publickey for core from 10.0.0.1 port 44658 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:19:11.165230 sshd-session[4505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:19:11.169777 systemd-logind[1597]: New session 23 of user core. May 9 00:19:11.181806 systemd[1]: Started session-23.scope - Session 23 of User core. May 9 00:19:11.298848 sshd[4508]: Connection closed by 10.0.0.1 port 44658 May 9 00:19:11.299348 sshd-session[4505]: pam_unix(sshd:session): session closed for user core May 9 00:19:11.304434 systemd[1]: sshd@22-10.0.0.125:22-10.0.0.1:44658.service: Deactivated successfully. May 9 00:19:11.307663 systemd[1]: session-23.scope: Deactivated successfully. May 9 00:19:11.308618 systemd-logind[1597]: Session 23 logged out. Waiting for processes to exit. May 9 00:19:11.309984 systemd-logind[1597]: Removed session 23. 
May 9 00:19:16.074049 kubelet[2856]: E0509 00:19:16.074001 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:19:16.310542 systemd[1]: Started sshd@23-10.0.0.125:22-10.0.0.1:44670.service - OpenSSH per-connection server daemon (10.0.0.1:44670). May 9 00:19:16.344986 sshd[4523]: Accepted publickey for core from 10.0.0.1 port 44670 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:19:16.347272 sshd-session[4523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:19:16.352885 systemd-logind[1597]: New session 24 of user core. May 9 00:19:16.364752 systemd[1]: Started session-24.scope - Session 24 of User core. May 9 00:19:16.483237 sshd[4526]: Connection closed by 10.0.0.1 port 44670 May 9 00:19:16.483630 sshd-session[4523]: pam_unix(sshd:session): session closed for user core May 9 00:19:16.488004 systemd[1]: sshd@23-10.0.0.125:22-10.0.0.1:44670.service: Deactivated successfully. May 9 00:19:16.490515 systemd[1]: session-24.scope: Deactivated successfully. May 9 00:19:16.491352 systemd-logind[1597]: Session 24 logged out. Waiting for processes to exit. May 9 00:19:16.492342 systemd-logind[1597]: Removed session 24. May 9 00:19:21.498518 systemd[1]: Started sshd@24-10.0.0.125:22-10.0.0.1:39076.service - OpenSSH per-connection server daemon (10.0.0.1:39076). May 9 00:19:21.530783 sshd[4539]: Accepted publickey for core from 10.0.0.1 port 39076 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:19:21.532763 sshd-session[4539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:19:21.537278 systemd-logind[1597]: New session 25 of user core. May 9 00:19:21.556554 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 9 00:19:21.685176 sshd[4542]: Connection closed by 10.0.0.1 port 39076 May 9 00:19:21.685620 sshd-session[4539]: pam_unix(sshd:session): session closed for user core May 9 00:19:21.689579 systemd[1]: sshd@24-10.0.0.125:22-10.0.0.1:39076.service: Deactivated successfully. May 9 00:19:21.691787 systemd-logind[1597]: Session 25 logged out. Waiting for processes to exit. May 9 00:19:21.691842 systemd[1]: session-25.scope: Deactivated successfully. May 9 00:19:21.693205 systemd-logind[1597]: Removed session 25. May 9 00:19:23.073817 kubelet[2856]: E0509 00:19:23.073777 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:19:26.696698 systemd[1]: Started sshd@25-10.0.0.125:22-10.0.0.1:39088.service - OpenSSH per-connection server daemon (10.0.0.1:39088). May 9 00:19:26.730489 sshd[4554]: Accepted publickey for core from 10.0.0.1 port 39088 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:19:26.732172 sshd-session[4554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:19:26.736512 systemd-logind[1597]: New session 26 of user core. May 9 00:19:26.746559 systemd[1]: Started session-26.scope - Session 26 of User core. May 9 00:19:26.861061 sshd[4557]: Connection closed by 10.0.0.1 port 39088 May 9 00:19:26.861655 sshd-session[4554]: pam_unix(sshd:session): session closed for user core May 9 00:19:26.870593 systemd[1]: Started sshd@26-10.0.0.125:22-10.0.0.1:49000.service - OpenSSH per-connection server daemon (10.0.0.1:49000). May 9 00:19:26.871194 systemd[1]: sshd@25-10.0.0.125:22-10.0.0.1:39088.service: Deactivated successfully. May 9 00:19:26.873446 systemd[1]: session-26.scope: Deactivated successfully. May 9 00:19:26.875411 systemd-logind[1597]: Session 26 logged out. Waiting for processes to exit. May 9 00:19:26.876489 systemd-logind[1597]: Removed session 26. 
May 9 00:19:26.912381 sshd[4568]: Accepted publickey for core from 10.0.0.1 port 49000 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:19:26.914383 sshd-session[4568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:19:26.918676 systemd-logind[1597]: New session 27 of user core. May 9 00:19:26.928617 systemd[1]: Started session-27.scope - Session 27 of User core. May 9 00:19:28.319842 containerd[1621]: time="2025-05-09T00:19:28.319767625Z" level=info msg="StopContainer for \"d7e51430aeec3590d77310294d9297d8c61eca27636d0bbe601153750b1988a7\" with timeout 30 (s)" May 9 00:19:28.322019 containerd[1621]: time="2025-05-09T00:19:28.321991688Z" level=info msg="Stop container \"d7e51430aeec3590d77310294d9297d8c61eca27636d0bbe601153750b1988a7\" with signal terminated" May 9 00:19:28.365424 containerd[1621]: time="2025-05-09T00:19:28.365354123Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 00:19:28.365754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7e51430aeec3590d77310294d9297d8c61eca27636d0bbe601153750b1988a7-rootfs.mount: Deactivated successfully. 
May 9 00:19:28.369721 containerd[1621]: time="2025-05-09T00:19:28.369663194Z" level=info msg="StopContainer for \"3601a2898790acf0b6d6a06cd399e7ad338e7f18e4bac729f0bf6c133789c78f\" with timeout 2 (s)" May 9 00:19:28.370001 containerd[1621]: time="2025-05-09T00:19:28.369973914Z" level=info msg="Stop container \"3601a2898790acf0b6d6a06cd399e7ad338e7f18e4bac729f0bf6c133789c78f\" with signal terminated" May 9 00:19:28.379257 systemd-networkd[1247]: lxc_health: Link DOWN May 9 00:19:28.379265 systemd-networkd[1247]: lxc_health: Lost carrier May 9 00:19:28.384792 containerd[1621]: time="2025-05-09T00:19:28.384714276Z" level=info msg="shim disconnected" id=d7e51430aeec3590d77310294d9297d8c61eca27636d0bbe601153750b1988a7 namespace=k8s.io May 9 00:19:28.384792 containerd[1621]: time="2025-05-09T00:19:28.384788787Z" level=warning msg="cleaning up after shim disconnected" id=d7e51430aeec3590d77310294d9297d8c61eca27636d0bbe601153750b1988a7 namespace=k8s.io May 9 00:19:28.384792 containerd[1621]: time="2025-05-09T00:19:28.384798616Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:19:28.408485 containerd[1621]: time="2025-05-09T00:19:28.408412653Z" level=info msg="StopContainer for \"d7e51430aeec3590d77310294d9297d8c61eca27636d0bbe601153750b1988a7\" returns successfully" May 9 00:19:28.414188 containerd[1621]: time="2025-05-09T00:19:28.413966656Z" level=info msg="StopPodSandbox for \"441ee725f709b9d936ca2db9c5ab74864cea258a0dc446def95359e98cd8a744\"" May 9 00:19:28.414188 containerd[1621]: time="2025-05-09T00:19:28.414032962Z" level=info msg="Container to stop \"d7e51430aeec3590d77310294d9297d8c61eca27636d0bbe601153750b1988a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:19:28.416658 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-441ee725f709b9d936ca2db9c5ab74864cea258a0dc446def95359e98cd8a744-shm.mount: Deactivated successfully. 
May 9 00:19:28.431825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3601a2898790acf0b6d6a06cd399e7ad338e7f18e4bac729f0bf6c133789c78f-rootfs.mount: Deactivated successfully. May 9 00:19:28.440972 containerd[1621]: time="2025-05-09T00:19:28.440810575Z" level=info msg="shim disconnected" id=3601a2898790acf0b6d6a06cd399e7ad338e7f18e4bac729f0bf6c133789c78f namespace=k8s.io May 9 00:19:28.440972 containerd[1621]: time="2025-05-09T00:19:28.440882602Z" level=warning msg="cleaning up after shim disconnected" id=3601a2898790acf0b6d6a06cd399e7ad338e7f18e4bac729f0bf6c133789c78f namespace=k8s.io May 9 00:19:28.440972 containerd[1621]: time="2025-05-09T00:19:28.440905044Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:19:28.454729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-441ee725f709b9d936ca2db9c5ab74864cea258a0dc446def95359e98cd8a744-rootfs.mount: Deactivated successfully. May 9 00:19:28.458027 containerd[1621]: time="2025-05-09T00:19:28.457870771Z" level=warning msg="cleanup warnings time=\"2025-05-09T00:19:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 9 00:19:28.460107 containerd[1621]: time="2025-05-09T00:19:28.459838165Z" level=info msg="shim disconnected" id=441ee725f709b9d936ca2db9c5ab74864cea258a0dc446def95359e98cd8a744 namespace=k8s.io May 9 00:19:28.460107 containerd[1621]: time="2025-05-09T00:19:28.459916484Z" level=warning msg="cleaning up after shim disconnected" id=441ee725f709b9d936ca2db9c5ab74864cea258a0dc446def95359e98cd8a744 namespace=k8s.io May 9 00:19:28.460107 containerd[1621]: time="2025-05-09T00:19:28.459931292Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:19:28.464500 containerd[1621]: time="2025-05-09T00:19:28.464442155Z" level=info msg="StopContainer for \"3601a2898790acf0b6d6a06cd399e7ad338e7f18e4bac729f0bf6c133789c78f\" returns successfully" 
May 9 00:19:28.465156 containerd[1621]: time="2025-05-09T00:19:28.465122177Z" level=info msg="StopPodSandbox for \"ff7fd6d7d69ed240ab96adce585197318a3563096c0dcbf7feb54cd6fd80c3c1\"" May 9 00:19:28.465226 containerd[1621]: time="2025-05-09T00:19:28.465172763Z" level=info msg="Container to stop \"5401644d2ab64e3d694deda1d0747864e689f5aa256ffe334ce7749f96674f3c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:19:28.465226 containerd[1621]: time="2025-05-09T00:19:28.465218800Z" level=info msg="Container to stop \"d4ce68bf2adebdbb23a3cf677a67c326ed588396bc1d324cd35f4958a90799c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:19:28.465327 containerd[1621]: time="2025-05-09T00:19:28.465234169Z" level=info msg="Container to stop \"a9cd46cd47d435cebb387c8d85b5d7ef4548703bc8cce0d670dc2ca5a9b96786\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:19:28.465327 containerd[1621]: time="2025-05-09T00:19:28.465249068Z" level=info msg="Container to stop \"f1e2ec99bfb00fc4785b059e178d583ce12e9ebf3aba5ac00d77f1988649635d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:19:28.465327 containerd[1621]: time="2025-05-09T00:19:28.465266160Z" level=info msg="Container to stop \"3601a2898790acf0b6d6a06cd399e7ad338e7f18e4bac729f0bf6c133789c78f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:19:28.479200 containerd[1621]: time="2025-05-09T00:19:28.478877638Z" level=warning msg="cleanup warnings time=\"2025-05-09T00:19:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 9 00:19:28.481019 containerd[1621]: time="2025-05-09T00:19:28.480992513Z" level=info msg="TearDown network for sandbox \"441ee725f709b9d936ca2db9c5ab74864cea258a0dc446def95359e98cd8a744\" successfully" May 9 00:19:28.481019 
containerd[1621]: time="2025-05-09T00:19:28.481017461Z" level=info msg="StopPodSandbox for \"441ee725f709b9d936ca2db9c5ab74864cea258a0dc446def95359e98cd8a744\" returns successfully" May 9 00:19:28.502581 containerd[1621]: time="2025-05-09T00:19:28.502492999Z" level=info msg="shim disconnected" id=ff7fd6d7d69ed240ab96adce585197318a3563096c0dcbf7feb54cd6fd80c3c1 namespace=k8s.io May 9 00:19:28.502581 containerd[1621]: time="2025-05-09T00:19:28.502569534Z" level=warning msg="cleaning up after shim disconnected" id=ff7fd6d7d69ed240ab96adce585197318a3563096c0dcbf7feb54cd6fd80c3c1 namespace=k8s.io May 9 00:19:28.502581 containerd[1621]: time="2025-05-09T00:19:28.502586266Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:19:28.518685 containerd[1621]: time="2025-05-09T00:19:28.518603893Z" level=info msg="TearDown network for sandbox \"ff7fd6d7d69ed240ab96adce585197318a3563096c0dcbf7feb54cd6fd80c3c1\" successfully" May 9 00:19:28.518685 containerd[1621]: time="2025-05-09T00:19:28.518659328Z" level=info msg="StopPodSandbox for \"ff7fd6d7d69ed240ab96adce585197318a3563096c0dcbf7feb54cd6fd80c3c1\" returns successfully" May 9 00:19:28.657670 kubelet[2856]: I0509 00:19:28.657472 2856 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-cilium-config-path\") pod \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " May 9 00:19:28.657670 kubelet[2856]: I0509 00:19:28.657532 2856 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-etc-cni-netd\") pod \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " May 9 00:19:28.657670 kubelet[2856]: I0509 00:19:28.657563 2856 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-cilium-run\") pod \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " May 9 00:19:28.658408 kubelet[2856]: I0509 00:19:28.657667 2856 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "400348e2-f9bf-42cc-81a5-ec19aa5c53f7" (UID: "400348e2-f9bf-42cc-81a5-ec19aa5c53f7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:19:28.658408 kubelet[2856]: I0509 00:19:28.657667 2856 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "400348e2-f9bf-42cc-81a5-ec19aa5c53f7" (UID: "400348e2-f9bf-42cc-81a5-ec19aa5c53f7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:19:28.658408 kubelet[2856]: I0509 00:19:28.657726 2856 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-cni-path\") pod \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " May 9 00:19:28.658408 kubelet[2856]: I0509 00:19:28.657753 2856 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-cni-path" (OuterVolumeSpecName: "cni-path") pod "400348e2-f9bf-42cc-81a5-ec19aa5c53f7" (UID: "400348e2-f9bf-42cc-81a5-ec19aa5c53f7"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:19:28.658408 kubelet[2856]: I0509 00:19:28.657754 2856 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-hostproc\") pod \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " May 9 00:19:28.658635 kubelet[2856]: I0509 00:19:28.657791 2856 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-host-proc-sys-kernel\") pod \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " May 9 00:19:28.658635 kubelet[2856]: I0509 00:19:28.657794 2856 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-hostproc" (OuterVolumeSpecName: "hostproc") pod "400348e2-f9bf-42cc-81a5-ec19aa5c53f7" (UID: "400348e2-f9bf-42cc-81a5-ec19aa5c53f7"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:19:28.658635 kubelet[2856]: I0509 00:19:28.657826 2856 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-host-proc-sys-net\") pod \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " May 9 00:19:28.658635 kubelet[2856]: I0509 00:19:28.657851 2856 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1c5feda-33a4-43b1-803c-af7e33beef5d-cilium-config-path\") pod \"f1c5feda-33a4-43b1-803c-af7e33beef5d\" (UID: \"f1c5feda-33a4-43b1-803c-af7e33beef5d\") " May 9 00:19:28.658635 kubelet[2856]: I0509 00:19:28.657866 2856 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "400348e2-f9bf-42cc-81a5-ec19aa5c53f7" (UID: "400348e2-f9bf-42cc-81a5-ec19aa5c53f7"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:19:28.659015 kubelet[2856]: I0509 00:19:28.657876 2856 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-clustermesh-secrets\") pod \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " May 9 00:19:28.659015 kubelet[2856]: I0509 00:19:28.657926 2856 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-xtables-lock\") pod \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " May 9 00:19:28.659015 kubelet[2856]: I0509 00:19:28.657946 2856 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-bpf-maps\") pod \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " May 9 00:19:28.659015 kubelet[2856]: I0509 00:19:28.657968 2856 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5x7h\" (UniqueName: \"kubernetes.io/projected/f1c5feda-33a4-43b1-803c-af7e33beef5d-kube-api-access-x5x7h\") pod \"f1c5feda-33a4-43b1-803c-af7e33beef5d\" (UID: \"f1c5feda-33a4-43b1-803c-af7e33beef5d\") " May 9 00:19:28.659015 kubelet[2856]: I0509 00:19:28.657988 2856 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-lib-modules\") pod \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " May 9 00:19:28.659015 kubelet[2856]: I0509 00:19:28.658011 2856 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp4nt\" (UniqueName: 
\"kubernetes.io/projected/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-kube-api-access-fp4nt\") pod \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " May 9 00:19:28.659210 kubelet[2856]: I0509 00:19:28.658033 2856 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-hubble-tls\") pod \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " May 9 00:19:28.659210 kubelet[2856]: I0509 00:19:28.658053 2856 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-cilium-cgroup\") pod \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\" (UID: \"400348e2-f9bf-42cc-81a5-ec19aa5c53f7\") " May 9 00:19:28.659210 kubelet[2856]: I0509 00:19:28.658089 2856 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 9 00:19:28.659210 kubelet[2856]: I0509 00:19:28.658101 2856 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-cilium-run\") on node \"localhost\" DevicePath \"\"" May 9 00:19:28.659210 kubelet[2856]: I0509 00:19:28.658112 2856 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-cni-path\") on node \"localhost\" DevicePath \"\"" May 9 00:19:28.659210 kubelet[2856]: I0509 00:19:28.658123 2856 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-hostproc\") on node \"localhost\" DevicePath \"\"" May 9 00:19:28.659210 kubelet[2856]: I0509 00:19:28.658135 2856 
reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 9 00:19:28.660753 kubelet[2856]: I0509 00:19:28.657905 2856 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "400348e2-f9bf-42cc-81a5-ec19aa5c53f7" (UID: "400348e2-f9bf-42cc-81a5-ec19aa5c53f7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:19:28.660753 kubelet[2856]: I0509 00:19:28.658165 2856 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "400348e2-f9bf-42cc-81a5-ec19aa5c53f7" (UID: "400348e2-f9bf-42cc-81a5-ec19aa5c53f7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:19:28.660753 kubelet[2856]: I0509 00:19:28.659720 2856 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "400348e2-f9bf-42cc-81a5-ec19aa5c53f7" (UID: "400348e2-f9bf-42cc-81a5-ec19aa5c53f7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:19:28.660753 kubelet[2856]: I0509 00:19:28.660498 2856 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "400348e2-f9bf-42cc-81a5-ec19aa5c53f7" (UID: "400348e2-f9bf-42cc-81a5-ec19aa5c53f7"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:19:28.660753 kubelet[2856]: I0509 00:19:28.660685 2856 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "400348e2-f9bf-42cc-81a5-ec19aa5c53f7" (UID: "400348e2-f9bf-42cc-81a5-ec19aa5c53f7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:19:28.663949 kubelet[2856]: I0509 00:19:28.663870 2856 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1c5feda-33a4-43b1-803c-af7e33beef5d-kube-api-access-x5x7h" (OuterVolumeSpecName: "kube-api-access-x5x7h") pod "f1c5feda-33a4-43b1-803c-af7e33beef5d" (UID: "f1c5feda-33a4-43b1-803c-af7e33beef5d"). InnerVolumeSpecName "kube-api-access-x5x7h". PluginName "kubernetes.io/projected", VolumeGidValue "" May 9 00:19:28.664019 kubelet[2856]: I0509 00:19:28.663993 2856 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "400348e2-f9bf-42cc-81a5-ec19aa5c53f7" (UID: "400348e2-f9bf-42cc-81a5-ec19aa5c53f7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 9 00:19:28.665599 kubelet[2856]: I0509 00:19:28.665398 2856 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "400348e2-f9bf-42cc-81a5-ec19aa5c53f7" (UID: "400348e2-f9bf-42cc-81a5-ec19aa5c53f7"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 9 00:19:28.665855 kubelet[2856]: I0509 00:19:28.665822 2856 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-kube-api-access-fp4nt" (OuterVolumeSpecName: "kube-api-access-fp4nt") pod "400348e2-f9bf-42cc-81a5-ec19aa5c53f7" (UID: "400348e2-f9bf-42cc-81a5-ec19aa5c53f7"). InnerVolumeSpecName "kube-api-access-fp4nt". PluginName "kubernetes.io/projected", VolumeGidValue "" May 9 00:19:28.666326 kubelet[2856]: I0509 00:19:28.666258 2856 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1c5feda-33a4-43b1-803c-af7e33beef5d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f1c5feda-33a4-43b1-803c-af7e33beef5d" (UID: "f1c5feda-33a4-43b1-803c-af7e33beef5d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 9 00:19:28.666465 kubelet[2856]: I0509 00:19:28.666391 2856 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "400348e2-f9bf-42cc-81a5-ec19aa5c53f7" (UID: "400348e2-f9bf-42cc-81a5-ec19aa5c53f7"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 9 00:19:28.758864 kubelet[2856]: I0509 00:19:28.758804 2856 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 9 00:19:28.758864 kubelet[2856]: I0509 00:19:28.758845 2856 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 9 00:19:28.758864 kubelet[2856]: I0509 00:19:28.758854 2856 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 9 00:19:28.758864 kubelet[2856]: I0509 00:19:28.758863 2856 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1c5feda-33a4-43b1-803c-af7e33beef5d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 9 00:19:28.758864 kubelet[2856]: I0509 00:19:28.758871 2856 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 9 00:19:28.758864 kubelet[2856]: I0509 00:19:28.758879 2856 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-x5x7h\" (UniqueName: \"kubernetes.io/projected/f1c5feda-33a4-43b1-803c-af7e33beef5d-kube-api-access-x5x7h\") on node \"localhost\" DevicePath \"\"" May 9 00:19:28.759229 kubelet[2856]: I0509 00:19:28.758907 2856 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 9 00:19:28.759229 kubelet[2856]: 
I0509 00:19:28.758915 2856 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-lib-modules\") on node \"localhost\" DevicePath \"\"" May 9 00:19:28.759229 kubelet[2856]: I0509 00:19:28.758924 2856 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-fp4nt\" (UniqueName: \"kubernetes.io/projected/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-kube-api-access-fp4nt\") on node \"localhost\" DevicePath \"\"" May 9 00:19:28.759229 kubelet[2856]: I0509 00:19:28.758931 2856 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 9 00:19:28.759229 kubelet[2856]: I0509 00:19:28.758939 2856 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/400348e2-f9bf-42cc-81a5-ec19aa5c53f7-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 9 00:19:29.338002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff7fd6d7d69ed240ab96adce585197318a3563096c0dcbf7feb54cd6fd80c3c1-rootfs.mount: Deactivated successfully. May 9 00:19:29.338237 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff7fd6d7d69ed240ab96adce585197318a3563096c0dcbf7feb54cd6fd80c3c1-shm.mount: Deactivated successfully. May 9 00:19:29.338459 systemd[1]: var-lib-kubelet-pods-f1c5feda\x2d33a4\x2d43b1\x2d803c\x2daf7e33beef5d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx5x7h.mount: Deactivated successfully. May 9 00:19:29.338645 systemd[1]: var-lib-kubelet-pods-400348e2\x2df9bf\x2d42cc\x2d81a5\x2dec19aa5c53f7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfp4nt.mount: Deactivated successfully. 
May 9 00:19:29.338816 systemd[1]: var-lib-kubelet-pods-400348e2\x2df9bf\x2d42cc\x2d81a5\x2dec19aa5c53f7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 9 00:19:29.338989 systemd[1]: var-lib-kubelet-pods-400348e2\x2df9bf\x2d42cc\x2d81a5\x2dec19aa5c53f7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 9 00:19:29.404963 kubelet[2856]: I0509 00:19:29.404736 2856 scope.go:117] "RemoveContainer" containerID="d7e51430aeec3590d77310294d9297d8c61eca27636d0bbe601153750b1988a7" May 9 00:19:29.411864 containerd[1621]: time="2025-05-09T00:19:29.411815558Z" level=info msg="RemoveContainer for \"d7e51430aeec3590d77310294d9297d8c61eca27636d0bbe601153750b1988a7\"" May 9 00:19:29.706072 containerd[1621]: time="2025-05-09T00:19:29.706007015Z" level=info msg="RemoveContainer for \"d7e51430aeec3590d77310294d9297d8c61eca27636d0bbe601153750b1988a7\" returns successfully" May 9 00:19:29.706536 kubelet[2856]: I0509 00:19:29.706349 2856 scope.go:117] "RemoveContainer" containerID="d7e51430aeec3590d77310294d9297d8c61eca27636d0bbe601153750b1988a7" May 9 00:19:29.708756 containerd[1621]: time="2025-05-09T00:19:29.708521888Z" level=error msg="ContainerStatus for \"d7e51430aeec3590d77310294d9297d8c61eca27636d0bbe601153750b1988a7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7e51430aeec3590d77310294d9297d8c61eca27636d0bbe601153750b1988a7\": not found" May 9 00:19:29.719484 kubelet[2856]: E0509 00:19:29.719419 2856 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7e51430aeec3590d77310294d9297d8c61eca27636d0bbe601153750b1988a7\": not found" containerID="d7e51430aeec3590d77310294d9297d8c61eca27636d0bbe601153750b1988a7" May 9 00:19:29.719631 kubelet[2856]: I0509 00:19:29.719471 2856 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"d7e51430aeec3590d77310294d9297d8c61eca27636d0bbe601153750b1988a7"} err="failed to get container status \"d7e51430aeec3590d77310294d9297d8c61eca27636d0bbe601153750b1988a7\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7e51430aeec3590d77310294d9297d8c61eca27636d0bbe601153750b1988a7\": not found" May 9 00:19:29.719631 kubelet[2856]: I0509 00:19:29.719543 2856 scope.go:117] "RemoveContainer" containerID="3601a2898790acf0b6d6a06cd399e7ad338e7f18e4bac729f0bf6c133789c78f" May 9 00:19:29.721308 containerd[1621]: time="2025-05-09T00:19:29.721009627Z" level=info msg="RemoveContainer for \"3601a2898790acf0b6d6a06cd399e7ad338e7f18e4bac729f0bf6c133789c78f\"" May 9 00:19:29.768847 containerd[1621]: time="2025-05-09T00:19:29.768785417Z" level=info msg="RemoveContainer for \"3601a2898790acf0b6d6a06cd399e7ad338e7f18e4bac729f0bf6c133789c78f\" returns successfully" May 9 00:19:29.769951 kubelet[2856]: I0509 00:19:29.769897 2856 scope.go:117] "RemoveContainer" containerID="f1e2ec99bfb00fc4785b059e178d583ce12e9ebf3aba5ac00d77f1988649635d" May 9 00:19:29.771236 containerd[1621]: time="2025-05-09T00:19:29.771189319Z" level=info msg="RemoveContainer for \"f1e2ec99bfb00fc4785b059e178d583ce12e9ebf3aba5ac00d77f1988649635d\"" May 9 00:19:29.889467 containerd[1621]: time="2025-05-09T00:19:29.889405394Z" level=info msg="RemoveContainer for \"f1e2ec99bfb00fc4785b059e178d583ce12e9ebf3aba5ac00d77f1988649635d\" returns successfully" May 9 00:19:29.889768 kubelet[2856]: I0509 00:19:29.889704 2856 scope.go:117] "RemoveContainer" containerID="a9cd46cd47d435cebb387c8d85b5d7ef4548703bc8cce0d670dc2ca5a9b96786" May 9 00:19:29.890791 containerd[1621]: time="2025-05-09T00:19:29.890748824Z" level=info msg="RemoveContainer for \"a9cd46cd47d435cebb387c8d85b5d7ef4548703bc8cce0d670dc2ca5a9b96786\"" May 9 00:19:30.023383 containerd[1621]: time="2025-05-09T00:19:30.023229278Z" level=info msg="RemoveContainer for 
\"a9cd46cd47d435cebb387c8d85b5d7ef4548703bc8cce0d670dc2ca5a9b96786\" returns successfully" May 9 00:19:30.024411 kubelet[2856]: I0509 00:19:30.023550 2856 scope.go:117] "RemoveContainer" containerID="5401644d2ab64e3d694deda1d0747864e689f5aa256ffe334ce7749f96674f3c" May 9 00:19:30.025573 containerd[1621]: time="2025-05-09T00:19:30.025531637Z" level=info msg="RemoveContainer for \"5401644d2ab64e3d694deda1d0747864e689f5aa256ffe334ce7749f96674f3c\"" May 9 00:19:30.042539 containerd[1621]: time="2025-05-09T00:19:30.042453945Z" level=info msg="RemoveContainer for \"5401644d2ab64e3d694deda1d0747864e689f5aa256ffe334ce7749f96674f3c\" returns successfully" May 9 00:19:30.042798 kubelet[2856]: I0509 00:19:30.042755 2856 scope.go:117] "RemoveContainer" containerID="d4ce68bf2adebdbb23a3cf677a67c326ed588396bc1d324cd35f4958a90799c4" May 9 00:19:30.044270 containerd[1621]: time="2025-05-09T00:19:30.044236226Z" level=info msg="RemoveContainer for \"d4ce68bf2adebdbb23a3cf677a67c326ed588396bc1d324cd35f4958a90799c4\"" May 9 00:19:30.049094 containerd[1621]: time="2025-05-09T00:19:30.049042646Z" level=info msg="RemoveContainer for \"d4ce68bf2adebdbb23a3cf677a67c326ed588396bc1d324cd35f4958a90799c4\" returns successfully" May 9 00:19:30.049347 kubelet[2856]: I0509 00:19:30.049313 2856 scope.go:117] "RemoveContainer" containerID="3601a2898790acf0b6d6a06cd399e7ad338e7f18e4bac729f0bf6c133789c78f" May 9 00:19:30.049572 containerd[1621]: time="2025-05-09T00:19:30.049532425Z" level=error msg="ContainerStatus for \"3601a2898790acf0b6d6a06cd399e7ad338e7f18e4bac729f0bf6c133789c78f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3601a2898790acf0b6d6a06cd399e7ad338e7f18e4bac729f0bf6c133789c78f\": not found" May 9 00:19:30.049719 kubelet[2856]: E0509 00:19:30.049695 2856 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"3601a2898790acf0b6d6a06cd399e7ad338e7f18e4bac729f0bf6c133789c78f\": not found" containerID="3601a2898790acf0b6d6a06cd399e7ad338e7f18e4bac729f0bf6c133789c78f" May 9 00:19:30.049764 kubelet[2856]: I0509 00:19:30.049726 2856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3601a2898790acf0b6d6a06cd399e7ad338e7f18e4bac729f0bf6c133789c78f"} err="failed to get container status \"3601a2898790acf0b6d6a06cd399e7ad338e7f18e4bac729f0bf6c133789c78f\": rpc error: code = NotFound desc = an error occurred when try to find container \"3601a2898790acf0b6d6a06cd399e7ad338e7f18e4bac729f0bf6c133789c78f\": not found" May 9 00:19:30.049764 kubelet[2856]: I0509 00:19:30.049754 2856 scope.go:117] "RemoveContainer" containerID="f1e2ec99bfb00fc4785b059e178d583ce12e9ebf3aba5ac00d77f1988649635d" May 9 00:19:30.049961 containerd[1621]: time="2025-05-09T00:19:30.049929429Z" level=error msg="ContainerStatus for \"f1e2ec99bfb00fc4785b059e178d583ce12e9ebf3aba5ac00d77f1988649635d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1e2ec99bfb00fc4785b059e178d583ce12e9ebf3aba5ac00d77f1988649635d\": not found" May 9 00:19:30.050078 kubelet[2856]: E0509 00:19:30.050052 2856 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1e2ec99bfb00fc4785b059e178d583ce12e9ebf3aba5ac00d77f1988649635d\": not found" containerID="f1e2ec99bfb00fc4785b059e178d583ce12e9ebf3aba5ac00d77f1988649635d" May 9 00:19:30.050122 kubelet[2856]: I0509 00:19:30.050087 2856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1e2ec99bfb00fc4785b059e178d583ce12e9ebf3aba5ac00d77f1988649635d"} err="failed to get container status \"f1e2ec99bfb00fc4785b059e178d583ce12e9ebf3aba5ac00d77f1988649635d\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"f1e2ec99bfb00fc4785b059e178d583ce12e9ebf3aba5ac00d77f1988649635d\": not found" May 9 00:19:30.050122 kubelet[2856]: I0509 00:19:30.050116 2856 scope.go:117] "RemoveContainer" containerID="a9cd46cd47d435cebb387c8d85b5d7ef4548703bc8cce0d670dc2ca5a9b96786" May 9 00:19:30.050326 containerd[1621]: time="2025-05-09T00:19:30.050294191Z" level=error msg="ContainerStatus for \"a9cd46cd47d435cebb387c8d85b5d7ef4548703bc8cce0d670dc2ca5a9b96786\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a9cd46cd47d435cebb387c8d85b5d7ef4548703bc8cce0d670dc2ca5a9b96786\": not found" May 9 00:19:30.050420 kubelet[2856]: E0509 00:19:30.050394 2856 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a9cd46cd47d435cebb387c8d85b5d7ef4548703bc8cce0d670dc2ca5a9b96786\": not found" containerID="a9cd46cd47d435cebb387c8d85b5d7ef4548703bc8cce0d670dc2ca5a9b96786" May 9 00:19:30.050491 kubelet[2856]: I0509 00:19:30.050424 2856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a9cd46cd47d435cebb387c8d85b5d7ef4548703bc8cce0d670dc2ca5a9b96786"} err="failed to get container status \"a9cd46cd47d435cebb387c8d85b5d7ef4548703bc8cce0d670dc2ca5a9b96786\": rpc error: code = NotFound desc = an error occurred when try to find container \"a9cd46cd47d435cebb387c8d85b5d7ef4548703bc8cce0d670dc2ca5a9b96786\": not found" May 9 00:19:30.050491 kubelet[2856]: I0509 00:19:30.050442 2856 scope.go:117] "RemoveContainer" containerID="5401644d2ab64e3d694deda1d0747864e689f5aa256ffe334ce7749f96674f3c" May 9 00:19:30.050625 containerd[1621]: time="2025-05-09T00:19:30.050592476Z" level=error msg="ContainerStatus for \"5401644d2ab64e3d694deda1d0747864e689f5aa256ffe334ce7749f96674f3c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"5401644d2ab64e3d694deda1d0747864e689f5aa256ffe334ce7749f96674f3c\": not found" May 9 00:19:30.050736 kubelet[2856]: E0509 00:19:30.050713 2856 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5401644d2ab64e3d694deda1d0747864e689f5aa256ffe334ce7749f96674f3c\": not found" containerID="5401644d2ab64e3d694deda1d0747864e689f5aa256ffe334ce7749f96674f3c" May 9 00:19:30.050777 kubelet[2856]: I0509 00:19:30.050739 2856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5401644d2ab64e3d694deda1d0747864e689f5aa256ffe334ce7749f96674f3c"} err="failed to get container status \"5401644d2ab64e3d694deda1d0747864e689f5aa256ffe334ce7749f96674f3c\": rpc error: code = NotFound desc = an error occurred when try to find container \"5401644d2ab64e3d694deda1d0747864e689f5aa256ffe334ce7749f96674f3c\": not found" May 9 00:19:30.050777 kubelet[2856]: I0509 00:19:30.050755 2856 scope.go:117] "RemoveContainer" containerID="d4ce68bf2adebdbb23a3cf677a67c326ed588396bc1d324cd35f4958a90799c4" May 9 00:19:30.050976 containerd[1621]: time="2025-05-09T00:19:30.050945706Z" level=error msg="ContainerStatus for \"d4ce68bf2adebdbb23a3cf677a67c326ed588396bc1d324cd35f4958a90799c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d4ce68bf2adebdbb23a3cf677a67c326ed588396bc1d324cd35f4958a90799c4\": not found" May 9 00:19:30.051102 kubelet[2856]: E0509 00:19:30.051079 2856 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d4ce68bf2adebdbb23a3cf677a67c326ed588396bc1d324cd35f4958a90799c4\": not found" containerID="d4ce68bf2adebdbb23a3cf677a67c326ed588396bc1d324cd35f4958a90799c4" May 9 00:19:30.051156 kubelet[2856]: I0509 00:19:30.051105 2856 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"d4ce68bf2adebdbb23a3cf677a67c326ed588396bc1d324cd35f4958a90799c4"} err="failed to get container status \"d4ce68bf2adebdbb23a3cf677a67c326ed588396bc1d324cd35f4958a90799c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"d4ce68bf2adebdbb23a3cf677a67c326ed588396bc1d324cd35f4958a90799c4\": not found" May 9 00:19:30.075646 kubelet[2856]: I0509 00:19:30.075597 2856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="400348e2-f9bf-42cc-81a5-ec19aa5c53f7" path="/var/lib/kubelet/pods/400348e2-f9bf-42cc-81a5-ec19aa5c53f7/volumes" May 9 00:19:30.076706 kubelet[2856]: I0509 00:19:30.076662 2856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1c5feda-33a4-43b1-803c-af7e33beef5d" path="/var/lib/kubelet/pods/f1c5feda-33a4-43b1-803c-af7e33beef5d/volumes" May 9 00:19:30.303069 sshd[4573]: Connection closed by 10.0.0.1 port 49000 May 9 00:19:30.303465 sshd-session[4568]: pam_unix(sshd:session): session closed for user core May 9 00:19:30.311903 systemd[1]: Started sshd@27-10.0.0.125:22-10.0.0.1:49016.service - OpenSSH per-connection server daemon (10.0.0.1:49016). May 9 00:19:30.312670 systemd[1]: sshd@26-10.0.0.125:22-10.0.0.1:49000.service: Deactivated successfully. May 9 00:19:30.318768 systemd[1]: session-27.scope: Deactivated successfully. May 9 00:19:30.319752 systemd-logind[1597]: Session 27 logged out. Waiting for processes to exit. May 9 00:19:30.321531 systemd-logind[1597]: Removed session 27. May 9 00:19:30.355055 sshd[4736]: Accepted publickey for core from 10.0.0.1 port 49016 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:19:30.356907 sshd-session[4736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:19:30.362982 systemd-logind[1597]: New session 28 of user core. May 9 00:19:30.370859 systemd[1]: Started session-28.scope - Session 28 of User core. 
May 9 00:19:30.929349 sshd[4742]: Connection closed by 10.0.0.1 port 49016 May 9 00:19:30.930527 sshd-session[4736]: pam_unix(sshd:session): session closed for user core May 9 00:19:30.941431 systemd[1]: Started sshd@28-10.0.0.125:22-10.0.0.1:49022.service - OpenSSH per-connection server daemon (10.0.0.1:49022). May 9 00:19:30.943156 kubelet[2856]: I0509 00:19:30.942934 2856 topology_manager.go:215] "Topology Admit Handler" podUID="2e9b7b2f-a983-4d11-8245-07a874712979" podNamespace="kube-system" podName="cilium-q5cjz" May 9 00:19:30.943156 kubelet[2856]: E0509 00:19:30.943008 2856 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="400348e2-f9bf-42cc-81a5-ec19aa5c53f7" containerName="apply-sysctl-overwrites" May 9 00:19:30.943156 kubelet[2856]: E0509 00:19:30.943019 2856 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f1c5feda-33a4-43b1-803c-af7e33beef5d" containerName="cilium-operator" May 9 00:19:30.943156 kubelet[2856]: E0509 00:19:30.943028 2856 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="400348e2-f9bf-42cc-81a5-ec19aa5c53f7" containerName="cilium-agent" May 9 00:19:30.943156 kubelet[2856]: E0509 00:19:30.943036 2856 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="400348e2-f9bf-42cc-81a5-ec19aa5c53f7" containerName="mount-cgroup" May 9 00:19:30.943156 kubelet[2856]: E0509 00:19:30.943045 2856 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="400348e2-f9bf-42cc-81a5-ec19aa5c53f7" containerName="mount-bpf-fs" May 9 00:19:30.943156 kubelet[2856]: E0509 00:19:30.943055 2856 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="400348e2-f9bf-42cc-81a5-ec19aa5c53f7" containerName="clean-cilium-state" May 9 00:19:30.943156 kubelet[2856]: I0509 00:19:30.943088 2856 memory_manager.go:354] "RemoveStaleState removing state" podUID="400348e2-f9bf-42cc-81a5-ec19aa5c53f7" containerName="cilium-agent" May 9 00:19:30.943156 kubelet[2856]: I0509 00:19:30.943096 2856 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c5feda-33a4-43b1-803c-af7e33beef5d" containerName="cilium-operator" May 9 00:19:30.944082 systemd[1]: sshd@27-10.0.0.125:22-10.0.0.1:49016.service: Deactivated successfully. May 9 00:19:30.955457 systemd[1]: session-28.scope: Deactivated successfully. May 9 00:19:30.969565 systemd-logind[1597]: Session 28 logged out. Waiting for processes to exit. May 9 00:19:30.971676 systemd-logind[1597]: Removed session 28. May 9 00:19:31.024759 sshd[4751]: Accepted publickey for core from 10.0.0.1 port 49022 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:19:31.026950 sshd-session[4751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:19:31.033164 systemd-logind[1597]: New session 29 of user core. May 9 00:19:31.041062 systemd[1]: Started session-29.scope - Session 29 of User core. May 9 00:19:31.073364 kubelet[2856]: E0509 00:19:31.073319 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:19:31.076962 kubelet[2856]: I0509 00:19:31.076855 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2e9b7b2f-a983-4d11-8245-07a874712979-hostproc\") pod \"cilium-q5cjz\" (UID: \"2e9b7b2f-a983-4d11-8245-07a874712979\") " pod="kube-system/cilium-q5cjz" May 9 00:19:31.076962 kubelet[2856]: I0509 00:19:31.076923 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e9b7b2f-a983-4d11-8245-07a874712979-xtables-lock\") pod \"cilium-q5cjz\" (UID: \"2e9b7b2f-a983-4d11-8245-07a874712979\") " pod="kube-system/cilium-q5cjz" May 9 00:19:31.076962 kubelet[2856]: I0509 00:19:31.076943 2856 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2e9b7b2f-a983-4d11-8245-07a874712979-cni-path\") pod \"cilium-q5cjz\" (UID: \"2e9b7b2f-a983-4d11-8245-07a874712979\") " pod="kube-system/cilium-q5cjz" May 9 00:19:31.076962 kubelet[2856]: I0509 00:19:31.076961 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2e9b7b2f-a983-4d11-8245-07a874712979-clustermesh-secrets\") pod \"cilium-q5cjz\" (UID: \"2e9b7b2f-a983-4d11-8245-07a874712979\") " pod="kube-system/cilium-q5cjz" May 9 00:19:31.076962 kubelet[2856]: I0509 00:19:31.076980 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2e9b7b2f-a983-4d11-8245-07a874712979-cilium-config-path\") pod \"cilium-q5cjz\" (UID: \"2e9b7b2f-a983-4d11-8245-07a874712979\") " pod="kube-system/cilium-q5cjz" May 9 00:19:31.077267 kubelet[2856]: I0509 00:19:31.077027 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2e9b7b2f-a983-4d11-8245-07a874712979-hubble-tls\") pod \"cilium-q5cjz\" (UID: \"2e9b7b2f-a983-4d11-8245-07a874712979\") " pod="kube-system/cilium-q5cjz" May 9 00:19:31.077267 kubelet[2856]: I0509 00:19:31.077045 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx7rl\" (UniqueName: \"kubernetes.io/projected/2e9b7b2f-a983-4d11-8245-07a874712979-kube-api-access-dx7rl\") pod \"cilium-q5cjz\" (UID: \"2e9b7b2f-a983-4d11-8245-07a874712979\") " pod="kube-system/cilium-q5cjz" May 9 00:19:31.077267 kubelet[2856]: I0509 00:19:31.077064 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/2e9b7b2f-a983-4d11-8245-07a874712979-host-proc-sys-net\") pod \"cilium-q5cjz\" (UID: \"2e9b7b2f-a983-4d11-8245-07a874712979\") " pod="kube-system/cilium-q5cjz" May 9 00:19:31.077267 kubelet[2856]: I0509 00:19:31.077140 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e9b7b2f-a983-4d11-8245-07a874712979-lib-modules\") pod \"cilium-q5cjz\" (UID: \"2e9b7b2f-a983-4d11-8245-07a874712979\") " pod="kube-system/cilium-q5cjz" May 9 00:19:31.077267 kubelet[2856]: I0509 00:19:31.077208 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2e9b7b2f-a983-4d11-8245-07a874712979-cilium-ipsec-secrets\") pod \"cilium-q5cjz\" (UID: \"2e9b7b2f-a983-4d11-8245-07a874712979\") " pod="kube-system/cilium-q5cjz" May 9 00:19:31.077267 kubelet[2856]: I0509 00:19:31.077238 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2e9b7b2f-a983-4d11-8245-07a874712979-bpf-maps\") pod \"cilium-q5cjz\" (UID: \"2e9b7b2f-a983-4d11-8245-07a874712979\") " pod="kube-system/cilium-q5cjz" May 9 00:19:31.077504 kubelet[2856]: I0509 00:19:31.077261 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e9b7b2f-a983-4d11-8245-07a874712979-etc-cni-netd\") pod \"cilium-q5cjz\" (UID: \"2e9b7b2f-a983-4d11-8245-07a874712979\") " pod="kube-system/cilium-q5cjz" May 9 00:19:31.077504 kubelet[2856]: I0509 00:19:31.077305 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2e9b7b2f-a983-4d11-8245-07a874712979-host-proc-sys-kernel\") pod \"cilium-q5cjz\" (UID: 
\"2e9b7b2f-a983-4d11-8245-07a874712979\") " pod="kube-system/cilium-q5cjz" May 9 00:19:31.077504 kubelet[2856]: I0509 00:19:31.077332 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2e9b7b2f-a983-4d11-8245-07a874712979-cilium-run\") pod \"cilium-q5cjz\" (UID: \"2e9b7b2f-a983-4d11-8245-07a874712979\") " pod="kube-system/cilium-q5cjz" May 9 00:19:31.077504 kubelet[2856]: I0509 00:19:31.077358 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2e9b7b2f-a983-4d11-8245-07a874712979-cilium-cgroup\") pod \"cilium-q5cjz\" (UID: \"2e9b7b2f-a983-4d11-8245-07a874712979\") " pod="kube-system/cilium-q5cjz" May 9 00:19:31.095586 sshd[4758]: Connection closed by 10.0.0.1 port 49022 May 9 00:19:31.096075 sshd-session[4751]: pam_unix(sshd:session): session closed for user core May 9 00:19:31.100154 systemd[1]: sshd@28-10.0.0.125:22-10.0.0.1:49022.service: Deactivated successfully. May 9 00:19:31.104608 systemd[1]: session-29.scope: Deactivated successfully. May 9 00:19:31.107468 systemd-logind[1597]: Session 29 logged out. Waiting for processes to exit. May 9 00:19:31.113900 systemd[1]: Started sshd@29-10.0.0.125:22-10.0.0.1:49034.service - OpenSSH per-connection server daemon (10.0.0.1:49034). May 9 00:19:31.114986 systemd-logind[1597]: Removed session 29. 
May 9 00:19:31.153179 sshd[4764]: Accepted publickey for core from 10.0.0.1 port 49034 ssh2: RSA SHA256:85lUtH7dRt0vZCjdRSCi4rND4GQSZ2m5Ur7U+afuS5I May 9 00:19:31.155141 sshd-session[4764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:19:31.156896 kubelet[2856]: E0509 00:19:31.156829 2856 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 9 00:19:31.162460 systemd-logind[1597]: New session 30 of user core. May 9 00:19:31.165664 systemd[1]: Started session-30.scope - Session 30 of User core. May 9 00:19:31.255389 kubelet[2856]: E0509 00:19:31.254961 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:19:31.255590 containerd[1621]: time="2025-05-09T00:19:31.255523987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q5cjz,Uid:2e9b7b2f-a983-4d11-8245-07a874712979,Namespace:kube-system,Attempt:0,}" May 9 00:19:31.305738 containerd[1621]: time="2025-05-09T00:19:31.304700199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:19:31.305738 containerd[1621]: time="2025-05-09T00:19:31.305516146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:19:31.305738 containerd[1621]: time="2025-05-09T00:19:31.305533590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:19:31.305738 containerd[1621]: time="2025-05-09T00:19:31.305676541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:19:31.355920 containerd[1621]: time="2025-05-09T00:19:31.355836639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q5cjz,Uid:2e9b7b2f-a983-4d11-8245-07a874712979,Namespace:kube-system,Attempt:0,} returns sandbox id \"af9d32dd5c06498204ef1c75256a5187cc7d35159fa619839de4b28649721c53\"" May 9 00:19:31.356881 kubelet[2856]: E0509 00:19:31.356835 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:19:31.360367 containerd[1621]: time="2025-05-09T00:19:31.360196820Z" level=info msg="CreateContainer within sandbox \"af9d32dd5c06498204ef1c75256a5187cc7d35159fa619839de4b28649721c53\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 9 00:19:31.402322 containerd[1621]: time="2025-05-09T00:19:31.402206998Z" level=info msg="CreateContainer within sandbox \"af9d32dd5c06498204ef1c75256a5187cc7d35159fa619839de4b28649721c53\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7cceb2593f43c508e8c7b024d52e0c0568eb6405ff5d62704d0a92db195741fb\"" May 9 00:19:31.402961 containerd[1621]: time="2025-05-09T00:19:31.402912636Z" level=info msg="StartContainer for \"7cceb2593f43c508e8c7b024d52e0c0568eb6405ff5d62704d0a92db195741fb\"" May 9 00:19:31.463478 containerd[1621]: time="2025-05-09T00:19:31.463416485Z" level=info msg="StartContainer for \"7cceb2593f43c508e8c7b024d52e0c0568eb6405ff5d62704d0a92db195741fb\" returns successfully" May 9 00:19:31.515379 containerd[1621]: time="2025-05-09T00:19:31.515197057Z" level=info msg="shim disconnected" id=7cceb2593f43c508e8c7b024d52e0c0568eb6405ff5d62704d0a92db195741fb namespace=k8s.io May 9 00:19:31.515379 containerd[1621]: time="2025-05-09T00:19:31.515257672Z" level=warning msg="cleaning up after shim disconnected" id=7cceb2593f43c508e8c7b024d52e0c0568eb6405ff5d62704d0a92db195741fb 
namespace=k8s.io May 9 00:19:31.515379 containerd[1621]: time="2025-05-09T00:19:31.515266319Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:19:32.419875 kubelet[2856]: E0509 00:19:32.419798 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:19:32.421824 containerd[1621]: time="2025-05-09T00:19:32.421686404Z" level=info msg="CreateContainer within sandbox \"af9d32dd5c06498204ef1c75256a5187cc7d35159fa619839de4b28649721c53\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 9 00:19:32.445774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1404992143.mount: Deactivated successfully. May 9 00:19:32.446385 containerd[1621]: time="2025-05-09T00:19:32.446339207Z" level=info msg="CreateContainer within sandbox \"af9d32dd5c06498204ef1c75256a5187cc7d35159fa619839de4b28649721c53\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3e224b4cc52f9eabee31bd3e2366517dfc487f344e246bbcc599637ecf64cb82\"" May 9 00:19:32.446909 containerd[1621]: time="2025-05-09T00:19:32.446855957Z" level=info msg="StartContainer for \"3e224b4cc52f9eabee31bd3e2366517dfc487f344e246bbcc599637ecf64cb82\"" May 9 00:19:32.524326 containerd[1621]: time="2025-05-09T00:19:32.524253386Z" level=info msg="StartContainer for \"3e224b4cc52f9eabee31bd3e2366517dfc487f344e246bbcc599637ecf64cb82\" returns successfully" May 9 00:19:32.600969 containerd[1621]: time="2025-05-09T00:19:32.600897637Z" level=info msg="shim disconnected" id=3e224b4cc52f9eabee31bd3e2366517dfc487f344e246bbcc599637ecf64cb82 namespace=k8s.io May 9 00:19:32.600969 containerd[1621]: time="2025-05-09T00:19:32.600956148Z" level=warning msg="cleaning up after shim disconnected" id=3e224b4cc52f9eabee31bd3e2366517dfc487f344e246bbcc599637ecf64cb82 namespace=k8s.io May 9 00:19:32.600969 containerd[1621]: 
time="2025-05-09T00:19:32.600964694Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:19:33.185867 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e224b4cc52f9eabee31bd3e2366517dfc487f344e246bbcc599637ecf64cb82-rootfs.mount: Deactivated successfully.
May 9 00:19:33.423919 kubelet[2856]: E0509 00:19:33.423731 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:19:33.427362 containerd[1621]: time="2025-05-09T00:19:33.426197628Z" level=info msg="CreateContainer within sandbox \"af9d32dd5c06498204ef1c75256a5187cc7d35159fa619839de4b28649721c53\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 9 00:19:33.456451 containerd[1621]: time="2025-05-09T00:19:33.456240018Z" level=info msg="CreateContainer within sandbox \"af9d32dd5c06498204ef1c75256a5187cc7d35159fa619839de4b28649721c53\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"01e9bc351bc06034e0f3319b3034d79a28b6bde29fc4be94da6ec02714d59706\""
May 9 00:19:33.461898 containerd[1621]: time="2025-05-09T00:19:33.461811120Z" level=info msg="StartContainer for \"01e9bc351bc06034e0f3319b3034d79a28b6bde29fc4be94da6ec02714d59706\""
May 9 00:19:33.548507 containerd[1621]: time="2025-05-09T00:19:33.548457231Z" level=info msg="StartContainer for \"01e9bc351bc06034e0f3319b3034d79a28b6bde29fc4be94da6ec02714d59706\" returns successfully"
May 9 00:19:33.589886 containerd[1621]: time="2025-05-09T00:19:33.589796942Z" level=info msg="shim disconnected" id=01e9bc351bc06034e0f3319b3034d79a28b6bde29fc4be94da6ec02714d59706 namespace=k8s.io
May 9 00:19:33.589886 containerd[1621]: time="2025-05-09T00:19:33.589875099Z" level=warning msg="cleaning up after shim disconnected" id=01e9bc351bc06034e0f3319b3034d79a28b6bde29fc4be94da6ec02714d59706 namespace=k8s.io
May 9 00:19:33.589886 containerd[1621]: time="2025-05-09T00:19:33.589886522Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:19:34.185611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01e9bc351bc06034e0f3319b3034d79a28b6bde29fc4be94da6ec02714d59706-rootfs.mount: Deactivated successfully.
May 9 00:19:34.428221 kubelet[2856]: E0509 00:19:34.428182 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:19:34.431875 containerd[1621]: time="2025-05-09T00:19:34.431787720Z" level=info msg="CreateContainer within sandbox \"af9d32dd5c06498204ef1c75256a5187cc7d35159fa619839de4b28649721c53\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 9 00:19:34.587042 containerd[1621]: time="2025-05-09T00:19:34.586838089Z" level=info msg="CreateContainer within sandbox \"af9d32dd5c06498204ef1c75256a5187cc7d35159fa619839de4b28649721c53\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8cb4822d75781381025fc618bf32a9bc66da1573d21a4bfc59a5e5b3eb2f9762\""
May 9 00:19:34.588570 containerd[1621]: time="2025-05-09T00:19:34.587555710Z" level=info msg="StartContainer for \"8cb4822d75781381025fc618bf32a9bc66da1573d21a4bfc59a5e5b3eb2f9762\""
May 9 00:19:34.768810 containerd[1621]: time="2025-05-09T00:19:34.768740548Z" level=info msg="StartContainer for \"8cb4822d75781381025fc618bf32a9bc66da1573d21a4bfc59a5e5b3eb2f9762\" returns successfully"
May 9 00:19:34.950350 containerd[1621]: time="2025-05-09T00:19:34.950174570Z" level=info msg="shim disconnected" id=8cb4822d75781381025fc618bf32a9bc66da1573d21a4bfc59a5e5b3eb2f9762 namespace=k8s.io
May 9 00:19:34.950350 containerd[1621]: time="2025-05-09T00:19:34.950243931Z" level=warning msg="cleaning up after shim disconnected" id=8cb4822d75781381025fc618bf32a9bc66da1573d21a4bfc59a5e5b3eb2f9762 namespace=k8s.io
May 9 00:19:34.950350 containerd[1621]: time="2025-05-09T00:19:34.950254702Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:19:35.186575 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cb4822d75781381025fc618bf32a9bc66da1573d21a4bfc59a5e5b3eb2f9762-rootfs.mount: Deactivated successfully.
May 9 00:19:35.435422 kubelet[2856]: E0509 00:19:35.435384 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:19:35.440242 containerd[1621]: time="2025-05-09T00:19:35.440198736Z" level=info msg="CreateContainer within sandbox \"af9d32dd5c06498204ef1c75256a5187cc7d35159fa619839de4b28649721c53\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 9 00:19:35.494576 containerd[1621]: time="2025-05-09T00:19:35.494490889Z" level=info msg="CreateContainer within sandbox \"af9d32dd5c06498204ef1c75256a5187cc7d35159fa619839de4b28649721c53\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bcb91dec08ca6c2ddddcc0ec4a4e6c327ff631b1911f04a92c7913b754e9a5e4\""
May 9 00:19:35.495269 containerd[1621]: time="2025-05-09T00:19:35.495189774Z" level=info msg="StartContainer for \"bcb91dec08ca6c2ddddcc0ec4a4e6c327ff631b1911f04a92c7913b754e9a5e4\""
May 9 00:19:35.580756 containerd[1621]: time="2025-05-09T00:19:35.580663801Z" level=info msg="StartContainer for \"bcb91dec08ca6c2ddddcc0ec4a4e6c327ff631b1911f04a92c7913b754e9a5e4\" returns successfully"
May 9 00:19:36.155324 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 9 00:19:36.439859 kubelet[2856]: E0509 00:19:36.439814 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:19:37.441867 kubelet[2856]: E0509 00:19:37.441791 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:19:38.445324 kubelet[2856]: E0509 00:19:38.444947 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:19:39.897884 systemd-networkd[1247]: lxc_health: Link UP
May 9 00:19:39.906112 systemd-networkd[1247]: lxc_health: Gained carrier
May 9 00:19:41.143805 systemd-networkd[1247]: lxc_health: Gained IPv6LL
May 9 00:19:41.258006 kubelet[2856]: E0509 00:19:41.257957 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:19:41.282312 kubelet[2856]: I0509 00:19:41.280882 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q5cjz" podStartSLOduration=11.280859544 podStartE2EDuration="11.280859544s" podCreationTimestamp="2025-05-09 00:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:19:36.739567207 +0000 UTC m=+100.770863167" watchObservedRunningTime="2025-05-09 00:19:41.280859544 +0000 UTC m=+105.312155494"
May 9 00:19:41.450994 kubelet[2856]: E0509 00:19:41.450877 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:19:42.073325 kubelet[2856]: E0509 00:19:42.072891 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:19:42.452313 kubelet[2856]: E0509 00:19:42.452264 2856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:19:46.117834 sshd[4768]: Connection closed by 10.0.0.1 port 49034
May 9 00:19:46.118460 sshd-session[4764]: pam_unix(sshd:session): session closed for user core
May 9 00:19:46.123445 systemd[1]: sshd@29-10.0.0.125:22-10.0.0.1:49034.service: Deactivated successfully.
May 9 00:19:46.126380 systemd[1]: session-30.scope: Deactivated successfully.
May 9 00:19:46.126430 systemd-logind[1597]: Session 30 logged out. Waiting for processes to exit.
May 9 00:19:46.128026 systemd-logind[1597]: Removed session 30.