May 15 23:56:04.246337 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 22:19:35 -00 2025 May 15 23:56:04.246367 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3eac1ac065bd62ee8513964addbc130593421d288f32dda9b1fb7c667f95e96b May 15 23:56:04.246383 kernel: BIOS-provided physical RAM map: May 15 23:56:04.246392 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 15 23:56:04.246400 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable May 15 23:56:04.246408 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 15 23:56:04.246418 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable May 15 23:56:04.246428 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 15 23:56:04.246436 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable May 15 23:56:04.246445 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS May 15 23:56:04.246457 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable May 15 23:56:04.246466 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved May 15 23:56:04.246475 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable May 15 23:56:04.246485 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved May 15 23:56:04.246496 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data May 15 23:56:04.246507 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 15 23:56:04.246521 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable May 15 
23:56:04.246530 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved May 15 23:56:04.246539 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS May 15 23:56:04.246548 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable May 15 23:56:04.246558 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved May 15 23:56:04.246567 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 15 23:56:04.246577 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved May 15 23:56:04.246587 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 15 23:56:04.246596 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved May 15 23:56:04.246606 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 15 23:56:04.246614 kernel: NX (Execute Disable) protection: active May 15 23:56:04.246628 kernel: APIC: Static calls initialized May 15 23:56:04.246638 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable May 15 23:56:04.246648 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable May 15 23:56:04.246658 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable May 15 23:56:04.246667 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable May 15 23:56:04.246676 kernel: extended physical RAM map: May 15 23:56:04.246686 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable May 15 23:56:04.246697 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable May 15 23:56:04.246707 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 15 23:56:04.246717 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable May 15 23:56:04.246726 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 15 23:56:04.246738 kernel: reserve setup_data: [mem 
0x000000000080c000-0x0000000000810fff] usable May 15 23:56:04.246748 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS May 15 23:56:04.246764 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable May 15 23:56:04.246775 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable May 15 23:56:04.246786 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable May 15 23:56:04.246805 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable May 15 23:56:04.246816 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable May 15 23:56:04.246831 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved May 15 23:56:04.246842 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable May 15 23:56:04.246852 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved May 15 23:56:04.246879 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data May 15 23:56:04.246889 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 15 23:56:04.246900 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable May 15 23:56:04.246910 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved May 15 23:56:04.246920 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS May 15 23:56:04.246930 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable May 15 23:56:04.246944 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved May 15 23:56:04.246955 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 15 23:56:04.246965 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved May 15 23:56:04.246975 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] 
reserved May 15 23:56:04.246985 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved May 15 23:56:04.246995 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 15 23:56:04.247005 kernel: efi: EFI v2.7 by EDK II May 15 23:56:04.247015 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018 May 15 23:56:04.247026 kernel: random: crng init done May 15 23:56:04.247036 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map May 15 23:56:04.247046 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved May 15 23:56:04.247061 kernel: secureboot: Secure boot disabled May 15 23:56:04.247071 kernel: SMBIOS 2.8 present. May 15 23:56:04.247081 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 May 15 23:56:04.247091 kernel: Hypervisor detected: KVM May 15 23:56:04.247101 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 15 23:56:04.247111 kernel: kvm-clock: using sched offset of 3155481572 cycles May 15 23:56:04.247122 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 15 23:56:04.247133 kernel: tsc: Detected 2794.748 MHz processor May 15 23:56:04.247143 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 15 23:56:04.247154 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 15 23:56:04.247165 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 May 15 23:56:04.247180 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs May 15 23:56:04.247190 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 15 23:56:04.247200 kernel: Using GB pages for direct mapping May 15 23:56:04.247210 kernel: ACPI: Early table checksum verification disabled May 15 23:56:04.247220 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) May 15 23:56:04.247230 kernel: ACPI: XSDT 0x000000009CB7D0E8 
000054 (v01 BOCHS BXPC 00000001 01000013) May 15 23:56:04.247241 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 15 23:56:04.247251 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 23:56:04.247261 kernel: ACPI: FACS 0x000000009CBDD000 000040 May 15 23:56:04.247276 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 23:56:04.247287 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 23:56:04.247297 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 23:56:04.247307 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 23:56:04.247318 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) May 15 23:56:04.247328 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] May 15 23:56:04.247339 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] May 15 23:56:04.247349 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] May 15 23:56:04.247363 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] May 15 23:56:04.247374 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] May 15 23:56:04.247384 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] May 15 23:56:04.247394 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] May 15 23:56:04.247405 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] May 15 23:56:04.247415 kernel: No NUMA configuration found May 15 23:56:04.247425 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] May 15 23:56:04.247436 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff] May 15 23:56:04.247446 kernel: Zone ranges: May 15 23:56:04.247456 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 15 
23:56:04.247471 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] May 15 23:56:04.247481 kernel: Normal empty May 15 23:56:04.247492 kernel: Movable zone start for each node May 15 23:56:04.247502 kernel: Early memory node ranges May 15 23:56:04.247513 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 15 23:56:04.247523 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] May 15 23:56:04.247533 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] May 15 23:56:04.247543 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] May 15 23:56:04.247553 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] May 15 23:56:04.247567 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] May 15 23:56:04.247577 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff] May 15 23:56:04.247587 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff] May 15 23:56:04.247598 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] May 15 23:56:04.247608 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 15 23:56:04.247618 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 15 23:56:04.247640 kernel: On node 0, zone DMA: 8 pages in unavailable ranges May 15 23:56:04.247654 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 15 23:56:04.247665 kernel: On node 0, zone DMA: 239 pages in unavailable ranges May 15 23:56:04.247676 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges May 15 23:56:04.247687 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges May 15 23:56:04.247698 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges May 15 23:56:04.247713 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges May 15 23:56:04.247724 kernel: ACPI: PM-Timer IO Port: 0x608 May 15 23:56:04.247735 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 15 23:56:04.247747 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 15 
23:56:04.247759 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 15 23:56:04.247773 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 15 23:56:04.247784 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 15 23:56:04.247795 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 15 23:56:04.247824 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 15 23:56:04.247835 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 15 23:56:04.247846 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 15 23:56:04.247857 kernel: TSC deadline timer available May 15 23:56:04.247899 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 15 23:56:04.247911 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 15 23:56:04.247926 kernel: kvm-guest: KVM setup pv remote TLB flush May 15 23:56:04.247936 kernel: kvm-guest: setup PV sched yield May 15 23:56:04.247948 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices May 15 23:56:04.247959 kernel: Booting paravirtualized kernel on KVM May 15 23:56:04.247970 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 15 23:56:04.247981 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 May 15 23:56:04.247992 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 May 15 23:56:04.248003 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 May 15 23:56:04.248014 kernel: pcpu-alloc: [0] 0 1 2 3 May 15 23:56:04.248024 kernel: kvm-guest: PV spinlocks enabled May 15 23:56:04.248039 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 15 23:56:04.248068 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro 
consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3eac1ac065bd62ee8513964addbc130593421d288f32dda9b1fb7c667f95e96b May 15 23:56:04.248080 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 15 23:56:04.248101 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 15 23:56:04.248130 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 23:56:04.248159 kernel: Fallback order for Node 0: 0 May 15 23:56:04.248179 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460 May 15 23:56:04.248215 kernel: Policy zone: DMA32 May 15 23:56:04.248242 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 23:56:04.248271 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2295K rwdata, 22752K rodata, 42988K init, 2204K bss, 175776K reserved, 0K cma-reserved) May 15 23:56:04.248281 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 15 23:56:04.248292 kernel: ftrace: allocating 37950 entries in 149 pages May 15 23:56:04.248304 kernel: ftrace: allocated 149 pages with 4 groups May 15 23:56:04.248315 kernel: Dynamic Preempt: voluntary May 15 23:56:04.248326 kernel: rcu: Preemptible hierarchical RCU implementation. May 15 23:56:04.248338 kernel: rcu: RCU event tracing is enabled. May 15 23:56:04.248350 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 15 23:56:04.248365 kernel: Trampoline variant of Tasks RCU enabled. May 15 23:56:04.248377 kernel: Rude variant of Tasks RCU enabled. May 15 23:56:04.248387 kernel: Tracing variant of Tasks RCU enabled. May 15 23:56:04.248398 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 15 23:56:04.248409 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 15 23:56:04.248420 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 15 23:56:04.248431 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 15 23:56:04.248443 kernel: Console: colour dummy device 80x25 May 15 23:56:04.248453 kernel: printk: console [ttyS0] enabled May 15 23:56:04.248468 kernel: ACPI: Core revision 20230628 May 15 23:56:04.248480 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 15 23:56:04.248490 kernel: APIC: Switch to symmetric I/O mode setup May 15 23:56:04.248501 kernel: x2apic enabled May 15 23:56:04.248511 kernel: APIC: Switched APIC routing to: physical x2apic May 15 23:56:04.248523 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 15 23:56:04.248534 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 15 23:56:04.248544 kernel: kvm-guest: setup PV IPIs May 15 23:56:04.248555 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 15 23:56:04.248570 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 15 23:56:04.248581 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) May 15 23:56:04.248592 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 15 23:56:04.248604 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 15 23:56:04.248615 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 15 23:56:04.248626 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 15 23:56:04.248637 kernel: Spectre V2 : Mitigation: Retpolines May 15 23:56:04.248648 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 15 23:56:04.248659 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 15 23:56:04.248675 kernel: RETBleed: Mitigation: untrained return thunk May 15 23:56:04.248685 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 15 23:56:04.248696 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 15 23:56:04.248708 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! May 15 23:56:04.248721 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 15 23:56:04.248732 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 15 23:56:04.248743 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 15 23:56:04.248754 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 15 23:56:04.248769 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 15 23:56:04.248780 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 15 23:56:04.248792 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
May 15 23:56:04.248817 kernel: Freeing SMP alternatives memory: 32K May 15 23:56:04.248828 kernel: pid_max: default: 32768 minimum: 301 May 15 23:56:04.248840 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 15 23:56:04.248851 kernel: landlock: Up and running. May 15 23:56:04.248887 kernel: SELinux: Initializing. May 15 23:56:04.248899 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 23:56:04.248916 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 23:56:04.248927 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 15 23:56:04.248937 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 15 23:56:04.248948 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 15 23:56:04.248959 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 15 23:56:04.248970 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 15 23:56:04.248981 kernel: ... version: 0 May 15 23:56:04.248992 kernel: ... bit width: 48 May 15 23:56:04.249003 kernel: ... generic registers: 6 May 15 23:56:04.249019 kernel: ... value mask: 0000ffffffffffff May 15 23:56:04.249030 kernel: ... max period: 00007fffffffffff May 15 23:56:04.249039 kernel: ... fixed-purpose events: 0 May 15 23:56:04.249050 kernel: ... event mask: 000000000000003f May 15 23:56:04.249061 kernel: signal: max sigframe size: 1776 May 15 23:56:04.249073 kernel: rcu: Hierarchical SRCU implementation. May 15 23:56:04.249084 kernel: rcu: Max phase no-delay instances is 400. May 15 23:56:04.249096 kernel: smp: Bringing up secondary CPUs ... May 15 23:56:04.249107 kernel: smpboot: x86: Booting SMP configuration: May 15 23:56:04.249122 kernel: .... 
node #0, CPUs: #1 #2 #3 May 15 23:56:04.249133 kernel: smp: Brought up 1 node, 4 CPUs May 15 23:56:04.249145 kernel: smpboot: Max logical packages: 1 May 15 23:56:04.249155 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 15 23:56:04.249167 kernel: devtmpfs: initialized May 15 23:56:04.249179 kernel: x86/mm: Memory block size: 128MB May 15 23:56:04.249190 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) May 15 23:56:04.249201 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) May 15 23:56:04.249211 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) May 15 23:56:04.249225 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) May 15 23:56:04.249236 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes) May 15 23:56:04.249247 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) May 15 23:56:04.249258 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 23:56:04.249269 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 15 23:56:04.249280 kernel: pinctrl core: initialized pinctrl subsystem May 15 23:56:04.249290 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 15 23:56:04.249301 kernel: audit: initializing netlink subsys (disabled) May 15 23:56:04.249313 kernel: audit: type=2000 audit(1747353363.807:1): state=initialized audit_enabled=0 res=1 May 15 23:56:04.249326 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 23:56:04.249336 kernel: thermal_sys: Registered thermal governor 'user_space' May 15 23:56:04.249346 kernel: cpuidle: using governor menu May 15 23:56:04.249355 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 23:56:04.249365 kernel: dca service started, version 1.12.1 May 15 
23:56:04.249374 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) May 15 23:56:04.249384 kernel: PCI: Using configuration type 1 for base access May 15 23:56:04.249395 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 15 23:56:04.249405 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 15 23:56:04.249419 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 15 23:56:04.249430 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 15 23:56:04.249441 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 15 23:56:04.249452 kernel: ACPI: Added _OSI(Module Device) May 15 23:56:04.249463 kernel: ACPI: Added _OSI(Processor Device) May 15 23:56:04.249480 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 23:56:04.249491 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 23:56:04.249505 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 15 23:56:04.249516 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 15 23:56:04.249538 kernel: ACPI: Interpreter enabled May 15 23:56:04.249564 kernel: ACPI: PM: (supports S0 S3 S5) May 15 23:56:04.249591 kernel: ACPI: Using IOAPIC for interrupt routing May 15 23:56:04.249618 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 15 23:56:04.249649 kernel: PCI: Using E820 reservations for host bridge windows May 15 23:56:04.249664 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 15 23:56:04.249675 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 15 23:56:04.249988 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 23:56:04.250248 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 15 23:56:04.250417 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 15 
23:56:04.250434 kernel: PCI host bridge to bus 0000:00 May 15 23:56:04.250603 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 15 23:56:04.250754 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 15 23:56:04.250939 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 15 23:56:04.251092 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] May 15 23:56:04.251250 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] May 15 23:56:04.251401 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] May 15 23:56:04.251554 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 15 23:56:04.251757 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 15 23:56:04.252096 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 15 23:56:04.252276 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] May 15 23:56:04.252451 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] May 15 23:56:04.252617 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 15 23:56:04.252783 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb May 15 23:56:04.253018 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 15 23:56:04.253206 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 15 23:56:04.253374 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] May 15 23:56:04.253548 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] May 15 23:56:04.253721 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref] May 15 23:56:04.253928 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 15 23:56:04.254099 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] May 15 23:56:04.254264 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] May 15 23:56:04.254431 kernel: pci 
0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref] May 15 23:56:04.254593 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 15 23:56:04.254757 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] May 15 23:56:04.254960 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] May 15 23:56:04.255128 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref] May 15 23:56:04.255293 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] May 15 23:56:04.255470 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 15 23:56:04.255639 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 15 23:56:04.255829 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 15 23:56:04.256019 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] May 15 23:56:04.256196 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] May 15 23:56:04.256372 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 15 23:56:04.256532 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] May 15 23:56:04.256549 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 15 23:56:04.256560 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 15 23:56:04.256571 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 15 23:56:04.256582 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 15 23:56:04.256597 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 15 23:56:04.256608 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 15 23:56:04.256618 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 15 23:56:04.256629 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 15 23:56:04.256640 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 15 23:56:04.256650 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 15 23:56:04.256661 kernel: ACPI: PCI: 
Interrupt link GSIC configured for IRQ 18 May 15 23:56:04.256672 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 15 23:56:04.256683 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 15 23:56:04.256698 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 15 23:56:04.256709 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 15 23:56:04.256720 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 15 23:56:04.256731 kernel: iommu: Default domain type: Translated May 15 23:56:04.256742 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 15 23:56:04.256754 kernel: efivars: Registered efivars operations May 15 23:56:04.256765 kernel: PCI: Using ACPI for IRQ routing May 15 23:56:04.256776 kernel: PCI: pci_cache_line_size set to 64 bytes May 15 23:56:04.256788 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] May 15 23:56:04.256828 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] May 15 23:56:04.256839 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff] May 15 23:56:04.256848 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff] May 15 23:56:04.256859 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] May 15 23:56:04.256885 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] May 15 23:56:04.256897 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff] May 15 23:56:04.256908 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] May 15 23:56:04.257075 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 15 23:56:04.257236 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 15 23:56:04.257405 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 15 23:56:04.257422 kernel: vgaarb: loaded May 15 23:56:04.257433 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 15 23:56:04.257445 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 15 23:56:04.257456 
kernel: clocksource: Switched to clocksource kvm-clock May 15 23:56:04.257466 kernel: VFS: Disk quotas dquot_6.6.0 May 15 23:56:04.257477 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 23:56:04.257488 kernel: pnp: PnP ACPI init May 15 23:56:04.257651 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved May 15 23:56:04.257673 kernel: pnp: PnP ACPI: found 6 devices May 15 23:56:04.257684 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 15 23:56:04.257695 kernel: NET: Registered PF_INET protocol family May 15 23:56:04.257706 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 15 23:56:04.257743 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 15 23:56:04.257757 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 15 23:56:04.257769 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 15 23:56:04.257780 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 15 23:56:04.257806 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 15 23:56:04.257818 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 23:56:04.257830 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 23:56:04.257842 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 15 23:56:04.257853 kernel: NET: Registered PF_XDP protocol family May 15 23:56:04.258035 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window May 15 23:56:04.258199 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] May 15 23:56:04.258355 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 15 23:56:04.258511 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 15 23:56:04.258659 kernel: pci_bus 
0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 15 23:56:04.258816 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] May 15 23:56:04.258983 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] May 15 23:56:04.259133 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] May 15 23:56:04.259150 kernel: PCI: CLS 0 bytes, default 64 May 15 23:56:04.259161 kernel: Initialise system trusted keyrings May 15 23:56:04.259173 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 15 23:56:04.259189 kernel: Key type asymmetric registered May 15 23:56:04.259201 kernel: Asymmetric key parser 'x509' registered May 15 23:56:04.259212 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 15 23:56:04.259224 kernel: io scheduler mq-deadline registered May 15 23:56:04.259236 kernel: io scheduler kyber registered May 15 23:56:04.259247 kernel: io scheduler bfq registered May 15 23:56:04.259258 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 15 23:56:04.259270 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 15 23:56:04.259282 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 15 23:56:04.259298 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 15 23:56:04.259312 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 15 23:56:04.259324 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 15 23:56:04.259335 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 15 23:56:04.259347 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 15 23:56:04.259359 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 15 23:56:04.259375 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 15 23:56:04.259555 kernel: rtc_cmos 00:04: RTC can wake from S4 May 15 23:56:04.259713 kernel: rtc_cmos 00:04: registered as rtc0 May 15 23:56:04.259889 kernel: rtc_cmos 00:04: 
setting system clock to 2025-05-15T23:56:03 UTC (1747353363) May 15 23:56:04.260046 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 15 23:56:04.260064 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 15 23:56:04.260076 kernel: efifb: probing for efifb May 15 23:56:04.260088 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k May 15 23:56:04.260104 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 May 15 23:56:04.260115 kernel: efifb: scrolling: redraw May 15 23:56:04.260127 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 15 23:56:04.260138 kernel: Console: switching to colour frame buffer device 160x50 May 15 23:56:04.260149 kernel: fb0: EFI VGA frame buffer device May 15 23:56:04.260160 kernel: pstore: Using crash dump compression: deflate May 15 23:56:04.260171 kernel: pstore: Registered efi_pstore as persistent store backend May 15 23:56:04.260182 kernel: NET: Registered PF_INET6 protocol family May 15 23:56:04.260193 kernel: Segment Routing with IPv6 May 15 23:56:04.260210 kernel: In-situ OAM (IOAM) with IPv6 May 15 23:56:04.260221 kernel: NET: Registered PF_PACKET protocol family May 15 23:56:04.260232 kernel: Key type dns_resolver registered May 15 23:56:04.260243 kernel: IPI shorthand broadcast: enabled May 15 23:56:04.260254 kernel: sched_clock: Marking stable (961003831, 440877404)->(1478863625, -76982390) May 15 23:56:04.260266 kernel: registered taskstats version 1 May 15 23:56:04.260277 kernel: Loading compiled-in X.509 certificates May 15 23:56:04.260288 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 563478d245b598189519397611f5bddee97f3fc1' May 15 23:56:04.260299 kernel: Key type .fscrypt registered May 15 23:56:04.260314 kernel: Key type fscrypt-provisioning registered May 15 23:56:04.260326 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 15 23:56:04.260337 kernel: ima: Allocated hash algorithm: sha1 May 15 23:56:04.260348 kernel: ima: No architecture policies found May 15 23:56:04.260360 kernel: clk: Disabling unused clocks May 15 23:56:04.260372 kernel: Freeing unused kernel image (initmem) memory: 42988K May 15 23:56:04.260384 kernel: Write protecting the kernel read-only data: 36864k May 15 23:56:04.260395 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K May 15 23:56:04.260407 kernel: Run /init as init process May 15 23:56:04.260423 kernel: with arguments: May 15 23:56:04.260433 kernel: /init May 15 23:56:04.260445 kernel: with environment: May 15 23:56:04.260456 kernel: HOME=/ May 15 23:56:04.260467 kernel: TERM=linux May 15 23:56:04.260478 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 23:56:04.260492 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 15 23:56:04.260507 systemd[1]: Detected virtualization kvm. May 15 23:56:04.260523 systemd[1]: Detected architecture x86-64. May 15 23:56:04.260534 systemd[1]: Running in initrd. May 15 23:56:04.260546 systemd[1]: No hostname configured, using default hostname. May 15 23:56:04.260557 systemd[1]: Hostname set to . May 15 23:56:04.260570 systemd[1]: Initializing machine ID from VM UUID. May 15 23:56:04.260582 systemd[1]: Queued start job for default target initrd.target. May 15 23:56:04.260594 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 23:56:04.260606 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 15 23:56:04.260623 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 15 23:56:04.260635 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 23:56:04.260648 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 15 23:56:04.260660 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 15 23:56:04.260676 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 15 23:56:04.260689 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 15 23:56:04.260706 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 23:56:04.260718 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 23:56:04.260729 systemd[1]: Reached target paths.target - Path Units. May 15 23:56:04.260741 systemd[1]: Reached target slices.target - Slice Units. May 15 23:56:04.260752 systemd[1]: Reached target swap.target - Swaps. May 15 23:56:04.260764 systemd[1]: Reached target timers.target - Timer Units. May 15 23:56:04.260776 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 15 23:56:04.260787 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 23:56:04.260808 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 15 23:56:04.260824 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 15 23:56:04.260836 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 23:56:04.260848 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 23:56:04.260861 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 15 23:56:04.260887 systemd[1]: Reached target sockets.target - Socket Units. May 15 23:56:04.260899 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 15 23:56:04.260911 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 23:56:04.260923 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 15 23:56:04.260939 systemd[1]: Starting systemd-fsck-usr.service... May 15 23:56:04.260951 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 23:56:04.260963 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 23:56:04.260975 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:56:04.260987 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 15 23:56:04.260999 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 23:56:04.261011 systemd[1]: Finished systemd-fsck-usr.service. May 15 23:56:04.261059 systemd-journald[193]: Collecting audit messages is disabled. May 15 23:56:04.261088 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 23:56:04.261105 systemd-journald[193]: Journal started May 15 23:56:04.261132 systemd-journald[193]: Runtime Journal (/run/log/journal/50b8dd44bcb245c38ab64a212c57d70f) is 6.0M, max 48.3M, 42.2M free. May 15 23:56:04.250281 systemd-modules-load[194]: Inserted module 'overlay' May 15 23:56:04.279144 systemd[1]: Started systemd-journald.service - Journal Service. May 15 23:56:04.280290 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:56:04.302260 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 23:56:04.310893 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
May 15 23:56:04.314158 kernel: Bridge firewalling registered May 15 23:56:04.314173 systemd-modules-load[194]: Inserted module 'br_netfilter' May 15 23:56:04.318223 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 23:56:04.328722 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 23:56:04.333545 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 23:56:04.334564 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 23:56:04.339572 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 23:56:04.350987 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 23:56:04.356177 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 23:56:04.357427 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 23:56:04.365114 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 23:56:04.374640 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 23:56:04.388276 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 15 23:56:04.401085 dracut-cmdline[233]: dracut-dracut-053 May 15 23:56:04.405324 systemd-resolved[224]: Positive Trust Anchors: May 15 23:56:04.405337 systemd-resolved[224]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 23:56:04.405376 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 23:56:04.408228 systemd-resolved[224]: Defaulting to hostname 'linux'. May 15 23:56:04.409314 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 23:56:04.410152 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 23:56:04.434607 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3eac1ac065bd62ee8513964addbc130593421d288f32dda9b1fb7c667f95e96b May 15 23:56:04.564002 kernel: SCSI subsystem initialized May 15 23:56:04.597223 kernel: Loading iSCSI transport class v2.0-870. May 15 23:56:04.611914 kernel: iscsi: registered transport (tcp) May 15 23:56:04.645184 kernel: iscsi: registered transport (qla4xxx) May 15 23:56:04.645279 kernel: QLogic iSCSI HBA Driver May 15 23:56:04.739637 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 15 23:56:04.752345 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 15 23:56:04.798499 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. May 15 23:56:04.798649 kernel: device-mapper: uevent: version 1.0.3 May 15 23:56:04.798681 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 15 23:56:04.857955 kernel: raid6: avx2x4 gen() 19293 MB/s May 15 23:56:04.874955 kernel: raid6: avx2x2 gen() 18952 MB/s May 15 23:56:04.893120 kernel: raid6: avx2x1 gen() 15394 MB/s May 15 23:56:04.893199 kernel: raid6: using algorithm avx2x4 gen() 19293 MB/s May 15 23:56:04.912341 kernel: raid6: .... xor() 4872 MB/s, rmw enabled May 15 23:56:04.912442 kernel: raid6: using avx2x2 recovery algorithm May 15 23:56:04.941927 kernel: xor: automatically using best checksumming function avx May 15 23:56:05.204925 kernel: Btrfs loaded, zoned=no, fsverity=no May 15 23:56:05.224693 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 15 23:56:05.234251 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 23:56:05.250711 systemd-udevd[416]: Using default interface naming scheme 'v255'. May 15 23:56:05.255498 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 23:56:05.258659 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 15 23:56:05.279438 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation May 15 23:56:05.322894 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 15 23:56:05.335288 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 23:56:05.404757 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 23:56:05.430247 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 15 23:56:05.456169 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 15 23:56:05.458637 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
May 15 23:56:05.462913 kernel: cryptd: max_cpu_qlen set to 1000 May 15 23:56:05.463512 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 15 23:56:05.467565 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 23:56:05.490315 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 23:56:05.501970 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 15 23:56:05.507926 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 23:56:05.507982 kernel: GPT:9289727 != 19775487 May 15 23:56:05.507997 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 23:56:05.508008 kernel: GPT:9289727 != 19775487 May 15 23:56:05.508018 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 23:56:05.508028 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 23:56:05.505092 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 23:56:05.517990 kernel: AVX2 version of gcm_enc/dec engaged. May 15 23:56:05.518014 kernel: AES CTR mode by8 optimization enabled May 15 23:56:05.518630 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 23:56:05.518804 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 23:56:05.523406 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 23:56:05.524700 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 23:56:05.524982 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:56:05.527164 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:56:05.541898 kernel: BTRFS: device fsid da1480a3-a7d8-4e12-bbe1-1257540eb9ae devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (470) May 15 23:56:05.542952 kernel: libata version 3.00 loaded. 
May 15 23:56:05.545885 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (476) May 15 23:56:05.547502 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:56:05.552252 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 15 23:56:05.556931 kernel: ahci 0000:00:1f.2: version 3.0 May 15 23:56:05.560903 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 15 23:56:05.560949 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 15 23:56:05.561167 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 15 23:56:05.562961 kernel: scsi host0: ahci May 15 23:56:05.564079 kernel: scsi host1: ahci May 15 23:56:05.566924 kernel: scsi host2: ahci May 15 23:56:05.570251 kernel: scsi host3: ahci May 15 23:56:05.570475 kernel: scsi host4: ahci May 15 23:56:05.576971 kernel: scsi host5: ahci May 15 23:56:05.577296 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 May 15 23:56:05.579233 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 May 15 23:56:05.579258 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 May 15 23:56:05.580360 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 May 15 23:56:05.580535 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 15 23:56:05.587002 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 May 15 23:56:05.587039 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 May 15 23:56:05.590388 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 15 23:56:05.596831 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
May 15 23:56:05.597315 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 15 23:56:05.622184 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 23:56:05.639154 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 15 23:56:05.640298 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 23:56:05.640384 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:56:05.640666 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:56:05.642117 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:56:05.673966 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:56:05.688176 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 23:56:05.712687 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 23:56:05.839844 disk-uuid[564]: Primary Header is updated. May 15 23:56:05.839844 disk-uuid[564]: Secondary Entries is updated. May 15 23:56:05.839844 disk-uuid[564]: Secondary Header is updated. 
May 15 23:56:05.845306 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 23:56:05.894264 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 15 23:56:05.894332 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 15 23:56:05.895294 kernel: ata3.00: applying bridge limits May 15 23:56:05.896532 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 15 23:56:05.898455 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 15 23:56:05.901944 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 15 23:56:05.914921 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 15 23:56:05.915001 kernel: ata3.00: configured for UDMA/100 May 15 23:56:05.917535 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 15 23:56:05.920895 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 15 23:56:06.012041 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 15 23:56:06.012446 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 15 23:56:06.026952 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 15 23:56:06.869906 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 23:56:06.869999 disk-uuid[577]: The operation has completed successfully. May 15 23:56:06.912365 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 23:56:06.912541 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 23:56:06.951236 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 15 23:56:06.969234 sh[600]: Success May 15 23:56:06.980895 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 15 23:56:07.046244 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 15 23:56:07.049053 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 15 23:56:07.054999 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
May 15 23:56:07.072888 kernel: BTRFS info (device dm-0): first mount of filesystem da1480a3-a7d8-4e12-bbe1-1257540eb9ae May 15 23:56:07.072968 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 15 23:56:07.073001 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 15 23:56:07.075628 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 15 23:56:07.075727 kernel: BTRFS info (device dm-0): using free space tree May 15 23:56:07.107609 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 15 23:56:07.110656 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 23:56:07.124292 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 15 23:56:07.129008 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 15 23:56:07.141789 kernel: BTRFS info (device vda6): first mount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 15 23:56:07.141886 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 23:56:07.141904 kernel: BTRFS info (device vda6): using free space tree May 15 23:56:07.156313 kernel: BTRFS info (device vda6): auto enabling async discard May 15 23:56:07.173642 systemd[1]: mnt-oem.mount: Deactivated successfully. May 15 23:56:07.177211 kernel: BTRFS info (device vda6): last unmount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 15 23:56:07.217914 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 23:56:07.228166 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 15 23:56:07.305282 ignition[723]: Ignition 2.20.0 May 15 23:56:07.305295 ignition[723]: Stage: fetch-offline May 15 23:56:07.305342 ignition[723]: no configs at "/usr/lib/ignition/base.d" May 15 23:56:07.305352 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:56:07.305465 ignition[723]: parsed url from cmdline: "" May 15 23:56:07.305470 ignition[723]: no config URL provided May 15 23:56:07.305475 ignition[723]: reading system config file "/usr/lib/ignition/user.ign" May 15 23:56:07.305485 ignition[723]: no config at "/usr/lib/ignition/user.ign" May 15 23:56:07.313292 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 23:56:07.305523 ignition[723]: op(1): [started] loading QEMU firmware config module May 15 23:56:07.305536 ignition[723]: op(1): executing: "modprobe" "qemu_fw_cfg" May 15 23:56:07.318031 ignition[723]: op(1): [finished] loading QEMU firmware config module May 15 23:56:07.322238 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 23:56:07.349345 systemd-networkd[788]: lo: Link UP May 15 23:56:07.349358 systemd-networkd[788]: lo: Gained carrier May 15 23:56:07.351446 systemd-networkd[788]: Enumeration completed May 15 23:56:07.351987 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 23:56:07.351992 systemd-networkd[788]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 23:56:07.352997 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 23:56:07.354916 systemd-networkd[788]: eth0: Link UP May 15 23:56:07.354920 systemd-networkd[788]: eth0: Gained carrier May 15 23:56:07.354930 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 15 23:56:07.355481 systemd[1]: Reached target network.target - Network. May 15 23:56:07.373018 systemd-networkd[788]: eth0: DHCPv4 address 10.0.0.111/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 23:56:07.383444 ignition[723]: parsing config with SHA512: fede6b60a0971cae5dda0db190e605d9f1a1f83f62835b667404ffe0660c9bd51f037c9640eb32dcc370dcffd7b973a7a6db9ede0a19c6b1413ca5bbe3103574 May 15 23:56:07.390490 systemd-resolved[224]: Detected conflict on linux IN A 10.0.0.111 May 15 23:56:07.390505 systemd-resolved[224]: Hostname conflict, changing published hostname from 'linux' to 'linux7'. May 15 23:56:07.391515 ignition[723]: fetch-offline: fetch-offline passed May 15 23:56:07.390779 unknown[723]: fetched base config from "system" May 15 23:56:07.391620 ignition[723]: Ignition finished successfully May 15 23:56:07.390797 unknown[723]: fetched user config from "qemu" May 15 23:56:07.394223 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 15 23:56:07.396573 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 23:56:07.405176 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 15 23:56:07.421990 ignition[792]: Ignition 2.20.0 May 15 23:56:07.422005 ignition[792]: Stage: kargs May 15 23:56:07.422230 ignition[792]: no configs at "/usr/lib/ignition/base.d" May 15 23:56:07.422247 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:56:07.423510 ignition[792]: kargs: kargs passed May 15 23:56:07.423569 ignition[792]: Ignition finished successfully May 15 23:56:07.427187 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 15 23:56:07.441204 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 15 23:56:07.454950 ignition[801]: Ignition 2.20.0 May 15 23:56:07.454962 ignition[801]: Stage: disks May 15 23:56:07.455135 ignition[801]: no configs at "/usr/lib/ignition/base.d" May 15 23:56:07.455146 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:56:07.458788 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 15 23:56:07.456285 ignition[801]: disks: disks passed May 15 23:56:07.460413 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 15 23:56:07.456341 ignition[801]: Ignition finished successfully May 15 23:56:07.462494 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 15 23:56:07.464526 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 23:56:07.466886 systemd[1]: Reached target sysinit.target - System Initialization. May 15 23:56:07.468223 systemd[1]: Reached target basic.target - Basic System. May 15 23:56:07.480163 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 15 23:56:07.495849 systemd-fsck[812]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 15 23:56:07.503553 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 15 23:56:07.524096 systemd[1]: Mounting sysroot.mount - /sysroot... May 15 23:56:07.625922 kernel: EXT4-fs (vda9): mounted filesystem 13a141f5-2ff0-46d9-bee3-974c86536128 r/w with ordered data mode. Quota mode: none. May 15 23:56:07.626655 systemd[1]: Mounted sysroot.mount - /sysroot. May 15 23:56:07.628346 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 15 23:56:07.642053 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 23:56:07.644553 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 15 23:56:07.645548 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
May 15 23:56:07.645588 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 23:56:07.654770 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (820) May 15 23:56:07.654803 kernel: BTRFS info (device vda6): first mount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 15 23:56:07.645612 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 15 23:56:07.662741 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 23:56:07.662770 kernel: BTRFS info (device vda6): using free space tree May 15 23:56:07.662790 kernel: BTRFS info (device vda6): auto enabling async discard May 15 23:56:07.654006 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 15 23:56:07.661475 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 15 23:56:07.663801 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 23:56:07.723000 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory May 15 23:56:07.731478 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory May 15 23:56:07.737835 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory May 15 23:56:07.743168 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory May 15 23:56:07.862036 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 15 23:56:07.868044 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 15 23:56:07.869311 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 15 23:56:07.883891 kernel: BTRFS info (device vda6): last unmount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 15 23:56:07.897007 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 15 23:56:07.907290 ignition[935]: INFO : Ignition 2.20.0 May 15 23:56:07.907290 ignition[935]: INFO : Stage: mount May 15 23:56:07.909472 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 23:56:07.909472 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:56:07.912950 ignition[935]: INFO : mount: mount passed May 15 23:56:07.913926 ignition[935]: INFO : Ignition finished successfully May 15 23:56:07.917342 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 23:56:07.926198 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 23:56:08.071900 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 23:56:08.081216 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 23:56:08.090741 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (948) May 15 23:56:08.090815 kernel: BTRFS info (device vda6): first mount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 15 23:56:08.090831 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 23:56:08.091642 kernel: BTRFS info (device vda6): using free space tree May 15 23:56:08.095914 kernel: BTRFS info (device vda6): auto enabling async discard May 15 23:56:08.097810 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 15 23:56:08.124101 ignition[965]: INFO : Ignition 2.20.0
May 15 23:56:08.124101 ignition[965]: INFO : Stage: files
May 15 23:56:08.126221 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 23:56:08.126221 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 23:56:08.126221 ignition[965]: DEBUG : files: compiled without relabeling support, skipping
May 15 23:56:08.130351 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 15 23:56:08.130351 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 15 23:56:08.130351 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 15 23:56:08.130351 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 15 23:56:08.138481 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 15 23:56:08.138481 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 15 23:56:08.138481 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 15 23:56:08.138481 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 15 23:56:08.138481 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 15 23:56:08.130723 unknown[965]: wrote ssh authorized keys file for user: core
May 15 23:56:08.226030 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 15 23:56:08.409033 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 15 23:56:08.409033 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 23:56:08.413387 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 15 23:56:08.447105 systemd-networkd[788]: eth0: Gained IPv6LL
May 15 23:56:08.904565 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
May 15 23:56:09.084409 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 23:56:09.087120 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
May 15 23:56:09.087120 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
May 15 23:56:09.087120 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
May 15 23:56:09.087120 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 15 23:56:09.087120 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 23:56:09.087120 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 23:56:09.087120 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 23:56:09.087120 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 23:56:09.087120 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 15 23:56:09.087120 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 15 23:56:09.087120 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 15 23:56:09.087120 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 15 23:56:09.087120 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 15 23:56:09.087120 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
May 15 23:56:09.696363 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
May 15 23:56:10.399416 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 15 23:56:10.399416 ignition[965]: INFO : files: op(d): [started] processing unit "containerd.service"
May 15 23:56:10.403502 ignition[965]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 15 23:56:10.406202 ignition[965]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 15 23:56:10.406202 ignition[965]: INFO : files: op(d): [finished] processing unit "containerd.service"
May 15 23:56:10.406202 ignition[965]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
May 15 23:56:10.411652 ignition[965]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 23:56:10.413547 ignition[965]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 23:56:10.413547 ignition[965]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
May 15 23:56:10.413547 ignition[965]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
May 15 23:56:10.413547 ignition[965]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 23:56:10.424679 ignition[965]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 23:56:10.424679 ignition[965]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
May 15 23:56:10.428644 ignition[965]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
May 15 23:56:10.470969 ignition[965]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 15 23:56:10.520785 ignition[965]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 15 23:56:10.520785 ignition[965]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
May 15 23:56:10.520785 ignition[965]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
May 15 23:56:10.520785 ignition[965]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
May 15 23:56:10.520785 ignition[965]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
May 15 23:56:10.520785 ignition[965]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 15 23:56:10.520785 ignition[965]: INFO : files: files passed
May 15 23:56:10.520785 ignition[965]: INFO : Ignition finished successfully
May 15 23:56:10.524121 systemd[1]: Finished ignition-files.service - Ignition (files).
May 15 23:56:10.555224 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 15 23:56:10.562203 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 15 23:56:10.567366 systemd[1]: ignition-quench.service: Deactivated successfully.
May 15 23:56:10.569662 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 15 23:56:10.577375 initrd-setup-root-after-ignition[993]: grep: /sysroot/oem/oem-release: No such file or directory
May 15 23:56:10.600293 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 23:56:10.600293 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 15 23:56:10.605041 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 23:56:10.609527 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 23:56:10.611735 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 15 23:56:10.621233 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 15 23:56:10.659283 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 15 23:56:10.659450 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 15 23:56:10.664661 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
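Ignition logs every operation twice, as `op(N): [started]` and later `op(N): [finished]`, with nested ops such as `op(d): op(e)` for unit drop-ins. A small consistency check over a captured log, sketched here for illustration rather than taken from any Ignition tooling, confirms that every op that started also finished:

```python
import re

# Matches the innermost "op(N): [started]" / "op(N): [finished]" marker on a line;
# op ids in this log are lowercase hex ("3", "a", "16", ...).
OP_STATE = re.compile(r"op\((?P<op>[0-9a-f]+)\): \[(?P<state>started|finished)\]")

def unfinished_ops(lines):
    """Return ids of ops that logged [started] but never [finished]."""
    pending = set()
    for line in lines:
        for m in OP_STATE.finditer(line):
            if m.group("state") == "started":
                pending.add(m.group("op"))
            else:
                pending.discard(m.group("op"))
    return pending
```

In the files stage above, ops 1 through 16 all pair up, so a run over that stage's lines would return an empty set.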
May 15 23:56:10.666326 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 15 23:56:10.669758 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 15 23:56:10.683316 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 15 23:56:10.707482 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 23:56:10.719116 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 15 23:56:10.739216 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 15 23:56:10.740303 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 23:56:10.743003 systemd[1]: Stopped target timers.target - Timer Units.
May 15 23:56:10.752746 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 15 23:56:10.753036 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 23:56:10.756857 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 15 23:56:10.759250 systemd[1]: Stopped target basic.target - Basic System.
May 15 23:56:10.759848 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 15 23:56:10.760424 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 23:56:10.760826 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 15 23:56:10.761408 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 15 23:56:10.761803 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 23:56:10.762405 systemd[1]: Stopped target sysinit.target - System Initialization.
May 15 23:56:10.762800 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 15 23:56:10.763368 systemd[1]: Stopped target swap.target - Swaps.
May 15 23:56:10.763734 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 15 23:56:10.763944 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 15 23:56:10.789809 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 15 23:56:10.790445 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 23:56:10.801142 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 15 23:56:10.802213 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 23:56:10.804697 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 15 23:56:10.805097 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 15 23:56:10.812321 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 15 23:56:10.812582 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 23:56:10.822946 systemd[1]: Stopped target paths.target - Path Units.
May 15 23:56:10.825899 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 15 23:56:10.828655 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 23:56:10.830373 systemd[1]: Stopped target slices.target - Slice Units.
May 15 23:56:10.830773 systemd[1]: Stopped target sockets.target - Socket Units.
May 15 23:56:10.836786 systemd[1]: iscsid.socket: Deactivated successfully.
May 15 23:56:10.836919 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 15 23:56:10.837499 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 15 23:56:10.837587 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 23:56:10.841702 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 15 23:56:10.841887 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 23:56:10.845597 systemd[1]: ignition-files.service: Deactivated successfully.
May 15 23:56:10.845785 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 15 23:56:10.902228 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 15 23:56:10.904599 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 15 23:56:10.905785 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 15 23:56:10.905974 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 23:56:10.907932 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 15 23:56:10.908254 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 23:56:10.914431 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 15 23:56:10.930920 ignition[1020]: INFO : Ignition 2.20.0
May 15 23:56:10.930920 ignition[1020]: INFO : Stage: umount
May 15 23:56:10.930920 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 23:56:10.930920 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 23:56:10.930920 ignition[1020]: INFO : umount: umount passed
May 15 23:56:10.930920 ignition[1020]: INFO : Ignition finished successfully
May 15 23:56:10.914569 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 15 23:56:10.926180 systemd[1]: ignition-mount.service: Deactivated successfully.
May 15 23:56:10.926349 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 15 23:56:10.929075 systemd[1]: Stopped target network.target - Network.
May 15 23:56:10.930947 systemd[1]: ignition-disks.service: Deactivated successfully.
May 15 23:56:10.931030 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 15 23:56:10.933068 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 15 23:56:10.933127 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 15 23:56:10.934843 systemd[1]: ignition-setup.service: Deactivated successfully.
May 15 23:56:10.934917 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 15 23:56:10.937044 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 15 23:56:10.937107 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 15 23:56:10.940174 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 15 23:56:10.942473 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 15 23:56:10.946103 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 15 23:56:10.947964 systemd-networkd[788]: eth0: DHCPv6 lease lost
May 15 23:56:10.951485 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 15 23:56:10.951686 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 15 23:56:10.954693 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 15 23:56:10.954835 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 15 23:56:10.960815 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 15 23:56:10.960919 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 15 23:56:11.003180 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 15 23:56:11.004327 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 15 23:56:11.004437 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 23:56:11.007040 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 23:56:11.007113 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 15 23:56:11.009241 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 15 23:56:11.009310 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 15 23:56:11.011530 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 15 23:56:11.011596 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 23:56:11.014138 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 23:56:11.047144 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 15 23:56:11.047384 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 23:56:11.049467 systemd[1]: network-cleanup.service: Deactivated successfully.
May 15 23:56:11.049618 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 15 23:56:11.054559 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 15 23:56:11.054679 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 15 23:56:11.055841 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 15 23:56:11.055991 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 23:56:11.058796 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 15 23:56:11.058899 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 15 23:56:11.061722 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 15 23:56:11.061778 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 15 23:56:11.062598 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 23:56:11.062664 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 23:56:11.076223 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 15 23:56:11.076827 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 15 23:56:11.076939 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 23:56:11.080678 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 15 23:56:11.080787 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 23:56:11.081400 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 15 23:56:11.081478 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 23:56:11.081824 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 23:56:11.081909 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 23:56:11.083009 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 15 23:56:11.083173 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 15 23:56:11.091507 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 15 23:56:11.091693 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 15 23:56:11.094472 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 15 23:56:11.098473 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 15 23:56:11.098584 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 15 23:56:11.100308 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 15 23:56:11.116484 systemd[1]: Switching root.
May 15 23:56:11.154583 systemd-journald[193]: Journal stopped
May 15 23:56:13.611187 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
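The roughly two-and-a-half-second gap between "Journal stopped" (23:56:11.154583) and the journald SIGTERM message from the new root (23:56:13.611187) can be read straight off the timestamps. A throwaway helper for such deltas (a sketch; it assumes both stamps fall in the same year with no midnight rollover, as they do here):

```python
from datetime import datetime

def delta_seconds(ts_a: str, ts_b: str) -> float:
    """Seconds between two 'May 15 HH:MM:SS.ffffff' journal timestamps.

    Naive on purpose: no year in the stamp, no rollover handling."""
    fmt = "%b %d %H:%M:%S.%f"
    return (datetime.strptime(ts_b, fmt) - datetime.strptime(ts_a, fmt)).total_seconds()
```

For the pair above, `delta_seconds("May 15 23:56:11.154583", "May 15 23:56:13.611187")` gives about 2.457 seconds of journal silence across the root switch.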
May 15 23:56:13.611277 kernel: SELinux:  policy capability network_peer_controls=1
May 15 23:56:13.611295 kernel: SELinux:  policy capability open_perms=1
May 15 23:56:13.611312 kernel: SELinux:  policy capability extended_socket_class=1
May 15 23:56:13.611327 kernel: SELinux:  policy capability always_check_network=0
May 15 23:56:13.611343 kernel: SELinux:  policy capability cgroup_seclabel=1
May 15 23:56:13.611359 kernel: SELinux:  policy capability nnp_nosuid_transition=1
May 15 23:56:13.611383 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
May 15 23:56:13.611403 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
May 15 23:56:13.611422 kernel: audit: type=1403 audit(1747353372.645:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 15 23:56:13.611439 systemd[1]: Successfully loaded SELinux policy in 44.914ms.
May 15 23:56:13.611473 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.037ms.
May 15 23:56:13.611491 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 15 23:56:13.611510 systemd[1]: Detected virtualization kvm.
May 15 23:56:13.611526 systemd[1]: Detected architecture x86-64.
May 15 23:56:13.611561 systemd[1]: Detected first boot.
May 15 23:56:13.611581 systemd[1]: Initializing machine ID from VM UUID.
May 15 23:56:13.611598 zram_generator::config[1081]: No configuration found.
May 15 23:56:13.611616 systemd[1]: Populated /etc with preset unit settings.
May 15 23:56:13.611633 systemd[1]: Queued start job for default target multi-user.target.
May 15 23:56:13.611649 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 15 23:56:13.611667 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 15 23:56:13.611684 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 15 23:56:13.611700 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 15 23:56:13.611717 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 15 23:56:13.611737 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 15 23:56:13.611755 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 15 23:56:13.611771 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 15 23:56:13.611786 systemd[1]: Created slice user.slice - User and Session Slice.
May 15 23:56:13.611803 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 23:56:13.611821 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 23:56:13.611837 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 15 23:56:13.611854 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 15 23:56:13.611895 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 15 23:56:13.611913 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 23:56:13.611931 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 15 23:56:13.611947 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 23:56:13.611964 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 15 23:56:13.611980 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 23:56:13.611996 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 23:56:13.612014 systemd[1]: Reached target slices.target - Slice Units.
May 15 23:56:13.612074 systemd[1]: Reached target swap.target - Swaps.
May 15 23:56:13.612092 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 15 23:56:13.612109 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 15 23:56:13.612127 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 15 23:56:13.612144 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 15 23:56:13.612162 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 23:56:13.612179 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 23:56:13.612197 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 23:56:13.612214 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 15 23:56:13.612231 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 15 23:56:13.612254 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 15 23:56:13.612270 systemd[1]: Mounting media.mount - External Media Directory...
May 15 23:56:13.612286 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 23:56:13.612302 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 15 23:56:13.612326 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 15 23:56:13.612342 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 15 23:56:13.612358 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 15 23:56:13.612374 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 23:56:13.612394 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 23:56:13.612410 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 15 23:56:13.612425 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 23:56:13.612442 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 23:56:13.612459 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 23:56:13.612476 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 15 23:56:13.612493 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 23:56:13.612510 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 15 23:56:13.612544 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 15 23:56:13.612566 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
May 15 23:56:13.612582 kernel: fuse: init (API version 7.39)
May 15 23:56:13.612594 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 23:56:13.612606 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 23:56:13.612620 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 15 23:56:13.612632 kernel: ACPI: bus type drm_connector registered
May 15 23:56:13.612643 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 15 23:56:13.612655 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 23:56:13.612671 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 23:56:13.612708 systemd-journald[1166]: Collecting audit messages is disabled.
May 15 23:56:13.612729 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 15 23:56:13.612741 systemd-journald[1166]: Journal started
May 15 23:56:13.612762 systemd-journald[1166]: Runtime Journal (/run/log/journal/50b8dd44bcb245c38ab64a212c57d70f) is 6.0M, max 48.3M, 42.2M free.
May 15 23:56:13.615910 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 23:56:13.618856 kernel: loop: module loaded
May 15 23:56:13.618359 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 15 23:56:13.619685 systemd[1]: Mounted media.mount - External Media Directory.
May 15 23:56:13.621151 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 15 23:56:13.622649 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 15 23:56:13.624360 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 15 23:56:13.626041 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 23:56:13.627962 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 15 23:56:13.628246 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 15 23:56:13.630347 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 15 23:56:13.633093 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 23:56:13.633381 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 23:56:13.635236 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 23:56:13.635518 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 23:56:13.638009 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 23:56:13.638295 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 23:56:13.640252 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 15 23:56:13.640552 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 15 23:56:13.642248 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 23:56:13.642490 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 23:56:13.644958 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 23:56:13.646940 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 15 23:56:13.649503 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 15 23:56:13.663990 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 15 23:56:13.683104 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 15 23:56:13.687272 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 15 23:56:13.688791 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 15 23:56:13.695061 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 15 23:56:13.700041 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 15 23:56:13.702340 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 23:56:13.706386 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 15 23:56:13.708027 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 23:56:13.716159 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 23:56:13.723000 systemd-journald[1166]: Time spent on flushing to /var/log/journal/50b8dd44bcb245c38ab64a212c57d70f is 26.147ms for 1036 entries.
May 15 23:56:13.723000 systemd-journald[1166]: System Journal (/var/log/journal/50b8dd44bcb245c38ab64a212c57d70f) is 8.0M, max 195.6M, 187.6M free.
May 15 23:56:13.785831 systemd-journald[1166]: Received client request to flush runtime journal.
May 15 23:56:13.720470 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 23:56:13.726030 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 15 23:56:13.728916 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 23:56:13.735364 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 15 23:56:13.738468 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 15 23:56:13.745269 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 15 23:56:13.758128 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 15 23:56:13.767635 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
May 15 23:56:13.767654 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
May 15 23:56:13.772031 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 23:56:13.778622 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 23:56:13.852330 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 15 23:56:13.871466 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 15 23:56:13.873273 udevadm[1227]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 15 23:56:13.924056 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 15 23:56:13.938195 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 23:56:13.960066 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
May 15 23:56:13.960095 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
May 15 23:56:13.968627 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 23:56:14.985139 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 15 23:56:14.999358 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 23:56:15.039837 systemd-udevd[1246]: Using default interface naming scheme 'v255'.
May 15 23:56:15.064522 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 23:56:15.076094 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 23:56:15.090187 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 15 23:56:15.099099 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
May 15 23:56:15.110902 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1254)
May 15 23:56:15.222066 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 15 23:56:15.243893 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 15 23:56:15.250625 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 15 23:56:15.253898 kernel: ACPI: button: Power Button [PWRF]
May 15 23:56:15.268401 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 15 23:56:15.268807 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 15 23:56:15.269031 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 15 23:56:15.269233 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 15 23:56:15.336491 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 15 23:56:15.349804 systemd-networkd[1252]: lo: Link UP
May 15 23:56:15.352086 systemd-networkd[1252]: lo: Gained carrier
May 15 23:56:15.355643 systemd-networkd[1252]: Enumeration completed
May 15 23:56:15.360672 systemd-networkd[1252]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 23:56:15.360684 systemd-networkd[1252]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 23:56:15.361841 systemd-networkd[1252]: eth0: Link UP
May 15 23:56:15.361847 systemd-networkd[1252]: eth0: Gained carrier
May 15 23:56:15.361895 systemd-networkd[1252]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 23:56:15.451899 kernel: mousedev: PS/2 mouse device common for all mice
May 15 23:56:15.453411 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 23:56:15.457166 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 23:56:15.458819 systemd-networkd[1252]: eth0: DHCPv4 address 10.0.0.111/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 15 23:56:15.470143 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 15 23:56:15.474939 kernel: kvm_amd: TSC scaling supported
May 15 23:56:15.474998 kernel: kvm_amd: Nested Virtualization enabled
May 15 23:56:15.475589 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 23:56:15.476307 kernel: kvm_amd: Nested Paging enabled
May 15 23:56:15.476352 kernel: kvm_amd: LBR virtualization supported
May 15 23:56:15.476287 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 23:56:15.478896 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 15 23:56:15.478986 kernel: kvm_amd: Virtual GIF supported
May 15 23:56:15.502179 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 23:56:15.515182 kernel: EDAC MC: Ver: 3.0.0
May 15 23:56:15.556378 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 15 23:56:15.569792 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 15 23:56:15.572040 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 23:56:15.581929 lvm[1295]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 15 23:56:15.623949 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 15 23:56:15.626109 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 23:56:15.639216 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 15 23:56:15.646230 lvm[1300]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 15 23:56:15.684597 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 15 23:56:15.686225 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 15 23:56:15.687691 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 15 23:56:15.687728 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 23:56:15.689005 systemd[1]: Reached target machines.target - Containers.
May 15 23:56:15.691299 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 15 23:56:15.703312 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 15 23:56:15.706931 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 15 23:56:15.708295 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 23:56:15.709541 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 15 23:56:15.712682 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 15 23:56:15.717163 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 15 23:56:15.719753 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 15 23:56:15.730906 kernel: loop0: detected capacity change from 0 to 221472
May 15 23:56:15.735184 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 15 23:56:15.749326 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 15 23:56:15.753268 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 15 23:56:15.759899 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 15 23:56:15.785909 kernel: loop1: detected capacity change from 0 to 138184
May 15 23:56:15.826108 kernel: loop2: detected capacity change from 0 to 140992
May 15 23:56:15.869900 kernel: loop3: detected capacity change from 0 to 221472
May 15 23:56:15.886915 kernel: loop4: detected capacity change from 0 to 138184
May 15 23:56:15.900900 kernel: loop5: detected capacity change from 0 to 140992
May 15 23:56:15.914334 (sd-merge)[1320]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 15 23:56:15.915128 (sd-merge)[1320]: Merged extensions into '/usr'.
May 15 23:56:15.920399 systemd[1]: Reloading requested from client PID 1308 ('systemd-sysext') (unit systemd-sysext.service)...
May 15 23:56:15.920583 systemd[1]: Reloading...
May 15 23:56:16.022997 zram_generator::config[1351]: No configuration found.
May 15 23:56:16.202640 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 23:56:16.218150 ldconfig[1305]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 15 23:56:16.281681 systemd[1]: Reloading finished in 360 ms.
May 15 23:56:16.304475 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 15 23:56:16.306681 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 15 23:56:16.322215 systemd[1]: Starting ensure-sysext.service...
May 15 23:56:16.326540 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 23:56:16.336546 systemd[1]: Reloading requested from client PID 1392 ('systemctl') (unit ensure-sysext.service)...
May 15 23:56:16.336564 systemd[1]: Reloading...
May 15 23:56:16.371854 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 15 23:56:16.372399 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 15 23:56:16.373780 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 15 23:56:16.375998 systemd-tmpfiles[1393]: ACLs are not supported, ignoring.
May 15 23:56:16.376186 systemd-tmpfiles[1393]: ACLs are not supported, ignoring.
May 15 23:56:16.381811 systemd-tmpfiles[1393]: Detected autofs mount point /boot during canonicalization of boot.
May 15 23:56:16.381979 systemd-tmpfiles[1393]: Skipping /boot
May 15 23:56:16.388893 zram_generator::config[1422]: No configuration found.
May 15 23:56:16.398365 systemd-tmpfiles[1393]: Detected autofs mount point /boot during canonicalization of boot.
May 15 23:56:16.398382 systemd-tmpfiles[1393]: Skipping /boot
May 15 23:56:16.545332 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 23:56:16.639472 systemd[1]: Reloading finished in 302 ms.
May 15 23:56:16.674988 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 23:56:16.697377 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 23:56:16.776768 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 15 23:56:16.780464 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 15 23:56:16.785265 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 23:56:16.791465 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 15 23:56:16.797645 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 23:56:16.800938 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 23:56:16.802915 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 23:56:16.811206 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 23:56:16.817939 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 23:56:16.820265 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 23:56:16.826804 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 23:56:16.828627 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 23:56:16.829028 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 23:56:16.832028 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 23:56:16.832408 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 23:56:16.842610 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 15 23:56:16.853082 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 15 23:56:16.872037 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 23:56:16.873802 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 23:56:16.883778 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 23:56:17.088226 systemd-networkd[1252]: eth0: Gained IPv6LL
May 15 23:56:17.105149 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 23:56:17.113771 augenrules[1509]: No rules
May 15 23:56:17.125260 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 23:56:17.126696 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 23:56:17.131099 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 15 23:56:17.132450 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 23:56:17.135250 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 15 23:56:17.137786 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 23:56:17.138207 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 23:56:17.140066 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 15 23:56:17.150056 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 23:56:17.150350 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 23:56:17.153130 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 23:56:17.153467 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 23:56:17.155696 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 23:56:17.156010 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 23:56:17.157804 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 23:56:17.158258 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 23:56:17.165164 systemd[1]: Finished ensure-sysext.service.
May 15 23:56:17.172713 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 23:56:17.172799 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 23:56:17.182888 systemd-resolved[1471]: Positive Trust Anchors:
May 15 23:56:17.182922 systemd-resolved[1471]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 23:56:17.182970 systemd-resolved[1471]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 23:56:17.189263 systemd-resolved[1471]: Defaulting to hostname 'linux'.
May 15 23:56:17.202602 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 15 23:56:17.204153 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 23:56:17.204334 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 23:56:17.206282 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 15 23:56:17.212793 systemd[1]: Reached target network.target - Network.
May 15 23:56:17.214032 systemd[1]: Reached target network-online.target - Network is Online.
May 15 23:56:17.215580 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 23:56:17.293952 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 15 23:56:18.874449 systemd-resolved[1471]: Clock change detected. Flushing caches.
May 15 23:56:18.874461 systemd-timesyncd[1529]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 15 23:56:18.874512 systemd-timesyncd[1529]: Initial clock synchronization to Thu 2025-05-15 23:56:18.874328 UTC.
May 15 23:56:18.875708 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 23:56:18.877284 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 15 23:56:18.879753 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 15 23:56:18.882213 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 15 23:56:18.883925 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 15 23:56:18.883976 systemd[1]: Reached target paths.target - Path Units.
May 15 23:56:18.885210 systemd[1]: Reached target time-set.target - System Time Set.
May 15 23:56:18.886755 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 15 23:56:18.888315 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 15 23:56:18.889984 systemd[1]: Reached target timers.target - Timer Units.
May 15 23:56:18.893689 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 15 23:56:18.898028 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 15 23:56:18.900955 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 15 23:56:18.910098 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 15 23:56:18.911580 systemd[1]: Reached target sockets.target - Socket Units.
May 15 23:56:18.912914 systemd[1]: Reached target basic.target - Basic System.
May 15 23:56:18.914341 systemd[1]: System is tainted: cgroupsv1
May 15 23:56:18.914390 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 15 23:56:18.914425 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 15 23:56:18.916333 systemd[1]: Starting containerd.service - containerd container runtime...
May 15 23:56:18.921287 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 15 23:56:18.924836 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 15 23:56:18.931014 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 15 23:56:18.935778 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 15 23:56:18.937324 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 15 23:56:18.941056 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 23:56:18.941539 jq[1540]: false
May 15 23:56:18.953315 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 15 23:56:18.960040 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 15 23:56:18.961510 extend-filesystems[1541]: Found loop3
May 15 23:56:18.964078 extend-filesystems[1541]: Found loop4
May 15 23:56:18.964078 extend-filesystems[1541]: Found loop5
May 15 23:56:18.964078 extend-filesystems[1541]: Found sr0
May 15 23:56:18.964078 extend-filesystems[1541]: Found vda
May 15 23:56:18.964078 extend-filesystems[1541]: Found vda1
May 15 23:56:18.964078 extend-filesystems[1541]: Found vda2
May 15 23:56:18.964078 extend-filesystems[1541]: Found vda3
May 15 23:56:18.964078 extend-filesystems[1541]: Found usr
May 15 23:56:18.964078 extend-filesystems[1541]: Found vda4
May 15 23:56:18.964078 extend-filesystems[1541]: Found vda6
May 15 23:56:18.964078 extend-filesystems[1541]: Found vda7
May 15 23:56:18.964078 extend-filesystems[1541]: Found vda9
May 15 23:56:18.964078 extend-filesystems[1541]: Checking size of /dev/vda9
May 15 23:56:18.964316 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 15 23:56:18.981566 dbus-daemon[1538]: [system] SELinux support is enabled
May 15 23:56:19.019367 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 15 23:56:19.026622 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 15 23:56:19.034722 extend-filesystems[1541]: Resized partition /dev/vda9
May 15 23:56:19.035217 systemd[1]: Starting systemd-logind.service - User Login Management...
May 15 23:56:19.039100 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 15 23:56:19.040444 extend-filesystems[1572]: resize2fs 1.47.1 (20-May-2024)
May 15 23:56:19.042846 systemd[1]: Starting update-engine.service - Update Engine...
May 15 23:56:19.048883 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 15 23:56:19.055067 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 15 23:56:19.059560 jq[1576]: true
May 15 23:56:19.066044 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1254)
May 15 23:56:19.066290 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 15 23:56:19.074465 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 15 23:56:19.074990 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 15 23:56:19.079558 systemd[1]: motdgen.service: Deactivated successfully.
May 15 23:56:19.079986 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 15 23:56:19.082360 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 15 23:56:19.086290 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 15 23:56:19.086676 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 15 23:56:19.099419 (ntainerd)[1586]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 15 23:56:19.104128 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 15 23:56:19.104510 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 15 23:56:19.108562 jq[1585]: true
May 15 23:56:19.158240 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 15 23:56:19.158351 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 15 23:56:19.158374 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 15 23:56:19.164121 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 15 23:56:19.166549 update_engine[1574]: I20250515 23:56:19.164840 1574 main.cc:92] Flatcar Update Engine starting
May 15 23:56:19.164151 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 15 23:56:19.168797 update_engine[1574]: I20250515 23:56:19.166913 1574 update_check_scheduler.cc:74] Next update check in 4m52s
May 15 23:56:19.166762 systemd[1]: Started update-engine.service - Update Engine.
May 15 23:56:19.169269 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 15 23:56:19.170657 systemd-logind[1570]: Watching system buttons on /dev/input/event1 (Power Button)
May 15 23:56:19.171143 systemd-logind[1570]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 15 23:56:19.173149 systemd-logind[1570]: New seat seat0.
May 15 23:56:19.181238 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 15 23:56:19.183454 systemd[1]: Started systemd-logind.service - User Login Management.
May 15 23:56:19.208367 tar[1584]: linux-amd64/helm
May 15 23:56:19.239147 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 15 23:56:19.273358 locksmithd[1606]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 15 23:56:19.385144 extend-filesystems[1572]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 15 23:56:19.385144 extend-filesystems[1572]: old_desc_blocks = 1, new_desc_blocks = 1
May 15 23:56:19.385144 extend-filesystems[1572]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 15 23:56:19.390346 sshd_keygen[1575]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 15 23:56:19.390506 extend-filesystems[1541]: Resized filesystem in /dev/vda9
May 15 23:56:19.393966 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 15 23:56:19.394386 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 15 23:56:19.421254 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 15 23:56:19.438352 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 15 23:56:19.448351 systemd[1]: issuegen.service: Deactivated successfully.
May 15 23:56:19.448729 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 15 23:56:19.464180 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 15 23:56:19.481943 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 15 23:56:19.497133 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 15 23:56:19.508674 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 15 23:56:19.510716 systemd[1]: Reached target getty.target - Login Prompts.
May 15 23:56:19.529906 bash[1619]: Updated "/home/core/.ssh/authorized_keys"
May 15 23:56:19.532074 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 15 23:56:19.534625 containerd[1586]: time="2025-05-15T23:56:19.534531329Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 15 23:56:19.541058 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 15 23:56:19.563885 containerd[1586]: time="2025-05-15T23:56:19.563802553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 15 23:56:19.566233 containerd[1586]: time="2025-05-15T23:56:19.565956443Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 15 23:56:19.566233 containerd[1586]: time="2025-05-15T23:56:19.565989245Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 15 23:56:19.566233 containerd[1586]: time="2025-05-15T23:56:19.566005275Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 15 23:56:19.566233 containerd[1586]: time="2025-05-15T23:56:19.566201162Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 15 23:56:19.566233 containerd[1586]: time="2025-05-15T23:56:19.566216080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 15 23:56:19.566442 containerd[1586]: time="2025-05-15T23:56:19.566283527Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 15 23:56:19.566442 containerd[1586]: time="2025-05-15T23:56:19.566295058Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 15 23:56:19.567380 containerd[1586]: time="2025-05-15T23:56:19.566577618Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 15 23:56:19.567380 containerd[1586]: time="2025-05-15T23:56:19.566595011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 15 23:56:19.567380 containerd[1586]: time="2025-05-15T23:56:19.566607384Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 15 23:56:19.567380 containerd[1586]: time="2025-05-15T23:56:19.566617213Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 15 23:56:19.567380 containerd[1586]: time="2025-05-15T23:56:19.566708564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 15 23:56:19.567380 containerd[1586]: time="2025-05-15T23:56:19.566952281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 15 23:56:19.567380 containerd[1586]: time="2025-05-15T23:56:19.567105398Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 15 23:56:19.567380 containerd[1586]: time="2025-05-15T23:56:19.567117020Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 15 23:56:19.567380 containerd[1586]: time="2025-05-15T23:56:19.567221246Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 15 23:56:19.567380 containerd[1586]: time="2025-05-15T23:56:19.567275347Z" level=info msg="metadata content store policy set" policy=shared
May 15 23:56:19.637954 containerd[1586]: time="2025-05-15T23:56:19.637883678Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 15 23:56:19.638096 containerd[1586]: time="2025-05-15T23:56:19.637976923Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 15 23:56:19.638096 containerd[1586]: time="2025-05-15T23:56:19.637999666Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 15 23:56:19.638096 containerd[1586]: time="2025-05-15T23:56:19.638020415Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 15 23:56:19.638096 containerd[1586]: time="2025-05-15T23:56:19.638041805Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 15 23:56:19.638326 containerd[1586]: time="2025-05-15T23:56:19.638287365Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 15 23:56:19.642888 containerd[1586]: time="2025-05-15T23:56:19.642281076Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 15 23:56:19.642888 containerd[1586]: time="2025-05-15T23:56:19.642550732Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 15 23:56:19.642888 containerd[1586]: time="2025-05-15T23:56:19.642578284Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 15 23:56:19.642888 containerd[1586]: time="2025-05-15T23:56:19.642597740Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 15 23:56:19.642888 containerd[1586]: time="2025-05-15T23:56:19.642616025Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 15 23:56:19.642888 containerd[1586]: time="2025-05-15T23:56:19.642634219Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 15 23:56:19.642888 containerd[1586]: time="2025-05-15T23:56:19.642651912Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 15 23:56:19.642888 containerd[1586]: time="2025-05-15T23:56:19.642670567Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 15 23:56:19.642888 containerd[1586]: time="2025-05-15T23:56:19.642689152Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 15 23:56:19.642888 containerd[1586]: time="2025-05-15T23:56:19.642705332Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 15 23:56:19.642888 containerd[1586]: time="2025-05-15T23:56:19.642721042Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 15 23:56:19.642888 containerd[1586]: time="2025-05-15T23:56:19.642735970Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 15 23:56:19.642888 containerd[1586]: time="2025-05-15T23:56:19.642761017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 15 23:56:19.642888 containerd[1586]: time="2025-05-15T23:56:19.642780173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 15 23:56:19.643229 containerd[1586]: time="2025-05-15T23:56:19.642794660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 15 23:56:19.643229 containerd[1586]: time="2025-05-15T23:56:19.642810730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..."
type=io.containerd.grpc.v1 May 15 23:56:19.643229 containerd[1586]: time="2025-05-15T23:56:19.642825277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 15 23:56:19.643229 containerd[1586]: time="2025-05-15T23:56:19.642842399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 15 23:56:19.643363 containerd[1586]: time="2025-05-15T23:56:19.643346665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 15 23:56:19.643430 containerd[1586]: time="2025-05-15T23:56:19.643418159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 15 23:56:19.643481 containerd[1586]: time="2025-05-15T23:56:19.643469656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 15 23:56:19.643583 containerd[1586]: time="2025-05-15T23:56:19.643565636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 15 23:56:19.643644 containerd[1586]: time="2025-05-15T23:56:19.643629646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 15 23:56:19.643722 containerd[1586]: time="2025-05-15T23:56:19.643698014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 15 23:56:19.643782 containerd[1586]: time="2025-05-15T23:56:19.643769969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 15 23:56:19.643831 containerd[1586]: time="2025-05-15T23:56:19.643820995Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 15 23:56:19.643913 containerd[1586]: time="2025-05-15T23:56:19.643897408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 May 15 23:56:19.643983 containerd[1586]: time="2025-05-15T23:56:19.643967670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 15 23:56:19.644063 containerd[1586]: time="2025-05-15T23:56:19.644034225Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 15 23:56:19.644197 containerd[1586]: time="2025-05-15T23:56:19.644097744Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 15 23:56:19.644197 containerd[1586]: time="2025-05-15T23:56:19.644118333Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 15 23:56:19.644197 containerd[1586]: time="2025-05-15T23:56:19.644128612Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 15 23:56:19.644197 containerd[1586]: time="2025-05-15T23:56:19.644139903Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 15 23:56:19.644197 containerd[1586]: time="2025-05-15T23:56:19.644150012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 15 23:56:19.644197 containerd[1586]: time="2025-05-15T23:56:19.644162726Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 15 23:56:19.644197 containerd[1586]: time="2025-05-15T23:56:19.644173797Z" level=info msg="NRI interface is disabled by configuration." May 15 23:56:19.644197 containerd[1586]: time="2025-05-15T23:56:19.644189717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 15 23:56:19.644524 containerd[1586]: time="2025-05-15T23:56:19.644469722Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 15 23:56:19.644524 containerd[1586]: time="2025-05-15T23:56:19.644517491Z" level=info msg="Connect containerd service" May 15 23:56:19.644724 containerd[1586]: time="2025-05-15T23:56:19.644558228Z" level=info msg="using legacy CRI server" May 15 23:56:19.644724 containerd[1586]: time="2025-05-15T23:56:19.644565391Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 23:56:19.644724 containerd[1586]: time="2025-05-15T23:56:19.644717096Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 15 23:56:19.645305 containerd[1586]: time="2025-05-15T23:56:19.645281775Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 23:56:19.645665 containerd[1586]: time="2025-05-15T23:56:19.645593019Z" level=info msg="Start subscribing containerd event" May 15 23:56:19.645665 containerd[1586]: time="2025-05-15T23:56:19.645637242Z" level=info msg="Start recovering state" May 15 23:56:19.645742 containerd[1586]: time="2025-05-15T23:56:19.645693788Z" level=info msg="Start event monitor" May 15 23:56:19.645742 containerd[1586]: time="2025-05-15T23:56:19.645712112Z" 
level=info msg="Start snapshots syncer" May 15 23:56:19.645742 containerd[1586]: time="2025-05-15T23:56:19.645721009Z" level=info msg="Start cni network conf syncer for default" May 15 23:56:19.645742 containerd[1586]: time="2025-05-15T23:56:19.645728433Z" level=info msg="Start streaming server" May 15 23:56:19.646123 containerd[1586]: time="2025-05-15T23:56:19.646091303Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 23:56:19.646274 containerd[1586]: time="2025-05-15T23:56:19.646164140Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 23:56:19.646966 containerd[1586]: time="2025-05-15T23:56:19.646905460Z" level=info msg="containerd successfully booted in 0.116632s" May 15 23:56:19.647755 systemd[1]: Started containerd.service - containerd container runtime. May 15 23:56:19.723519 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 23:56:19.730298 systemd[1]: Started sshd@0-10.0.0.111:22-10.0.0.1:52166.service - OpenSSH per-connection server daemon (10.0.0.1:52166). May 15 23:56:19.782174 tar[1584]: linux-amd64/LICENSE May 15 23:56:19.782174 tar[1584]: linux-amd64/README.md May 15 23:56:19.802562 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 23:56:19.818215 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 52166 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:56:19.821473 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:56:19.833575 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 23:56:19.844317 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 23:56:19.849289 systemd-logind[1570]: New session 1 of user core. May 15 23:56:19.860451 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
May 15 23:56:19.874229 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 15 23:56:19.881234 (systemd)[1670]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 15 23:56:20.026707 systemd[1670]: Queued start job for default target default.target.
May 15 23:56:20.027330 systemd[1670]: Created slice app.slice - User Application Slice.
May 15 23:56:20.027355 systemd[1670]: Reached target paths.target - Paths.
May 15 23:56:20.027371 systemd[1670]: Reached target timers.target - Timers.
May 15 23:56:20.040065 systemd[1670]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 15 23:56:20.050724 systemd[1670]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 15 23:56:20.050812 systemd[1670]: Reached target sockets.target - Sockets.
May 15 23:56:20.050830 systemd[1670]: Reached target basic.target - Basic System.
May 15 23:56:20.050906 systemd[1670]: Reached target default.target - Main User Target.
May 15 23:56:20.050951 systemd[1670]: Startup finished in 159ms.
May 15 23:56:20.051153 systemd[1]: Started user@500.service - User Manager for UID 500.
May 15 23:56:20.070473 systemd[1]: Started session-1.scope - Session 1 of User core.
May 15 23:56:20.132327 systemd[1]: Started sshd@1-10.0.0.111:22-10.0.0.1:52180.service - OpenSSH per-connection server daemon (10.0.0.1:52180).
May 15 23:56:20.185553 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 52180 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k
May 15 23:56:20.187795 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:56:20.198080 systemd-logind[1570]: New session 2 of user core.
May 15 23:56:20.206493 systemd[1]: Started session-2.scope - Session 2 of User core.
May 15 23:56:20.278417 sshd[1685]: Connection closed by 10.0.0.1 port 52180
May 15 23:56:20.278996 sshd-session[1682]: pam_unix(sshd:session): session closed for user core
May 15 23:56:20.289656 systemd[1]: Started sshd@2-10.0.0.111:22-10.0.0.1:52184.service - OpenSSH per-connection server daemon (10.0.0.1:52184).
May 15 23:56:20.292617 systemd[1]: sshd@1-10.0.0.111:22-10.0.0.1:52180.service: Deactivated successfully.
May 15 23:56:20.295046 systemd[1]: session-2.scope: Deactivated successfully.
May 15 23:56:20.295880 systemd-logind[1570]: Session 2 logged out. Waiting for processes to exit.
May 15 23:56:20.298488 systemd-logind[1570]: Removed session 2.
May 15 23:56:20.342563 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 52184 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k
May 15 23:56:20.344836 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:56:20.349833 systemd-logind[1570]: New session 3 of user core.
May 15 23:56:20.363348 systemd[1]: Started session-3.scope - Session 3 of User core.
May 15 23:56:20.425095 sshd[1693]: Connection closed by 10.0.0.1 port 52184
May 15 23:56:20.425729 sshd-session[1687]: pam_unix(sshd:session): session closed for user core
May 15 23:56:20.429382 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 23:56:20.431841 systemd[1]: sshd@2-10.0.0.111:22-10.0.0.1:52184.service: Deactivated successfully.
May 15 23:56:20.432652 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 23:56:20.436162 systemd[1]: session-3.scope: Deactivated successfully.
May 15 23:56:20.437297 systemd-logind[1570]: Session 3 logged out. Waiting for processes to exit.
May 15 23:56:20.437525 systemd[1]: Reached target multi-user.target - Multi-User System.
May 15 23:56:20.439951 systemd[1]: Startup finished in 9.974s (kernel) + 6.258s (userspace) = 16.233s.
May 15 23:56:20.440816 systemd-logind[1570]: Removed session 3.
May 15 23:56:21.391086 kubelet[1703]: E0515 23:56:21.390955 1703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 23:56:21.395726 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 23:56:21.396162 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 23:56:30.455477 systemd[1]: Started sshd@3-10.0.0.111:22-10.0.0.1:43326.service - OpenSSH per-connection server daemon (10.0.0.1:43326).
May 15 23:56:30.511489 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 43326 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k
May 15 23:56:30.514679 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:56:30.525457 systemd-logind[1570]: New session 4 of user core.
May 15 23:56:30.535668 systemd[1]: Started session-4.scope - Session 4 of User core.
May 15 23:56:30.611574 sshd[1722]: Connection closed by 10.0.0.1 port 43326
May 15 23:56:30.612042 sshd-session[1719]: pam_unix(sshd:session): session closed for user core
May 15 23:56:30.626588 systemd[1]: Started sshd@4-10.0.0.111:22-10.0.0.1:43330.service - OpenSSH per-connection server daemon (10.0.0.1:43330).
May 15 23:56:30.632980 systemd[1]: sshd@3-10.0.0.111:22-10.0.0.1:43326.service: Deactivated successfully.
May 15 23:56:30.636626 systemd[1]: session-4.scope: Deactivated successfully.
May 15 23:56:30.640322 systemd-logind[1570]: Session 4 logged out. Waiting for processes to exit.
May 15 23:56:30.646388 systemd-logind[1570]: Removed session 4.
May 15 23:56:30.679841 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 43330 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k
May 15 23:56:30.682731 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:56:30.699132 systemd-logind[1570]: New session 5 of user core.
May 15 23:56:30.718448 systemd[1]: Started session-5.scope - Session 5 of User core.
May 15 23:56:30.775745 sshd[1730]: Connection closed by 10.0.0.1 port 43330
May 15 23:56:30.775965 sshd-session[1724]: pam_unix(sshd:session): session closed for user core
May 15 23:56:30.787364 systemd[1]: Started sshd@5-10.0.0.111:22-10.0.0.1:43336.service - OpenSSH per-connection server daemon (10.0.0.1:43336).
May 15 23:56:30.788168 systemd[1]: sshd@4-10.0.0.111:22-10.0.0.1:43330.service: Deactivated successfully.
May 15 23:56:30.793237 systemd[1]: session-5.scope: Deactivated successfully.
May 15 23:56:30.795974 systemd-logind[1570]: Session 5 logged out. Waiting for processes to exit.
May 15 23:56:30.800038 systemd-logind[1570]: Removed session 5.
May 15 23:56:30.851883 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 43336 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k
May 15 23:56:30.854634 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:56:30.868733 systemd-logind[1570]: New session 6 of user core.
May 15 23:56:30.878128 systemd[1]: Started session-6.scope - Session 6 of User core.
May 15 23:56:30.960648 sshd[1738]: Connection closed by 10.0.0.1 port 43336
May 15 23:56:30.962397 sshd-session[1732]: pam_unix(sshd:session): session closed for user core
May 15 23:56:30.977562 systemd[1]: Started sshd@6-10.0.0.111:22-10.0.0.1:43348.service - OpenSSH per-connection server daemon (10.0.0.1:43348).
May 15 23:56:30.978288 systemd[1]: sshd@5-10.0.0.111:22-10.0.0.1:43336.service: Deactivated successfully.
May 15 23:56:30.995633 systemd[1]: session-6.scope: Deactivated successfully.
May 15 23:56:30.996962 systemd-logind[1570]: Session 6 logged out. Waiting for processes to exit.
May 15 23:56:31.004528 systemd-logind[1570]: Removed session 6.
May 15 23:56:31.068820 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 43348 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k
May 15 23:56:31.072419 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:56:31.106699 systemd-logind[1570]: New session 7 of user core.
May 15 23:56:31.111495 systemd[1]: Started session-7.scope - Session 7 of User core.
May 15 23:56:31.232494 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 15 23:56:31.236824 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 23:56:31.267436 sudo[1747]: pam_unix(sudo:session): session closed for user root
May 15 23:56:31.272939 sshd[1746]: Connection closed by 10.0.0.1 port 43348
May 15 23:56:31.274047 sshd-session[1740]: pam_unix(sshd:session): session closed for user core
May 15 23:56:31.297728 systemd[1]: Started sshd@7-10.0.0.111:22-10.0.0.1:43360.service - OpenSSH per-connection server daemon (10.0.0.1:43360).
May 15 23:56:31.310404 systemd[1]: sshd@6-10.0.0.111:22-10.0.0.1:43348.service: Deactivated successfully.
May 15 23:56:31.312764 systemd[1]: session-7.scope: Deactivated successfully.
May 15 23:56:31.314636 systemd-logind[1570]: Session 7 logged out. Waiting for processes to exit.
May 15 23:56:31.320157 systemd-logind[1570]: Removed session 7.
May 15 23:56:31.372167 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 43360 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k
May 15 23:56:31.376256 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:56:31.392980 systemd-logind[1570]: New session 8 of user core.
May 15 23:56:31.406131 systemd[1]: Started session-8.scope - Session 8 of User core.
May 15 23:56:31.407029 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 15 23:56:31.415288 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 23:56:31.488696 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 15 23:56:31.494248 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 23:56:31.505806 sudo[1761]: pam_unix(sudo:session): session closed for user root
May 15 23:56:31.519978 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 15 23:56:31.523794 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 23:56:31.619610 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 23:56:31.714768 augenrules[1783]: No rules
May 15 23:56:31.723530 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 23:56:31.724471 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 23:56:31.731904 sudo[1760]: pam_unix(sudo:session): session closed for user root
May 15 23:56:31.736884 sshd[1756]: Connection closed by 10.0.0.1 port 43360
May 15 23:56:31.737256 sshd-session[1749]: pam_unix(sshd:session): session closed for user core
May 15 23:56:31.744681 systemd[1]: sshd@7-10.0.0.111:22-10.0.0.1:43360.service: Deactivated successfully.
May 15 23:56:31.753068 systemd[1]: session-8.scope: Deactivated successfully.
May 15 23:56:31.761205 systemd-logind[1570]: Session 8 logged out. Waiting for processes to exit.
May 15 23:56:31.765320 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 23:56:31.769503 (kubelet)[1798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 23:56:31.772310 systemd[1]: Started sshd@8-10.0.0.111:22-10.0.0.1:43372.service - OpenSSH per-connection server daemon (10.0.0.1:43372).
May 15 23:56:31.773069 systemd-logind[1570]: Removed session 8.
May 15 23:56:31.852420 sshd[1801]: Accepted publickey for core from 10.0.0.1 port 43372 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k
May 15 23:56:31.858589 sshd-session[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:56:31.904169 systemd-logind[1570]: New session 9 of user core.
May 15 23:56:31.905549 systemd[1]: Started session-9.scope - Session 9 of User core.
May 15 23:56:31.972653 sudo[1810]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 15 23:56:31.981336 sudo[1810]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 23:56:32.141355 kubelet[1798]: E0515 23:56:32.141090 1798 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 23:56:32.169703 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 23:56:32.169996 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 23:56:33.431446 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 15 23:56:33.432271 (dockerd)[1835]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 15 23:56:35.204765 dockerd[1835]: time="2025-05-15T23:56:35.201399165Z" level=info msg="Starting up"
May 15 23:56:35.970485 dockerd[1835]: time="2025-05-15T23:56:35.970415159Z" level=info msg="Loading containers: start."
May 15 23:56:36.444800 kernel: Initializing XFRM netlink socket
May 15 23:56:36.769191 systemd-networkd[1252]: docker0: Link UP
May 15 23:56:36.891639 dockerd[1835]: time="2025-05-15T23:56:36.891316897Z" level=info msg="Loading containers: done."
May 15 23:56:36.950103 dockerd[1835]: time="2025-05-15T23:56:36.949989160Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 15 23:56:36.950341 dockerd[1835]: time="2025-05-15T23:56:36.950207359Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
May 15 23:56:36.950406 dockerd[1835]: time="2025-05-15T23:56:36.950379943Z" level=info msg="Daemon has completed initialization"
May 15 23:56:37.067952 dockerd[1835]: time="2025-05-15T23:56:37.066469704Z" level=info msg="API listen on /run/docker.sock"
May 15 23:56:37.070102 systemd[1]: Started docker.service - Docker Application Container Engine.
May 15 23:56:38.902512 containerd[1586]: time="2025-05-15T23:56:38.901901990Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\""
May 15 23:56:39.897203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1870652234.mount: Deactivated successfully.
May 15 23:56:42.329280 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 15 23:56:42.371496 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 23:56:42.703123 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 23:56:42.707261 (kubelet)[2098]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 23:56:42.843414 kubelet[2098]: E0515 23:56:42.842233 2098 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 23:56:42.847987 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 23:56:42.848335 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 23:56:44.644346 containerd[1586]: time="2025-05-15T23:56:44.642916913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:56:44.647189 containerd[1586]: time="2025-05-15T23:56:44.647085562Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=28078845"
May 15 23:56:44.649519 containerd[1586]: time="2025-05-15T23:56:44.649410483Z" level=info msg="ImageCreate event name:\"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:56:44.659695 containerd[1586]: time="2025-05-15T23:56:44.656565324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:56:44.659695 containerd[1586]: time="2025-05-15T23:56:44.659422814Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"28075645\" in 5.75745996s"
May 15 23:56:44.659695 containerd[1586]: time="2025-05-15T23:56:44.659483477Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\""
May 15 23:56:44.660541 containerd[1586]: time="2025-05-15T23:56:44.660505024Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\""
May 15 23:56:47.361975 containerd[1586]: time="2025-05-15T23:56:47.361882055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:56:47.363822 containerd[1586]: time="2025-05-15T23:56:47.363645462Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=24713522"
May 15 23:56:47.365704 containerd[1586]: time="2025-05-15T23:56:47.365563580Z" level=info msg="ImageCreate event name:\"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:56:47.369222 containerd[1586]: time="2025-05-15T23:56:47.369041935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:56:47.370373 containerd[1586]: time="2025-05-15T23:56:47.370241595Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"26315362\" in 2.709618019s"
May 15 23:56:47.370373 containerd[1586]: time="2025-05-15T23:56:47.370286679Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\""
May 15 23:56:47.371093 containerd[1586]: time="2025-05-15T23:56:47.370783852Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\""
May 15 23:56:49.848633 containerd[1586]: time="2025-05-15T23:56:49.848567856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:56:49.852808 containerd[1586]: time="2025-05-15T23:56:49.852726617Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=18784311"
May 15 23:56:49.855559 containerd[1586]: time="2025-05-15T23:56:49.855405001Z" level=info msg="ImageCreate event name:\"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:56:49.865808 containerd[1586]: time="2025-05-15T23:56:49.863904814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:56:49.869419 containerd[1586]: time="2025-05-15T23:56:49.868299027Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"20386169\" in 2.49747529s"
May 15 23:56:49.869419 containerd[1586]: time="2025-05-15T23:56:49.868362415Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\""
May 15 23:56:49.869419 containerd[1586]: time="2025-05-15T23:56:49.868984582Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\""
May 15 23:56:51.501297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1849186788.mount: Deactivated successfully.
May 15 23:56:52.722573 containerd[1586]: time="2025-05-15T23:56:52.722488696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:56:52.727466 containerd[1586]: time="2025-05-15T23:56:52.727334992Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355623"
May 15 23:56:52.731911 containerd[1586]: time="2025-05-15T23:56:52.731728994Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:56:52.737127 containerd[1586]: time="2025-05-15T23:56:52.736840899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:56:52.737869 containerd[1586]: time="2025-05-15T23:56:52.737625210Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 2.868611653s"
May 15 23:56:52.737869 containerd[1586]: time="2025-05-15T23:56:52.737679835Z" level=info msg="PullImage
\"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 15 23:56:52.738467 containerd[1586]: time="2025-05-15T23:56:52.738260185Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 15 23:56:53.074467 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 15 23:56:53.086425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:56:53.307642 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:56:53.313602 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:56:53.374238 kubelet[2136]: E0515 23:56:53.373996 2136 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:56:53.379093 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:56:53.379454 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:56:54.088412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3361202332.mount: Deactivated successfully. May 15 23:57:03.573790 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 15 23:57:03.587131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:57:03.969182 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 15 23:57:03.974195 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:57:04.009630 kubelet[2170]: E0515 23:57:04.009557 2170 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:57:04.014611 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:57:04.014910 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:57:04.235682 update_engine[1574]: I20250515 23:57:04.235421 1574 update_attempter.cc:509] Updating boot flags... May 15 23:57:04.867899 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2185) May 15 23:57:04.926893 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2188) May 15 23:57:04.965896 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2188) May 15 23:57:07.215336 containerd[1586]: time="2025-05-15T23:57:07.215257273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:57:07.262305 containerd[1586]: time="2025-05-15T23:57:07.262202085Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 15 23:57:07.317931 containerd[1586]: time="2025-05-15T23:57:07.317834898Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:57:07.374765 containerd[1586]: time="2025-05-15T23:57:07.374701766Z" 
level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:57:07.376150 containerd[1586]: time="2025-05-15T23:57:07.376122001Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 14.637825155s" May 15 23:57:07.376150 containerd[1586]: time="2025-05-15T23:57:07.376150595Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 15 23:57:07.376759 containerd[1586]: time="2025-05-15T23:57:07.376724720Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 23:57:08.940182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount348460381.mount: Deactivated successfully. 
May 15 23:57:09.004905 containerd[1586]: time="2025-05-15T23:57:09.004788815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:57:09.008502 containerd[1586]: time="2025-05-15T23:57:09.008420033Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 15 23:57:09.011881 containerd[1586]: time="2025-05-15T23:57:09.011778907Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:57:09.027942 containerd[1586]: time="2025-05-15T23:57:09.027790105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:57:09.029107 containerd[1586]: time="2025-05-15T23:57:09.028880534Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.652096953s" May 15 23:57:09.029107 containerd[1586]: time="2025-05-15T23:57:09.028931811Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 15 23:57:09.029575 containerd[1586]: time="2025-05-15T23:57:09.029517137Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 15 23:57:11.860968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2156641544.mount: Deactivated successfully. May 15 23:57:14.075490 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
May 15 23:57:14.091368 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:57:15.662154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:57:15.671951 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:57:15.799688 kubelet[2263]: E0515 23:57:15.799587 2263 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:57:15.803873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:57:15.804171 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:57:25.070737 containerd[1586]: time="2025-05-15T23:57:25.069700395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:57:25.086326 containerd[1586]: time="2025-05-15T23:57:25.086248711Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 15 23:57:25.091056 containerd[1586]: time="2025-05-15T23:57:25.090967142Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:57:25.106973 containerd[1586]: time="2025-05-15T23:57:25.106787199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:57:25.110229 containerd[1586]: time="2025-05-15T23:57:25.109310303Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 16.079750135s" May 15 23:57:25.110229 containerd[1586]: time="2025-05-15T23:57:25.109374543Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 15 23:57:25.824530 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. May 15 23:57:25.836240 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:57:26.054789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:57:26.079283 (kubelet)[2350]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:57:26.124461 kubelet[2350]: E0515 23:57:26.124374 2350 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:57:26.129014 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:57:26.129337 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:57:28.126019 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:57:28.145301 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:57:28.185061 systemd[1]: Reloading requested from client PID 2367 ('systemctl') (unit session-9.scope)... May 15 23:57:28.185083 systemd[1]: Reloading... 
May 15 23:57:28.287107 zram_generator::config[2406]: No configuration found. May 15 23:57:28.817791 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 23:57:28.928609 systemd[1]: Reloading finished in 743 ms. May 15 23:57:29.001617 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 15 23:57:29.001778 systemd[1]: kubelet.service: Failed with result 'signal'. May 15 23:57:29.002341 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:57:29.026138 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:57:29.388361 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:57:29.401568 (kubelet)[2465]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 23:57:29.511691 kubelet[2465]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 23:57:29.511691 kubelet[2465]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 23:57:29.514404 kubelet[2465]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 23:57:29.514404 kubelet[2465]: I0515 23:57:29.513030 2465 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 23:57:29.921201 kubelet[2465]: I0515 23:57:29.920940 2465 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 15 23:57:29.921201 kubelet[2465]: I0515 23:57:29.921051 2465 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 23:57:29.921414 kubelet[2465]: I0515 23:57:29.921323 2465 server.go:934] "Client rotation is on, will bootstrap in background" May 15 23:57:30.538494 kubelet[2465]: E0515 23:57:30.538441 2465 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.111:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" May 15 23:57:30.542104 kubelet[2465]: I0515 23:57:30.539519 2465 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 23:57:30.572078 kubelet[2465]: E0515 23:57:30.569836 2465 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 23:57:30.572078 kubelet[2465]: I0515 23:57:30.569908 2465 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 23:57:30.587987 kubelet[2465]: I0515 23:57:30.587938 2465 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 23:57:30.588542 kubelet[2465]: I0515 23:57:30.588402 2465 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 23:57:30.588599 kubelet[2465]: I0515 23:57:30.588556 2465 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 23:57:30.588935 kubelet[2465]: I0515 23:57:30.588595 2465 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} May 15 23:57:30.588935 kubelet[2465]: I0515 23:57:30.588899 2465 topology_manager.go:138] "Creating topology manager with none policy" May 15 23:57:30.588935 kubelet[2465]: I0515 23:57:30.588910 2465 container_manager_linux.go:300] "Creating device plugin manager" May 15 23:57:30.589171 kubelet[2465]: I0515 23:57:30.589099 2465 state_mem.go:36] "Initialized new in-memory state store" May 15 23:57:30.613123 kubelet[2465]: I0515 23:57:30.606695 2465 kubelet.go:408] "Attempting to sync node with API server" May 15 23:57:30.613123 kubelet[2465]: I0515 23:57:30.606795 2465 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 23:57:30.613123 kubelet[2465]: I0515 23:57:30.607435 2465 kubelet.go:314] "Adding apiserver pod source" May 15 23:57:30.613123 kubelet[2465]: I0515 23:57:30.607469 2465 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 23:57:30.616964 kubelet[2465]: I0515 23:57:30.615258 2465 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 15 23:57:30.616964 kubelet[2465]: I0515 23:57:30.615977 2465 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 23:57:30.616964 kubelet[2465]: W0515 23:57:30.616058 2465 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 15 23:57:30.622897 kubelet[2465]: W0515 23:57:30.622804 2465 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 15 23:57:30.622897 kubelet[2465]: E0515 23:57:30.622905 2465 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" May 15 23:57:30.624347 kubelet[2465]: I0515 23:57:30.624317 2465 server.go:1274] "Started kubelet" May 15 23:57:30.624808 kubelet[2465]: I0515 23:57:30.624771 2465 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 23:57:30.629037 kubelet[2465]: W0515 23:57:30.626009 2465 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 15 23:57:30.629037 kubelet[2465]: E0515 23:57:30.626103 2465 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" May 15 23:57:30.629037 kubelet[2465]: I0515 23:57:30.626324 2465 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 23:57:30.629037 kubelet[2465]: I0515 23:57:30.626716 2465 server.go:236] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 23:57:30.633995 kubelet[2465]: I0515 23:57:30.630400 2465 server.go:449] "Adding debug handlers to kubelet server" May 15 23:57:30.633995 kubelet[2465]: I0515 23:57:30.633673 2465 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 23:57:30.639627 kubelet[2465]: I0515 23:57:30.639457 2465 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 23:57:30.648124 kubelet[2465]: I0515 23:57:30.646282 2465 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 23:57:30.648124 kubelet[2465]: E0515 23:57:30.646618 2465 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:57:30.648124 kubelet[2465]: I0515 23:57:30.647321 2465 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 15 23:57:30.648124 kubelet[2465]: I0515 23:57:30.647396 2465 reconciler.go:26] "Reconciler: start to sync state" May 15 23:57:30.648124 kubelet[2465]: W0515 23:57:30.647775 2465 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 15 23:57:30.648124 kubelet[2465]: E0515 23:57:30.647831 2465 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" May 15 23:57:30.648124 kubelet[2465]: E0515 23:57:30.647913 2465 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="200ms" May 15 23:57:30.686981 kubelet[2465]: I0515 23:57:30.684943 2465 factory.go:221] Registration of the systemd container factory successfully May 15 23:57:30.686981 kubelet[2465]: I0515 23:57:30.685123 2465 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 23:57:30.689884 kubelet[2465]: E0515 23:57:30.689268 2465 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 23:57:30.692027 kubelet[2465]: I0515 23:57:30.691886 2465 factory.go:221] Registration of the containerd container factory successfully May 15 23:57:30.699986 kubelet[2465]: E0515 23:57:30.691039 2465 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.111:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.111:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fd8b49983262a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 23:57:30.624263722 +0000 UTC m=+1.217987184,LastTimestamp:2025-05-15 23:57:30.624263722 +0000 UTC m=+1.217987184,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 23:57:30.732674 kubelet[2465]: I0515 23:57:30.732336 2465 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 15 23:57:30.737409 kubelet[2465]: I0515 23:57:30.736595 2465 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 23:57:30.737409 kubelet[2465]: I0515 23:57:30.736636 2465 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 23:57:30.737409 kubelet[2465]: I0515 23:57:30.736662 2465 kubelet.go:2321] "Starting kubelet main sync loop" May 15 23:57:30.737409 kubelet[2465]: E0515 23:57:30.736717 2465 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 23:57:30.737409 kubelet[2465]: W0515 23:57:30.737302 2465 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 15 23:57:30.737409 kubelet[2465]: E0515 23:57:30.737341 2465 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" May 15 23:57:30.747205 kubelet[2465]: E0515 23:57:30.747150 2465 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:57:30.754710 kubelet[2465]: I0515 23:57:30.754680 2465 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 23:57:30.754990 kubelet[2465]: I0515 23:57:30.754974 2465 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 23:57:30.755084 kubelet[2465]: I0515 23:57:30.755073 2465 state_mem.go:36] "Initialized new in-memory state store" May 15 23:57:30.840598 kubelet[2465]: E0515 23:57:30.840079 2465 kubelet.go:2345] "Skipping pod synchronization" err="container 
runtime status check may not have completed yet" May 15 23:57:30.849677 kubelet[2465]: E0515 23:57:30.848756 2465 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="400ms" May 15 23:57:30.850026 kubelet[2465]: E0515 23:57:30.849106 2465 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:57:30.950209 kubelet[2465]: E0515 23:57:30.950078 2465 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:57:31.040961 kubelet[2465]: E0515 23:57:31.040831 2465 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 23:57:31.050774 kubelet[2465]: E0515 23:57:31.050657 2465 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:57:31.157640 kubelet[2465]: E0515 23:57:31.157299 2465 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:57:31.252096 kubelet[2465]: E0515 23:57:31.251098 2465 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="800ms" May 15 23:57:31.261234 kubelet[2465]: E0515 23:57:31.261123 2465 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:57:31.366040 kubelet[2465]: E0515 23:57:31.361605 2465 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:57:31.442983 kubelet[2465]: E0515 23:57:31.442030 2465 kubelet.go:2345] "Skipping 
pod synchronization" err="container runtime status check may not have completed yet" May 15 23:57:31.466677 kubelet[2465]: W0515 23:57:31.465582 2465 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 15 23:57:31.466677 kubelet[2465]: E0515 23:57:31.466379 2465 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" May 15 23:57:31.468054 kubelet[2465]: E0515 23:57:31.466103 2465 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:57:31.571523 kubelet[2465]: E0515 23:57:31.571386 2465 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:57:31.671683 kubelet[2465]: E0515 23:57:31.671551 2465 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:57:31.709569 kubelet[2465]: W0515 23:57:31.709357 2465 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 15 23:57:31.709569 kubelet[2465]: E0515 23:57:31.709454 2465 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: 
connection refused" logger="UnhandledError" May 15 23:57:31.771798 kubelet[2465]: E0515 23:57:31.771696 2465 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:57:31.872831 kubelet[2465]: E0515 23:57:31.872710 2465 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:57:31.899868 kubelet[2465]: W0515 23:57:31.899798 2465 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 15 23:57:31.900042 kubelet[2465]: E0515 23:57:31.899884 2465 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" May 15 23:57:31.973095 kubelet[2465]: E0515 23:57:31.972880 2465 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:57:32.052448 kubelet[2465]: E0515 23:57:32.052340 2465 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="1.6s" May 15 23:57:32.073990 kubelet[2465]: E0515 23:57:32.073883 2465 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:57:32.089698 kubelet[2465]: W0515 23:57:32.089592 2465 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 15 23:57:32.089698 kubelet[2465]: E0515 23:57:32.089699 2465 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" May 15 23:57:32.176973 kubelet[2465]: E0515 23:57:32.174269 2465 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:57:32.213919 kubelet[2465]: I0515 23:57:32.213673 2465 policy_none.go:49] "None policy: Start" May 15 23:57:32.215026 kubelet[2465]: I0515 23:57:32.214724 2465 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 23:57:32.215026 kubelet[2465]: I0515 23:57:32.214765 2465 state_mem.go:35] "Initializing new in-memory state store" May 15 23:57:32.242843 kubelet[2465]: E0515 23:57:32.242663 2465 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 23:57:32.275362 kubelet[2465]: E0515 23:57:32.275309 2465 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:57:32.357557 kubelet[2465]: I0515 23:57:32.357513 2465 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 23:57:32.357813 kubelet[2465]: I0515 23:57:32.357797 2465 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 23:57:32.357869 kubelet[2465]: I0515 23:57:32.357814 2465 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 23:57:32.358633 kubelet[2465]: I0515 23:57:32.358596 2465 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" May 15 23:57:32.359929 kubelet[2465]: E0515 23:57:32.359901 2465 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 23:57:32.460345 kubelet[2465]: I0515 23:57:32.460307 2465 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 23:57:32.460754 kubelet[2465]: E0515 23:57:32.460714 2465 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" May 15 23:57:32.662254 kubelet[2465]: E0515 23:57:32.662199 2465 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.111:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" May 15 23:57:32.662712 kubelet[2465]: I0515 23:57:32.662506 2465 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 23:57:32.662799 kubelet[2465]: E0515 23:57:32.662755 2465 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" May 15 23:57:33.064465 kubelet[2465]: I0515 23:57:33.064314 2465 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 23:57:33.064752 kubelet[2465]: E0515 23:57:33.064654 2465 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" May 15 23:57:33.653650 kubelet[2465]: E0515 23:57:33.653523 2465 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="3.2s" May 15 23:57:33.866729 kubelet[2465]: I0515 23:57:33.866350 2465 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 23:57:33.867279 kubelet[2465]: E0515 23:57:33.866897 2465 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" May 15 23:57:33.873125 kubelet[2465]: I0515 23:57:33.873058 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:57:33.873125 kubelet[2465]: I0515 23:57:33.873125 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80266a3f23566c1417df5038d67966b9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"80266a3f23566c1417df5038d67966b9\") " pod="kube-system/kube-apiserver-localhost" May 15 23:57:33.873336 kubelet[2465]: I0515 23:57:33.873158 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:57:33.873336 kubelet[2465]: I0515 23:57:33.873188 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:57:33.873336 kubelet[2465]: I0515 23:57:33.873212 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 15 23:57:33.873336 kubelet[2465]: I0515 23:57:33.873233 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80266a3f23566c1417df5038d67966b9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"80266a3f23566c1417df5038d67966b9\") " pod="kube-system/kube-apiserver-localhost" May 15 23:57:33.873336 kubelet[2465]: I0515 23:57:33.873256 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80266a3f23566c1417df5038d67966b9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"80266a3f23566c1417df5038d67966b9\") " pod="kube-system/kube-apiserver-localhost" May 15 23:57:33.873482 kubelet[2465]: I0515 23:57:33.873279 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:57:33.873482 kubelet[2465]: I0515 23:57:33.873304 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:57:34.151343 kubelet[2465]: E0515 23:57:34.151274 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:57:34.152254 containerd[1586]: time="2025-05-15T23:57:34.152186871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:80266a3f23566c1417df5038d67966b9,Namespace:kube-system,Attempt:0,}" May 15 23:57:34.153535 kubelet[2465]: E0515 23:57:34.153493 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:57:34.154070 containerd[1586]: time="2025-05-15T23:57:34.154035714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,}" May 15 23:57:34.157667 kubelet[2465]: E0515 23:57:34.157395 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:57:34.157917 containerd[1586]: time="2025-05-15T23:57:34.157744900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,}" May 15 23:57:34.196080 kubelet[2465]: W0515 23:57:34.195986 2465 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 15 23:57:34.196234 kubelet[2465]: E0515 
23:57:34.196100 2465 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" May 15 23:57:34.263389 kubelet[2465]: W0515 23:57:34.263191 2465 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 15 23:57:34.263389 kubelet[2465]: E0515 23:57:34.263299 2465 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" May 15 23:57:34.551682 kubelet[2465]: W0515 23:57:34.551500 2465 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 15 23:57:34.551682 kubelet[2465]: E0515 23:57:34.551583 2465 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" May 15 23:57:34.877957 kubelet[2465]: W0515 23:57:34.877828 2465 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 15 23:57:34.877957 kubelet[2465]: E0515 23:57:34.877963 2465 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" May 15 23:57:35.468980 kubelet[2465]: I0515 23:57:35.468941 2465 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 23:57:35.469429 kubelet[2465]: E0515 23:57:35.469387 2465 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" May 15 23:57:36.451376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1194145563.mount: Deactivated successfully. 
May 15 23:57:36.513217 containerd[1586]: time="2025-05-15T23:57:36.513127714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:57:36.531885 containerd[1586]: time="2025-05-15T23:57:36.531753062Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 15 23:57:36.550890 containerd[1586]: time="2025-05-15T23:57:36.550804861Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:57:36.555496 containerd[1586]: time="2025-05-15T23:57:36.555420880Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:57:36.568890 containerd[1586]: time="2025-05-15T23:57:36.568782312Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 23:57:36.579789 containerd[1586]: time="2025-05-15T23:57:36.579383130Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:57:36.589288 containerd[1586]: time="2025-05-15T23:57:36.589210866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:57:36.590525 containerd[1586]: time="2025-05-15T23:57:36.590466915Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.436346893s" May 15 23:57:36.597267 containerd[1586]: time="2025-05-15T23:57:36.597184861Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 23:57:36.613214 containerd[1586]: time="2025-05-15T23:57:36.613154402Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.460834732s" May 15 23:57:36.662562 containerd[1586]: time="2025-05-15T23:57:36.662461779Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.504641426s" May 15 23:57:36.854712 kubelet[2465]: E0515 23:57:36.854641 2465 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="6.4s" May 15 23:57:36.997773 kubelet[2465]: E0515 23:57:36.997688 2465 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.111:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.111:6443: connect: connection refused" 
logger="UnhandledError" May 15 23:57:37.355896 containerd[1586]: time="2025-05-15T23:57:37.355552437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:57:37.355896 containerd[1586]: time="2025-05-15T23:57:37.355642446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:57:37.355896 containerd[1586]: time="2025-05-15T23:57:37.355655009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:57:37.355896 containerd[1586]: time="2025-05-15T23:57:37.355767701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:57:37.359707 containerd[1586]: time="2025-05-15T23:57:37.359247145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:57:37.359707 containerd[1586]: time="2025-05-15T23:57:37.359284856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:57:37.359707 containerd[1586]: time="2025-05-15T23:57:37.359295666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:57:37.359707 containerd[1586]: time="2025-05-15T23:57:37.359368142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:57:37.545906 containerd[1586]: time="2025-05-15T23:57:37.545869238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:80266a3f23566c1417df5038d67966b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"14df9f6ffe7d7c5b9417e5e8105f393b494c9666a565dc0d05bae5615a937eb3\"" May 15 23:57:37.547286 kubelet[2465]: E0515 23:57:37.547260 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:57:37.548976 containerd[1586]: time="2025-05-15T23:57:37.548944953Z" level=info msg="CreateContainer within sandbox \"14df9f6ffe7d7c5b9417e5e8105f393b494c9666a565dc0d05bae5615a937eb3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 23:57:37.567426 containerd[1586]: time="2025-05-15T23:57:37.567367969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,} returns sandbox id \"fda5bcae324969d391d0bad26fc886ac0facf58fe67e96daef4dc171aef83e69\"" May 15 23:57:37.568362 kubelet[2465]: E0515 23:57:37.568328 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:57:37.571178 containerd[1586]: time="2025-05-15T23:57:37.570213061Z" level=info msg="CreateContainer within sandbox \"fda5bcae324969d391d0bad26fc886ac0facf58fe67e96daef4dc171aef83e69\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 23:57:37.578431 containerd[1586]: time="2025-05-15T23:57:37.578224816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:57:37.578504 containerd[1586]: time="2025-05-15T23:57:37.578437115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:57:37.578504 containerd[1586]: time="2025-05-15T23:57:37.578487950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:57:37.578663 containerd[1586]: time="2025-05-15T23:57:37.578616952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:57:37.641935 containerd[1586]: time="2025-05-15T23:57:37.641778969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1656034143950c4bdcc67a6b3f943c1306bb80c9f028bc79cc1513bf978a8b70\"" May 15 23:57:37.642475 kubelet[2465]: E0515 23:57:37.642396 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:57:37.644091 containerd[1586]: time="2025-05-15T23:57:37.644044013Z" level=info msg="CreateContainer within sandbox \"1656034143950c4bdcc67a6b3f943c1306bb80c9f028bc79cc1513bf978a8b70\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 23:57:37.876184 containerd[1586]: time="2025-05-15T23:57:37.876108308Z" level=info msg="CreateContainer within sandbox \"14df9f6ffe7d7c5b9417e5e8105f393b494c9666a565dc0d05bae5615a937eb3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f41657cd2f08ef7f806f400ab93ad0186437e2352efa5d2ea6cead89b0359367\"" May 15 23:57:37.877053 containerd[1586]: time="2025-05-15T23:57:37.877027965Z" level=info msg="StartContainer for 
\"f41657cd2f08ef7f806f400ab93ad0186437e2352efa5d2ea6cead89b0359367\"" May 15 23:57:38.029764 containerd[1586]: time="2025-05-15T23:57:38.029590938Z" level=info msg="StartContainer for \"f41657cd2f08ef7f806f400ab93ad0186437e2352efa5d2ea6cead89b0359367\" returns successfully" May 15 23:57:38.029764 containerd[1586]: time="2025-05-15T23:57:38.029678943Z" level=info msg="CreateContainer within sandbox \"fda5bcae324969d391d0bad26fc886ac0facf58fe67e96daef4dc171aef83e69\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f4273a1b9b05c5d8e2325fd03e1211fed4c65ddede62e8d235e6c70adc3fa9fa\"" May 15 23:57:38.029764 containerd[1586]: time="2025-05-15T23:57:38.029593182Z" level=info msg="CreateContainer within sandbox \"1656034143950c4bdcc67a6b3f943c1306bb80c9f028bc79cc1513bf978a8b70\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"45c3e0ad2a079efed0794f0eb954081c54e9638db27c1806193c8c3513a6645d\"" May 15 23:57:38.030588 containerd[1586]: time="2025-05-15T23:57:38.030555008Z" level=info msg="StartContainer for \"f4273a1b9b05c5d8e2325fd03e1211fed4c65ddede62e8d235e6c70adc3fa9fa\"" May 15 23:57:38.030691 containerd[1586]: time="2025-05-15T23:57:38.030570618Z" level=info msg="StartContainer for \"45c3e0ad2a079efed0794f0eb954081c54e9638db27c1806193c8c3513a6645d\"" May 15 23:57:38.224883 containerd[1586]: time="2025-05-15T23:57:38.222983369Z" level=info msg="StartContainer for \"f4273a1b9b05c5d8e2325fd03e1211fed4c65ddede62e8d235e6c70adc3fa9fa\" returns successfully" May 15 23:57:38.224883 containerd[1586]: time="2025-05-15T23:57:38.223117781Z" level=info msg="StartContainer for \"45c3e0ad2a079efed0794f0eb954081c54e9638db27c1806193c8c3513a6645d\" returns successfully" May 15 23:57:38.673063 kubelet[2465]: I0515 23:57:38.673009 2465 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 23:57:38.759460 kubelet[2465]: E0515 23:57:38.759409 2465 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:57:38.760892 kubelet[2465]: E0515 23:57:38.760753 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:57:38.765212 kubelet[2465]: E0515 23:57:38.765164 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:57:39.171607 kubelet[2465]: I0515 23:57:39.171523 2465 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 15 23:57:39.171607 kubelet[2465]: E0515 23:57:39.171588 2465 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 15 23:57:39.622008 kubelet[2465]: I0515 23:57:39.621946 2465 apiserver.go:52] "Watching apiserver" May 15 23:57:39.647915 kubelet[2465]: I0515 23:57:39.647875 2465 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 15 23:57:39.876798 kubelet[2465]: E0515 23:57:39.876650 2465 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 15 23:57:39.877326 kubelet[2465]: E0515 23:57:39.876925 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:57:39.877326 kubelet[2465]: E0515 23:57:39.876650 2465 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 15 
23:57:39.877326 kubelet[2465]: E0515 23:57:39.877073 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:39.877326 kubelet[2465]: E0515 23:57:39.876664 2465 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
May 15 23:57:39.877326 kubelet[2465]: E0515 23:57:39.877213 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:40.881758 kubelet[2465]: E0515 23:57:40.881702 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:41.767072 kubelet[2465]: E0515 23:57:41.767026 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:42.769597 kubelet[2465]: E0515 23:57:42.769531 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:43.413259 systemd[1]: Reloading requested from client PID 2744 ('systemctl') (unit session-9.scope)...
May 15 23:57:43.413287 systemd[1]: Reloading...
May 15 23:57:43.483907 zram_generator::config[2783]: No configuration found.
May 15 23:57:43.646662 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 23:57:43.704335 kubelet[2465]: E0515 23:57:43.704189 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:43.752328 systemd[1]: Reloading finished in 338 ms.
May 15 23:57:43.772511 kubelet[2465]: E0515 23:57:43.772017 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:43.795566 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 23:57:43.822282 systemd[1]: kubelet.service: Deactivated successfully.
May 15 23:57:43.822767 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 23:57:43.834469 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 23:57:44.023299 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 23:57:44.031042 (kubelet)[2837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 15 23:57:44.079516 kubelet[2837]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 23:57:44.079516 kubelet[2837]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 15 23:57:44.079516 kubelet[2837]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 23:57:44.080002 kubelet[2837]: I0515 23:57:44.079587 2837 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 23:57:44.088045 kubelet[2837]: I0515 23:57:44.087994 2837 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 15 23:57:44.088045 kubelet[2837]: I0515 23:57:44.088029 2837 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 23:57:44.088330 kubelet[2837]: I0515 23:57:44.088306 2837 server.go:934] "Client rotation is on, will bootstrap in background"
May 15 23:57:44.089589 kubelet[2837]: I0515 23:57:44.089558 2837 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 15 23:57:44.091274 kubelet[2837]: I0515 23:57:44.091245 2837 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 23:57:44.094834 kubelet[2837]: E0515 23:57:44.094793 2837 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 15 23:57:44.094834 kubelet[2837]: I0515 23:57:44.094823 2837 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 15 23:57:44.101924 kubelet[2837]: I0515 23:57:44.101808 2837 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 23:57:44.102472 kubelet[2837]: I0515 23:57:44.102449 2837 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 15 23:57:44.102624 kubelet[2837]: I0515 23:57:44.102594 2837 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 23:57:44.102831 kubelet[2837]: I0515 23:57:44.102622 2837 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
May 15 23:57:44.102958 kubelet[2837]: I0515 23:57:44.102837 2837 topology_manager.go:138] "Creating topology manager with none policy"
May 15 23:57:44.102958 kubelet[2837]: I0515 23:57:44.102867 2837 container_manager_linux.go:300] "Creating device plugin manager"
May 15 23:57:44.102958 kubelet[2837]: I0515 23:57:44.102897 2837 state_mem.go:36] "Initialized new in-memory state store"
May 15 23:57:44.103058 kubelet[2837]: I0515 23:57:44.103020 2837 kubelet.go:408] "Attempting to sync node with API server"
May 15 23:57:44.103058 kubelet[2837]: I0515 23:57:44.103034 2837 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 23:57:44.103110 kubelet[2837]: I0515 23:57:44.103076 2837 kubelet.go:314] "Adding apiserver pod source"
May 15 23:57:44.103110 kubelet[2837]: I0515 23:57:44.103089 2837 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 23:57:44.104672 kubelet[2837]: I0515 23:57:44.104635 2837 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 15 23:57:44.105126 kubelet[2837]: I0515 23:57:44.105106 2837 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 23:57:44.107457 kubelet[2837]: I0515 23:57:44.106509 2837 server.go:1274] "Started kubelet"
May 15 23:57:44.107457 kubelet[2837]: I0515 23:57:44.106990 2837 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 23:57:44.107457 kubelet[2837]: I0515 23:57:44.107327 2837 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 23:57:44.107457 kubelet[2837]: I0515 23:57:44.107404 2837 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 15 23:57:44.108731 kubelet[2837]: I0515 23:57:44.108385 2837 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 23:57:44.109384 kubelet[2837]: I0515 23:57:44.109346 2837 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 15 23:57:44.111780 kubelet[2837]: I0515 23:57:44.111762 2837 server.go:449] "Adding debug handlers to kubelet server"
May 15 23:57:44.116090 kubelet[2837]: I0515 23:57:44.116061 2837 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 15 23:57:44.116216 kubelet[2837]: I0515 23:57:44.116198 2837 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
May 15 23:57:44.116378 kubelet[2837]: I0515 23:57:44.116360 2837 reconciler.go:26] "Reconciler: start to sync state"
May 15 23:57:44.116989 kubelet[2837]: E0515 23:57:44.116958 2837 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 23:57:44.117120 kubelet[2837]: I0515 23:57:44.117096 2837 factory.go:221] Registration of the systemd container factory successfully
May 15 23:57:44.117257 kubelet[2837]: I0515 23:57:44.117233 2837 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 23:57:44.121895 kubelet[2837]: I0515 23:57:44.119924 2837 factory.go:221] Registration of the containerd container factory successfully
May 15 23:57:44.124353 kubelet[2837]: E0515 23:57:44.123088 2837 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 23:57:44.128208 kubelet[2837]: I0515 23:57:44.127982 2837 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 23:57:44.130001 kubelet[2837]: I0515 23:57:44.129970 2837 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 23:57:44.130001 kubelet[2837]: I0515 23:57:44.129996 2837 status_manager.go:217] "Starting to sync pod status with apiserver"
May 15 23:57:44.130115 kubelet[2837]: I0515 23:57:44.130023 2837 kubelet.go:2321] "Starting kubelet main sync loop"
May 15 23:57:44.130115 kubelet[2837]: E0515 23:57:44.130078 2837 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 23:57:44.182550 kubelet[2837]: I0515 23:57:44.182519 2837 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 15 23:57:44.182550 kubelet[2837]: I0515 23:57:44.182538 2837 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 15 23:57:44.182550 kubelet[2837]: I0515 23:57:44.182566 2837 state_mem.go:36] "Initialized new in-memory state store"
May 15 23:57:44.182764 kubelet[2837]: I0515 23:57:44.182743 2837 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 15 23:57:44.182797 kubelet[2837]: I0515 23:57:44.182754 2837 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 15 23:57:44.182797 kubelet[2837]: I0515 23:57:44.182772 2837 policy_none.go:49] "None policy: Start"
May 15 23:57:44.186095 kubelet[2837]: I0515 23:57:44.186074 2837 memory_manager.go:170] "Starting memorymanager" policy="None"
May 15 23:57:44.186095 kubelet[2837]: I0515 23:57:44.186101 2837 state_mem.go:35] "Initializing new in-memory state store"
May 15 23:57:44.186333 kubelet[2837]: I0515 23:57:44.186316 2837 state_mem.go:75] "Updated machine memory state"
May 15 23:57:44.189036 kubelet[2837]: I0515 23:57:44.188411 2837 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 15 23:57:44.189036 kubelet[2837]: I0515 23:57:44.188641 2837 eviction_manager.go:189] "Eviction manager: starting control loop"
May 15 23:57:44.189036 kubelet[2837]: I0515 23:57:44.188652 2837 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 23:57:44.189036 kubelet[2837]: I0515 23:57:44.189030 2837 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 23:57:44.271032 kubelet[2837]: E0515 23:57:44.270937 2837 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 15 23:57:44.271389 kubelet[2837]: E0515 23:57:44.271365 2837 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 15 23:57:44.294422 kubelet[2837]: I0515 23:57:44.294312 2837 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 15 23:57:44.337442 kubelet[2837]: I0515 23:57:44.337406 2837 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
May 15 23:57:44.337594 kubelet[2837]: I0515 23:57:44.337499 2837 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 15 23:57:44.417958 kubelet[2837]: I0515 23:57:44.417891 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80266a3f23566c1417df5038d67966b9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"80266a3f23566c1417df5038d67966b9\") " pod="kube-system/kube-apiserver-localhost"
May 15 23:57:44.417958 kubelet[2837]: I0515 23:57:44.417945 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 15 23:57:44.417958 kubelet[2837]: I0515 23:57:44.417975 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 15 23:57:44.418188 kubelet[2837]: I0515 23:57:44.417996 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80266a3f23566c1417df5038d67966b9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"80266a3f23566c1417df5038d67966b9\") " pod="kube-system/kube-apiserver-localhost"
May 15 23:57:44.418188 kubelet[2837]: I0515 23:57:44.418095 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 15 23:57:44.418188 kubelet[2837]: I0515 23:57:44.418150 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 15 23:57:44.418262 kubelet[2837]: I0515 23:57:44.418199 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 15 23:57:44.418328 kubelet[2837]: I0515 23:57:44.418286 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost"
May 15 23:57:44.418367 kubelet[2837]: I0515 23:57:44.418346 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80266a3f23566c1417df5038d67966b9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"80266a3f23566c1417df5038d67966b9\") " pod="kube-system/kube-apiserver-localhost"
May 15 23:57:44.509052 sudo[2874]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 15 23:57:44.509492 sudo[2874]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 15 23:57:44.571029 kubelet[2837]: E0515 23:57:44.570901 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:44.572130 kubelet[2837]: E0515 23:57:44.572092 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:44.572547 kubelet[2837]: E0515 23:57:44.572526 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:45.003421 sudo[2874]: pam_unix(sudo:session): session closed for user root
May 15 23:57:45.103704 kubelet[2837]: I0515 23:57:45.103663 2837 apiserver.go:52] "Watching apiserver"
May 15 23:57:45.117327 kubelet[2837]: I0515 23:57:45.117259 2837 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
May 15 23:57:45.149265 kubelet[2837]: E0515 23:57:45.148785 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:45.149409 kubelet[2837]: E0515 23:57:45.149340 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:45.296891 kubelet[2837]: E0515 23:57:45.296672 2837 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 15 23:57:45.296891 kubelet[2837]: E0515 23:57:45.296877 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:45.343952 kubelet[2837]: I0515 23:57:45.343866 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.34375554 podStartE2EDuration="5.34375554s" podCreationTimestamp="2025-05-15 23:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:57:45.296563462 +0000 UTC m=+1.260474759" watchObservedRunningTime="2025-05-15 23:57:45.34375554 +0000 UTC m=+1.307666827"
May 15 23:57:45.449129 kubelet[2837]: I0515 23:57:45.449021 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.448997566 podStartE2EDuration="2.448997566s" podCreationTimestamp="2025-05-15 23:57:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:57:45.344061884 +0000 UTC m=+1.307973171" watchObservedRunningTime="2025-05-15 23:57:45.448997566 +0000 UTC m=+1.412908854"
May 15 23:57:45.471289 kubelet[2837]: I0515 23:57:45.471186 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.47116286 podStartE2EDuration="1.47116286s" podCreationTimestamp="2025-05-15 23:57:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:57:45.449278634 +0000 UTC m=+1.413189921" watchObservedRunningTime="2025-05-15 23:57:45.47116286 +0000 UTC m=+1.435074177"
May 15 23:57:46.150978 kubelet[2837]: E0515 23:57:46.150920 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:47.695265 kubelet[2837]: E0515 23:57:47.695194 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:48.057253 sudo[1810]: pam_unix(sudo:session): session closed for user root
May 15 23:57:48.059326 sshd[1804]: Connection closed by 10.0.0.1 port 43372
May 15 23:57:48.061332 sshd-session[1801]: pam_unix(sshd:session): session closed for user core
May 15 23:57:48.066380 systemd[1]: sshd@8-10.0.0.111:22-10.0.0.1:43372.service: Deactivated successfully.
May 15 23:57:48.069624 systemd[1]: session-9.scope: Deactivated successfully.
May 15 23:57:48.069655 systemd-logind[1570]: Session 9 logged out. Waiting for processes to exit.
May 15 23:57:48.071565 systemd-logind[1570]: Removed session 9.
May 15 23:57:48.098610 kubelet[2837]: E0515 23:57:48.098574 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:48.154780 kubelet[2837]: E0515 23:57:48.154625 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:48.155603 kubelet[2837]: E0515 23:57:48.154493 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:48.281937 kubelet[2837]: I0515 23:57:48.281838 2837 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 15 23:57:48.282345 containerd[1586]: time="2025-05-15T23:57:48.282301567Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 15 23:57:48.282930 kubelet[2837]: I0515 23:57:48.282624 2837 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 15 23:57:49.347364 kubelet[2837]: I0515 23:57:49.347093 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d565abe5-e175-4a91-9731-17c69ea9a9a8-kube-proxy\") pod \"kube-proxy-p8zqg\" (UID: \"d565abe5-e175-4a91-9731-17c69ea9a9a8\") " pod="kube-system/kube-proxy-p8zqg"
May 15 23:57:49.347364 kubelet[2837]: I0515 23:57:49.347161 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-868vr\" (UniqueName: \"kubernetes.io/projected/d565abe5-e175-4a91-9731-17c69ea9a9a8-kube-api-access-868vr\") pod \"kube-proxy-p8zqg\" (UID: \"d565abe5-e175-4a91-9731-17c69ea9a9a8\") " pod="kube-system/kube-proxy-p8zqg"
May 15 23:57:49.347364 kubelet[2837]: I0515 23:57:49.347181 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ddf24b30-b9c0-4ac7-beaa-0760584a6072-hubble-tls\") pod \"cilium-wtzt7\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " pod="kube-system/cilium-wtzt7"
May 15 23:57:49.347364 kubelet[2837]: I0515 23:57:49.347200 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d565abe5-e175-4a91-9731-17c69ea9a9a8-lib-modules\") pod \"kube-proxy-p8zqg\" (UID: \"d565abe5-e175-4a91-9731-17c69ea9a9a8\") " pod="kube-system/kube-proxy-p8zqg"
May 15 23:57:49.347364 kubelet[2837]: I0515 23:57:49.347214 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-hostproc\") pod \"cilium-wtzt7\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " pod="kube-system/cilium-wtzt7"
May 15 23:57:49.347364 kubelet[2837]: I0515 23:57:49.347227 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ddf24b30-b9c0-4ac7-beaa-0760584a6072-clustermesh-secrets\") pod \"cilium-wtzt7\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " pod="kube-system/cilium-wtzt7"
May 15 23:57:49.349240 kubelet[2837]: I0515 23:57:49.347241 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-host-proc-sys-net\") pod \"cilium-wtzt7\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " pod="kube-system/cilium-wtzt7"
May 15 23:57:49.349240 kubelet[2837]: I0515 23:57:49.347281 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-lib-modules\") pod \"cilium-wtzt7\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " pod="kube-system/cilium-wtzt7"
May 15 23:57:49.349240 kubelet[2837]: I0515 23:57:49.347314 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-xtables-lock\") pod \"cilium-wtzt7\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " pod="kube-system/cilium-wtzt7"
May 15 23:57:49.349240 kubelet[2837]: I0515 23:57:49.347329 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d565abe5-e175-4a91-9731-17c69ea9a9a8-xtables-lock\") pod \"kube-proxy-p8zqg\" (UID: \"d565abe5-e175-4a91-9731-17c69ea9a9a8\") " pod="kube-system/kube-proxy-p8zqg"
May 15 23:57:49.349240 kubelet[2837]: I0515 23:57:49.347342 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-bpf-maps\") pod \"cilium-wtzt7\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " pod="kube-system/cilium-wtzt7"
May 15 23:57:49.349240 kubelet[2837]: I0515 23:57:49.347358 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ddf24b30-b9c0-4ac7-beaa-0760584a6072-cilium-config-path\") pod \"cilium-wtzt7\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " pod="kube-system/cilium-wtzt7"
May 15 23:57:49.349388 kubelet[2837]: I0515 23:57:49.347372 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-cilium-run\") pod \"cilium-wtzt7\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " pod="kube-system/cilium-wtzt7"
May 15 23:57:49.349388 kubelet[2837]: I0515 23:57:49.347385 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-cilium-cgroup\") pod \"cilium-wtzt7\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " pod="kube-system/cilium-wtzt7"
May 15 23:57:49.349388 kubelet[2837]: I0515 23:57:49.347399 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-cni-path\") pod \"cilium-wtzt7\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " pod="kube-system/cilium-wtzt7"
May 15 23:57:49.349388 kubelet[2837]: I0515 23:57:49.347413 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-host-proc-sys-kernel\") pod \"cilium-wtzt7\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " pod="kube-system/cilium-wtzt7"
May 15 23:57:49.349388 kubelet[2837]: I0515 23:57:49.347426 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-etc-cni-netd\") pod \"cilium-wtzt7\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " pod="kube-system/cilium-wtzt7"
May 15 23:57:49.349388 kubelet[2837]: I0515 23:57:49.347453 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztvch\" (UniqueName: \"kubernetes.io/projected/ddf24b30-b9c0-4ac7-beaa-0760584a6072-kube-api-access-ztvch\") pod \"cilium-wtzt7\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " pod="kube-system/cilium-wtzt7"
May 15 23:57:49.448985 kubelet[2837]: I0515 23:57:49.447888 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj2vh\" (UniqueName: \"kubernetes.io/projected/982621c9-3a35-4097-ae32-aed0ffd937f2-kube-api-access-rj2vh\") pod \"cilium-operator-5d85765b45-gj4h9\" (UID: \"982621c9-3a35-4097-ae32-aed0ffd937f2\") " pod="kube-system/cilium-operator-5d85765b45-gj4h9"
May 15 23:57:49.448985 kubelet[2837]: I0515 23:57:49.447984 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/982621c9-3a35-4097-ae32-aed0ffd937f2-cilium-config-path\") pod \"cilium-operator-5d85765b45-gj4h9\" (UID: \"982621c9-3a35-4097-ae32-aed0ffd937f2\") " pod="kube-system/cilium-operator-5d85765b45-gj4h9"
May 15 23:57:49.591986 kubelet[2837]: E0515 23:57:49.591915 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:49.592815 containerd[1586]: time="2025-05-15T23:57:49.592598589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p8zqg,Uid:d565abe5-e175-4a91-9731-17c69ea9a9a8,Namespace:kube-system,Attempt:0,}"
May 15 23:57:49.602604 kubelet[2837]: E0515 23:57:49.602472 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:49.604496 containerd[1586]: time="2025-05-15T23:57:49.602977106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wtzt7,Uid:ddf24b30-b9c0-4ac7-beaa-0760584a6072,Namespace:kube-system,Attempt:0,}"
May 15 23:57:49.630523 containerd[1586]: time="2025-05-15T23:57:49.630416912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 23:57:49.630630 containerd[1586]: time="2025-05-15T23:57:49.630519164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 23:57:49.630630 containerd[1586]: time="2025-05-15T23:57:49.630542849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:57:49.630715 containerd[1586]: time="2025-05-15T23:57:49.630656592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:57:49.641195 containerd[1586]: time="2025-05-15T23:57:49.640997208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 23:57:49.641195 containerd[1586]: time="2025-05-15T23:57:49.641091154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 23:57:49.641195 containerd[1586]: time="2025-05-15T23:57:49.641104730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:57:49.641509 containerd[1586]: time="2025-05-15T23:57:49.641224374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:57:49.654070 kubelet[2837]: E0515 23:57:49.654026 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:49.655476 containerd[1586]: time="2025-05-15T23:57:49.655071391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-gj4h9,Uid:982621c9-3a35-4097-ae32-aed0ffd937f2,Namespace:kube-system,Attempt:0,}"
May 15 23:57:49.684676 containerd[1586]: time="2025-05-15T23:57:49.684631236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p8zqg,Uid:d565abe5-e175-4a91-9731-17c69ea9a9a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d594d540743a6ae84a2202f0a13ae69a7d9c5fa664fec1328ac54d85dfc2623\""
May 15 23:57:49.686102 kubelet[2837]: E0515 23:57:49.686073 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:49.693122 containerd[1586]: time="2025-05-15T23:57:49.693028505Z" level=info msg="CreateContainer within sandbox \"6d594d540743a6ae84a2202f0a13ae69a7d9c5fa664fec1328ac54d85dfc2623\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 15 23:57:49.694055 containerd[1586]: time="2025-05-15T23:57:49.694023442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wtzt7,Uid:ddf24b30-b9c0-4ac7-beaa-0760584a6072,Namespace:kube-system,Attempt:0,} returns sandbox id \"60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f\""
May 15 23:57:49.694794 kubelet[2837]: E0515 23:57:49.694744 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:49.695910 containerd[1586]: time="2025-05-15T23:57:49.695845712Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 15 23:57:49.698926 containerd[1586]: time="2025-05-15T23:57:49.698682655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 23:57:49.698926 containerd[1586]: time="2025-05-15T23:57:49.698795097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 23:57:49.698926 containerd[1586]: time="2025-05-15T23:57:49.698824222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:57:49.700172 containerd[1586]: time="2025-05-15T23:57:49.700038530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:57:49.718723 containerd[1586]: time="2025-05-15T23:57:49.718653354Z" level=info msg="CreateContainer within sandbox \"6d594d540743a6ae84a2202f0a13ae69a7d9c5fa664fec1328ac54d85dfc2623\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"102069ccfb56760aa9ea450e444e381282328ccc9d2e40770f70ea636f5dce25\""
May 15 23:57:49.719595 containerd[1586]: time="2025-05-15T23:57:49.719342137Z" level=info msg="StartContainer for \"102069ccfb56760aa9ea450e444e381282328ccc9d2e40770f70ea636f5dce25\""
May 15 23:57:49.780792 containerd[1586]: time="2025-05-15T23:57:49.780714854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-gj4h9,Uid:982621c9-3a35-4097-ae32-aed0ffd937f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1c62c579c567f46664e03a7ac29f8cdb9ed4a35be285c330fb54988ee5e729f\""
May 15 23:57:49.781444 kubelet[2837]: E0515 23:57:49.781416 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:49.803180 containerd[1586]: time="2025-05-15T23:57:49.803126424Z" level=info msg="StartContainer for \"102069ccfb56760aa9ea450e444e381282328ccc9d2e40770f70ea636f5dce25\" returns successfully"
May 15 23:57:50.162079 kubelet[2837]: E0515 23:57:50.162050 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:50.171292 kubelet[2837]: I0515 23:57:50.171213 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p8zqg" podStartSLOduration=1.171193339 podStartE2EDuration="1.171193339s" podCreationTimestamp="2025-05-15 23:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-05-15 23:57:50.1711352 +0000 UTC m=+6.135046517" watchObservedRunningTime="2025-05-15 23:57:50.171193339 +0000 UTC m=+6.135104616" May 15 23:57:51.981577 kubelet[2837]: E0515 23:57:51.981525 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:57:52.168664 kubelet[2837]: E0515 23:57:52.168625 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:57:54.065781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2374352156.mount: Deactivated successfully. May 15 23:57:55.819101 systemd-resolved[1471]: Under memory pressure, flushing caches. May 15 23:57:55.820881 systemd-journald[1166]: Under memory pressure, flushing caches. May 15 23:57:55.819149 systemd-resolved[1471]: Flushed all caches. May 15 23:57:57.812391 containerd[1586]: time="2025-05-15T23:57:57.812302889Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:57:57.813712 containerd[1586]: time="2025-05-15T23:57:57.813666673Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 15 23:57:57.816483 containerd[1586]: time="2025-05-15T23:57:57.816371247Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:57:57.818474 containerd[1586]: time="2025-05-15T23:57:57.818424137Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with 
image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.122529372s" May 15 23:57:57.818474 containerd[1586]: time="2025-05-15T23:57:57.818468824Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 15 23:57:57.824265 containerd[1586]: time="2025-05-15T23:57:57.824220495Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 23:57:57.833397 containerd[1586]: time="2025-05-15T23:57:57.833340156Z" level=info msg="CreateContainer within sandbox \"60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 23:57:57.856770 containerd[1586]: time="2025-05-15T23:57:57.856699386Z" level=info msg="CreateContainer within sandbox \"60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3d5144d35451b904a3281c4cf22f70bfc099f771d0211a273f912c1b95e45848\"" May 15 23:57:57.857362 containerd[1586]: time="2025-05-15T23:57:57.857328806Z" level=info msg="StartContainer for \"3d5144d35451b904a3281c4cf22f70bfc099f771d0211a273f912c1b95e45848\"" May 15 23:57:57.866983 systemd-resolved[1471]: Under memory pressure, flushing caches. May 15 23:57:57.867015 systemd-resolved[1471]: Flushed all caches. May 15 23:57:57.868882 systemd-journald[1166]: Under memory pressure, flushing caches. 
May 15 23:57:57.931788 containerd[1586]: time="2025-05-15T23:57:57.931746709Z" level=info msg="StartContainer for \"3d5144d35451b904a3281c4cf22f70bfc099f771d0211a273f912c1b95e45848\" returns successfully"
May 15 23:57:58.558796 kubelet[2837]: E0515 23:57:58.558368 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:58.588986 containerd[1586]: time="2025-05-15T23:57:58.588898211Z" level=info msg="shim disconnected" id=3d5144d35451b904a3281c4cf22f70bfc099f771d0211a273f912c1b95e45848 namespace=k8s.io
May 15 23:57:58.588986 containerd[1586]: time="2025-05-15T23:57:58.588961914Z" level=warning msg="cleaning up after shim disconnected" id=3d5144d35451b904a3281c4cf22f70bfc099f771d0211a273f912c1b95e45848 namespace=k8s.io
May 15 23:57:58.588986 containerd[1586]: time="2025-05-15T23:57:58.588972986Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 23:57:58.847612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d5144d35451b904a3281c4cf22f70bfc099f771d0211a273f912c1b95e45848-rootfs.mount: Deactivated successfully.
May 15 23:57:59.549444 kubelet[2837]: E0515 23:57:59.549249 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:57:59.554193 containerd[1586]: time="2025-05-15T23:57:59.554123576Z" level=info msg="CreateContainer within sandbox \"60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 15 23:57:59.589657 containerd[1586]: time="2025-05-15T23:57:59.589590829Z" level=info msg="CreateContainer within sandbox \"60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"80b04f6f55fd3e1f451ff521d46c99d881fa88d41ebe1ba0c2083651ad0704e6\""
May 15 23:57:59.590216 containerd[1586]: time="2025-05-15T23:57:59.590162235Z" level=info msg="StartContainer for \"80b04f6f55fd3e1f451ff521d46c99d881fa88d41ebe1ba0c2083651ad0704e6\""
May 15 23:57:59.656429 containerd[1586]: time="2025-05-15T23:57:59.656362468Z" level=info msg="StartContainer for \"80b04f6f55fd3e1f451ff521d46c99d881fa88d41ebe1ba0c2083651ad0704e6\" returns successfully"
May 15 23:57:59.670735 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 23:57:59.671337 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 15 23:57:59.671440 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 15 23:57:59.683252 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 23:57:59.730724 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 23:57:59.735878 containerd[1586]: time="2025-05-15T23:57:59.735782947Z" level=info msg="shim disconnected" id=80b04f6f55fd3e1f451ff521d46c99d881fa88d41ebe1ba0c2083651ad0704e6 namespace=k8s.io
May 15 23:57:59.736070 containerd[1586]: time="2025-05-15T23:57:59.735885575Z" level=warning msg="cleaning up after shim disconnected" id=80b04f6f55fd3e1f451ff521d46c99d881fa88d41ebe1ba0c2083651ad0704e6 namespace=k8s.io
May 15 23:57:59.736070 containerd[1586]: time="2025-05-15T23:57:59.735902528Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 23:57:59.848623 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80b04f6f55fd3e1f451ff521d46c99d881fa88d41ebe1ba0c2083651ad0704e6-rootfs.mount: Deactivated successfully.
May 15 23:58:00.552568 kubelet[2837]: E0515 23:58:00.552497 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:58:00.554333 containerd[1586]: time="2025-05-15T23:58:00.554294552Z" level=info msg="CreateContainer within sandbox \"60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 15 23:58:00.742523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3695485084.mount: Deactivated successfully.
May 15 23:58:01.140525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4044781222.mount: Deactivated successfully.
May 15 23:58:01.153664 containerd[1586]: time="2025-05-15T23:58:01.153614229Z" level=info msg="CreateContainer within sandbox \"60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7d80b421712e26e2c292ce8c8cf559afcc16d286794442775a5e56a1dd0ced05\""
May 15 23:58:01.154389 containerd[1586]: time="2025-05-15T23:58:01.154349550Z" level=info msg="StartContainer for \"7d80b421712e26e2c292ce8c8cf559afcc16d286794442775a5e56a1dd0ced05\""
May 15 23:58:01.242414 containerd[1586]: time="2025-05-15T23:58:01.239669923Z" level=info msg="StartContainer for \"7d80b421712e26e2c292ce8c8cf559afcc16d286794442775a5e56a1dd0ced05\" returns successfully"
May 15 23:58:01.271070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d80b421712e26e2c292ce8c8cf559afcc16d286794442775a5e56a1dd0ced05-rootfs.mount: Deactivated successfully.
May 15 23:58:01.554950 kubelet[2837]: E0515 23:58:01.554808 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:58:01.608266 containerd[1586]: time="2025-05-15T23:58:01.608188086Z" level=info msg="shim disconnected" id=7d80b421712e26e2c292ce8c8cf559afcc16d286794442775a5e56a1dd0ced05 namespace=k8s.io
May 15 23:58:01.608266 containerd[1586]: time="2025-05-15T23:58:01.608245557Z" level=warning msg="cleaning up after shim disconnected" id=7d80b421712e26e2c292ce8c8cf559afcc16d286794442775a5e56a1dd0ced05 namespace=k8s.io
May 15 23:58:01.608266 containerd[1586]: time="2025-05-15T23:58:01.608253573Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 23:58:01.785496 containerd[1586]: time="2025-05-15T23:58:01.785428973Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:58:01.786633 containerd[1586]: time="2025-05-15T23:58:01.786570449Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 15 23:58:01.787978 containerd[1586]: time="2025-05-15T23:58:01.787947129Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:58:01.789472 containerd[1586]: time="2025-05-15T23:58:01.789426537Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.965165433s"
May 15 23:58:01.789472 containerd[1586]: time="2025-05-15T23:58:01.789466184Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 15 23:58:01.791373 containerd[1586]: time="2025-05-15T23:58:01.791334543Z" level=info msg="CreateContainer within sandbox \"f1c62c579c567f46664e03a7ac29f8cdb9ed4a35be285c330fb54988ee5e729f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 15 23:58:01.804733 containerd[1586]: time="2025-05-15T23:58:01.804692444Z" level=info msg="CreateContainer within sandbox \"f1c62c579c567f46664e03a7ac29f8cdb9ed4a35be285c330fb54988ee5e729f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0853cc1d3ce2ca02c35c7dc32a7eb6331e3b7ab00680d111147cf2ffa4455ed8\""
May 15 23:58:01.805200 containerd[1586]: time="2025-05-15T23:58:01.805128916Z" level=info msg="StartContainer for \"0853cc1d3ce2ca02c35c7dc32a7eb6331e3b7ab00680d111147cf2ffa4455ed8\""
May 15 23:58:01.865673 containerd[1586]: time="2025-05-15T23:58:01.865542045Z" level=info msg="StartContainer for \"0853cc1d3ce2ca02c35c7dc32a7eb6331e3b7ab00680d111147cf2ffa4455ed8\" returns successfully"
May 15 23:58:02.558587 kubelet[2837]: E0515 23:58:02.558530 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:58:02.563721 kubelet[2837]: E0515 23:58:02.563690 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:58:02.566136 containerd[1586]: time="2025-05-15T23:58:02.566095442Z" level=info msg="CreateContainer within sandbox \"60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 15 23:58:02.593263 containerd[1586]: time="2025-05-15T23:58:02.593211012Z" level=info msg="CreateContainer within sandbox \"60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a0529a1575680e725dffe41d06b395f1a5ec56e8b0323d567219f0a1e0dad017\""
May 15 23:58:02.598886 containerd[1586]: time="2025-05-15T23:58:02.597928549Z" level=info msg="StartContainer for \"a0529a1575680e725dffe41d06b395f1a5ec56e8b0323d567219f0a1e0dad017\""
May 15 23:58:02.609796 kubelet[2837]: I0515 23:58:02.609742 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-gj4h9" podStartSLOduration=1.601544751 podStartE2EDuration="13.60972314s" podCreationTimestamp="2025-05-15 23:57:49 +0000 UTC" firstStartedPulling="2025-05-15 23:57:49.781909115 +0000 UTC m=+5.745820402" lastFinishedPulling="2025-05-15 23:58:01.790087504 +0000 UTC m=+17.753998791" observedRunningTime="2025-05-15 23:58:02.577603089 +0000 UTC m=+18.541514396" watchObservedRunningTime="2025-05-15 23:58:02.60972314 +0000 UTC m=+18.573634437"
May 15 23:58:02.675392 containerd[1586]: time="2025-05-15T23:58:02.675351343Z" level=info msg="StartContainer for \"a0529a1575680e725dffe41d06b395f1a5ec56e8b0323d567219f0a1e0dad017\" returns successfully"
May 15 23:58:02.699550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0529a1575680e725dffe41d06b395f1a5ec56e8b0323d567219f0a1e0dad017-rootfs.mount: Deactivated successfully.
May 15 23:58:03.021351 containerd[1586]: time="2025-05-15T23:58:03.021285690Z" level=info msg="shim disconnected" id=a0529a1575680e725dffe41d06b395f1a5ec56e8b0323d567219f0a1e0dad017 namespace=k8s.io
May 15 23:58:03.021351 containerd[1586]: time="2025-05-15T23:58:03.021345184Z" level=warning msg="cleaning up after shim disconnected" id=a0529a1575680e725dffe41d06b395f1a5ec56e8b0323d567219f0a1e0dad017 namespace=k8s.io
May 15 23:58:03.021351 containerd[1586]: time="2025-05-15T23:58:03.021353630Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 23:58:03.598578 kubelet[2837]: E0515 23:58:03.598482 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:58:03.598578 kubelet[2837]: E0515 23:58:03.598551 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:58:03.600915 containerd[1586]: time="2025-05-15T23:58:03.600729289Z" level=info msg="CreateContainer within sandbox \"60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 15 23:58:03.776951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount786451150.mount: Deactivated successfully.
May 15 23:58:03.846480 containerd[1586]: time="2025-05-15T23:58:03.846404537Z" level=info msg="CreateContainer within sandbox \"60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3e311907d0d07974a127a746ec63b9e536a787aaccc86bb3c1af619152ef89c9\""
May 15 23:58:03.847083 containerd[1586]: time="2025-05-15T23:58:03.847037779Z" level=info msg="StartContainer for \"3e311907d0d07974a127a746ec63b9e536a787aaccc86bb3c1af619152ef89c9\""
May 15 23:58:03.939343 containerd[1586]: time="2025-05-15T23:58:03.938928567Z" level=info msg="StartContainer for \"3e311907d0d07974a127a746ec63b9e536a787aaccc86bb3c1af619152ef89c9\" returns successfully"
May 15 23:58:04.163102 kubelet[2837]: I0515 23:58:04.162951 2837 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 15 23:58:04.401912 kubelet[2837]: I0515 23:58:04.401838 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r9kd\" (UniqueName: \"kubernetes.io/projected/3e75f841-c46f-4831-8f2c-d346d23f52ee-kube-api-access-4r9kd\") pod \"coredns-7c65d6cfc9-n99d9\" (UID: \"3e75f841-c46f-4831-8f2c-d346d23f52ee\") " pod="kube-system/coredns-7c65d6cfc9-n99d9"
May 15 23:58:04.401912 kubelet[2837]: I0515 23:58:04.401917 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e75f841-c46f-4831-8f2c-d346d23f52ee-config-volume\") pod \"coredns-7c65d6cfc9-n99d9\" (UID: \"3e75f841-c46f-4831-8f2c-d346d23f52ee\") " pod="kube-system/coredns-7c65d6cfc9-n99d9"
May 15 23:58:04.502819 kubelet[2837]: I0515 23:58:04.502718 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80115612-719f-483b-8bf5-50a100712878-config-volume\") pod \"coredns-7c65d6cfc9-7jrlc\" (UID: \"80115612-719f-483b-8bf5-50a100712878\") " pod="kube-system/coredns-7c65d6cfc9-7jrlc"
May 15 23:58:04.502819 kubelet[2837]: I0515 23:58:04.502797 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhccm\" (UniqueName: \"kubernetes.io/projected/80115612-719f-483b-8bf5-50a100712878-kube-api-access-mhccm\") pod \"coredns-7c65d6cfc9-7jrlc\" (UID: \"80115612-719f-483b-8bf5-50a100712878\") " pod="kube-system/coredns-7c65d6cfc9-7jrlc"
May 15 23:58:04.581340 kubelet[2837]: E0515 23:58:04.581189 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:58:04.582579 containerd[1586]: time="2025-05-15T23:58:04.582204038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-n99d9,Uid:3e75f841-c46f-4831-8f2c-d346d23f52ee,Namespace:kube-system,Attempt:0,}"
May 15 23:58:04.596559 kubelet[2837]: E0515 23:58:04.596521 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:58:04.621274 kubelet[2837]: E0515 23:58:04.620526 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:58:04.621929 containerd[1586]: time="2025-05-15T23:58:04.621765410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7jrlc,Uid:80115612-719f-483b-8bf5-50a100712878,Namespace:kube-system,Attempt:0,}"
May 15 23:58:05.598428 kubelet[2837]: E0515 23:58:05.598386 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:58:06.197259 systemd-networkd[1252]: cilium_host: Link UP
May 15 23:58:06.197471 systemd-networkd[1252]: cilium_net: Link UP
May 15 23:58:06.197475 systemd-networkd[1252]: cilium_net: Gained carrier
May 15 23:58:06.197713 systemd-networkd[1252]: cilium_host: Gained carrier
May 15 23:58:06.198038 systemd-networkd[1252]: cilium_host: Gained IPv6LL
May 15 23:58:06.332073 systemd-networkd[1252]: cilium_vxlan: Link UP
May 15 23:58:06.332090 systemd-networkd[1252]: cilium_vxlan: Gained carrier
May 15 23:58:06.590893 kernel: NET: Registered PF_ALG protocol family
May 15 23:58:06.600461 kubelet[2837]: E0515 23:58:06.600276 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:58:06.610953 systemd-networkd[1252]: cilium_net: Gained IPv6LL
May 15 23:58:07.347933 systemd-networkd[1252]: lxc_health: Link UP
May 15 23:58:07.357037 systemd-networkd[1252]: lxc_health: Gained carrier
May 15 23:58:07.604230 kubelet[2837]: E0515 23:58:07.604102 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:58:07.625818 kubelet[2837]: I0515 23:58:07.625752 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wtzt7" podStartSLOduration=10.498742338 podStartE2EDuration="18.625735426s" podCreationTimestamp="2025-05-15 23:57:49 +0000 UTC" firstStartedPulling="2025-05-15 23:57:49.695411526 +0000 UTC m=+5.659322814" lastFinishedPulling="2025-05-15 23:57:57.822404615 +0000 UTC m=+13.786315902" observedRunningTime="2025-05-15 23:58:04.622656988 +0000 UTC m=+20.586568276" watchObservedRunningTime="2025-05-15 23:58:07.625735426 +0000 UTC m=+23.589646713"
May 15 23:58:07.673016 systemd-networkd[1252]: lxcc9374641ebf3: Link UP
May 15 23:58:07.687884 kernel: eth0: renamed from tmp00921
May 15 23:58:07.698226 systemd-networkd[1252]: lxcc9374641ebf3: Gained carrier
May 15 23:58:07.853536 systemd-networkd[1252]: lxc952e7b8bbef2: Link UP
May 15 23:58:07.866888 kernel: eth0: renamed from tmp5c488
May 15 23:58:07.873474 systemd-networkd[1252]: lxc952e7b8bbef2: Gained carrier
May 15 23:58:08.298081 systemd-networkd[1252]: cilium_vxlan: Gained IPv6LL
May 15 23:58:08.604381 kubelet[2837]: E0515 23:58:08.604311 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:58:09.322239 systemd-networkd[1252]: lxc952e7b8bbef2: Gained IPv6LL
May 15 23:58:09.386165 systemd-networkd[1252]: lxc_health: Gained IPv6LL
May 15 23:58:09.606186 kubelet[2837]: E0515 23:58:09.606132 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:58:09.649294 systemd-networkd[1252]: lxcc9374641ebf3: Gained IPv6LL
May 15 23:58:11.825755 containerd[1586]: time="2025-05-15T23:58:11.825636500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 23:58:11.825755 containerd[1586]: time="2025-05-15T23:58:11.825717065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 23:58:11.825755 containerd[1586]: time="2025-05-15T23:58:11.825731292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:58:11.826447 containerd[1586]: time="2025-05-15T23:58:11.825834941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:58:11.839247 containerd[1586]: time="2025-05-15T23:58:11.839149709Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 23:58:11.839247 containerd[1586]: time="2025-05-15T23:58:11.839220875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 23:58:11.839247 containerd[1586]: time="2025-05-15T23:58:11.839235394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:58:11.840350 containerd[1586]: time="2025-05-15T23:58:11.840208932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:58:11.880834 systemd-resolved[1471]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 23:58:11.881659 systemd-resolved[1471]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 23:58:11.915671 containerd[1586]: time="2025-05-15T23:58:11.915608455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7jrlc,Uid:80115612-719f-483b-8bf5-50a100712878,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c488b1c6095a8e1bad72e57953a5863155fbcd107c22b615e97c4c1f64859eb\""
May 15 23:58:11.916535 kubelet[2837]: E0515 23:58:11.916500 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:58:11.917066 containerd[1586]: time="2025-05-15T23:58:11.916763241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-n99d9,Uid:3e75f841-c46f-4831-8f2c-d346d23f52ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"00921251ece1837314ecbba9b2a433db849d0c3f7fd10030d2f14602326252e1\""
May 15 23:58:11.918719 kubelet[2837]: E0515 23:58:11.918676 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:58:11.921108 containerd[1586]: time="2025-05-15T23:58:11.921069423Z" level=info msg="CreateContainer within sandbox \"00921251ece1837314ecbba9b2a433db849d0c3f7fd10030d2f14602326252e1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 23:58:11.936815 containerd[1586]: time="2025-05-15T23:58:11.936744751Z" level=info msg="CreateContainer within sandbox \"5c488b1c6095a8e1bad72e57953a5863155fbcd107c22b615e97c4c1f64859eb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 23:58:11.956579 containerd[1586]: time="2025-05-15T23:58:11.956524173Z" level=info msg="CreateContainer within sandbox \"00921251ece1837314ecbba9b2a433db849d0c3f7fd10030d2f14602326252e1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"048e6c0911b5bc911c7b7c2bc8f39b91e8e5cf9ef2e165bd1b020eeaf44e446d\""
May 15 23:58:11.960692 containerd[1586]: time="2025-05-15T23:58:11.960640611Z" level=info msg="StartContainer for \"048e6c0911b5bc911c7b7c2bc8f39b91e8e5cf9ef2e165bd1b020eeaf44e446d\""
May 15 23:58:11.968961 containerd[1586]: time="2025-05-15T23:58:11.968458201Z" level=info msg="CreateContainer within sandbox \"5c488b1c6095a8e1bad72e57953a5863155fbcd107c22b615e97c4c1f64859eb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"391db40a6a0b082885163d702a6a53316eb75c081888e63c35a9e8f7032c9e80\""
May 15 23:58:11.970040 containerd[1586]: time="2025-05-15T23:58:11.969585294Z" level=info msg="StartContainer for \"391db40a6a0b082885163d702a6a53316eb75c081888e63c35a9e8f7032c9e80\""
May 15 23:58:12.042570 containerd[1586]: time="2025-05-15T23:58:12.042494223Z" level=info msg="StartContainer for \"048e6c0911b5bc911c7b7c2bc8f39b91e8e5cf9ef2e165bd1b020eeaf44e446d\" returns successfully"
May 15 23:58:12.053241 containerd[1586]: time="2025-05-15T23:58:12.053188180Z" level=info msg="StartContainer for \"391db40a6a0b082885163d702a6a53316eb75c081888e63c35a9e8f7032c9e80\" returns successfully"
May 15 23:58:12.078402 systemd[1]: Started sshd@9-10.0.0.111:22-10.0.0.1:42740.service - OpenSSH per-connection server daemon (10.0.0.1:42740).
May 15 23:58:12.129932 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 42740 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k
May 15 23:58:12.132349 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:58:12.140110 systemd-logind[1570]: New session 10 of user core.
May 15 23:58:12.150397 systemd[1]: Started session-10.scope - Session 10 of User core.
May 15 23:58:12.428582 sshd[4212]: Connection closed by 10.0.0.1 port 42740
May 15 23:58:12.428921 sshd-session[4201]: pam_unix(sshd:session): session closed for user core
May 15 23:58:12.432760 systemd[1]: sshd@9-10.0.0.111:22-10.0.0.1:42740.service: Deactivated successfully.
May 15 23:58:12.435240 systemd-logind[1570]: Session 10 logged out. Waiting for processes to exit.
May 15 23:58:12.435336 systemd[1]: session-10.scope: Deactivated successfully.
May 15 23:58:12.436703 systemd-logind[1570]: Removed session 10.
May 15 23:58:12.613491 kubelet[2837]: E0515 23:58:12.613111 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:58:12.617403 kubelet[2837]: E0515 23:58:12.617375 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:58:12.625274 kubelet[2837]: I0515 23:58:12.624770 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-7jrlc" podStartSLOduration=23.624753859 podStartE2EDuration="23.624753859s" podCreationTimestamp="2025-05-15 23:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:58:12.624393158 +0000 UTC m=+28.588304465" watchObservedRunningTime="2025-05-15 23:58:12.624753859 +0000 UTC m=+28.588665156" May 15 23:58:12.636170 kubelet[2837]: I0515 23:58:12.636041 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-n99d9" podStartSLOduration=23.636014223 podStartE2EDuration="23.636014223s" podCreationTimestamp="2025-05-15 23:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:58:12.635347204 +0000 UTC m=+28.599258511" watchObservedRunningTime="2025-05-15 23:58:12.636014223 +0000 UTC m=+28.599925510" May 15 23:58:13.621967 kubelet[2837]: E0515 23:58:13.621112 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:58:13.621967 kubelet[2837]: E0515 23:58:13.621253 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:58:14.625523 kubelet[2837]: E0515 23:58:14.624480 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:58:14.625523 kubelet[2837]: E0515 23:58:14.624822 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:58:17.442270 systemd[1]: Started sshd@10-10.0.0.111:22-10.0.0.1:42826.service - OpenSSH per-connection server daemon (10.0.0.1:42826). May 15 23:58:17.561003 sshd[4242]: Accepted publickey for core from 10.0.0.1 port 42826 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:58:17.562654 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:58:17.598605 systemd-logind[1570]: New session 11 of user core. May 15 23:58:17.613251 systemd[1]: Started session-11.scope - Session 11 of User core. May 15 23:58:17.758997 sshd[4245]: Connection closed by 10.0.0.1 port 42826 May 15 23:58:17.759293 sshd-session[4242]: pam_unix(sshd:session): session closed for user core May 15 23:58:17.764838 systemd[1]: sshd@10-10.0.0.111:22-10.0.0.1:42826.service: Deactivated successfully. May 15 23:58:17.767722 systemd-logind[1570]: Session 11 logged out. Waiting for processes to exit. May 15 23:58:17.767806 systemd[1]: session-11.scope: Deactivated successfully. May 15 23:58:17.769300 systemd-logind[1570]: Removed session 11. May 15 23:58:22.774388 systemd[1]: Started sshd@11-10.0.0.111:22-10.0.0.1:44584.service - OpenSSH per-connection server daemon (10.0.0.1:44584). 
May 15 23:58:22.851572 sshd[4261]: Accepted publickey for core from 10.0.0.1 port 44584 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:58:22.854057 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:58:22.860327 systemd-logind[1570]: New session 12 of user core. May 15 23:58:22.870344 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 23:58:22.997943 sshd[4264]: Connection closed by 10.0.0.1 port 44584 May 15 23:58:22.998311 sshd-session[4261]: pam_unix(sshd:session): session closed for user core May 15 23:58:23.003316 systemd[1]: sshd@11-10.0.0.111:22-10.0.0.1:44584.service: Deactivated successfully. May 15 23:58:23.006210 systemd-logind[1570]: Session 12 logged out. Waiting for processes to exit. May 15 23:58:23.006279 systemd[1]: session-12.scope: Deactivated successfully. May 15 23:58:23.007595 systemd-logind[1570]: Removed session 12. May 15 23:58:28.017172 systemd[1]: Started sshd@12-10.0.0.111:22-10.0.0.1:42018.service - OpenSSH per-connection server daemon (10.0.0.1:42018). May 15 23:58:28.056030 sshd[4277]: Accepted publickey for core from 10.0.0.1 port 42018 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:58:28.081741 sshd-session[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:58:28.086343 systemd-logind[1570]: New session 13 of user core. May 15 23:58:28.100338 systemd[1]: Started session-13.scope - Session 13 of User core. May 15 23:58:28.237257 sshd[4280]: Connection closed by 10.0.0.1 port 42018 May 15 23:58:28.237595 sshd-session[4277]: pam_unix(sshd:session): session closed for user core May 15 23:58:28.241695 systemd[1]: sshd@12-10.0.0.111:22-10.0.0.1:42018.service: Deactivated successfully. May 15 23:58:28.244422 systemd-logind[1570]: Session 13 logged out. Waiting for processes to exit. May 15 23:58:28.244605 systemd[1]: session-13.scope: Deactivated successfully. 
May 15 23:58:28.246099 systemd-logind[1570]: Removed session 13. May 15 23:58:33.254225 systemd[1]: Started sshd@13-10.0.0.111:22-10.0.0.1:42032.service - OpenSSH per-connection server daemon (10.0.0.1:42032). May 15 23:58:33.294847 sshd[4294]: Accepted publickey for core from 10.0.0.1 port 42032 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:58:33.296474 sshd-session[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:58:33.300585 systemd-logind[1570]: New session 14 of user core. May 15 23:58:33.307115 systemd[1]: Started session-14.scope - Session 14 of User core. May 15 23:58:33.430907 sshd[4297]: Connection closed by 10.0.0.1 port 42032 May 15 23:58:33.431240 sshd-session[4294]: pam_unix(sshd:session): session closed for user core May 15 23:58:33.435027 systemd[1]: sshd@13-10.0.0.111:22-10.0.0.1:42032.service: Deactivated successfully. May 15 23:58:33.437402 systemd-logind[1570]: Session 14 logged out. Waiting for processes to exit. May 15 23:58:33.437505 systemd[1]: session-14.scope: Deactivated successfully. May 15 23:58:33.438532 systemd-logind[1570]: Removed session 14. May 15 23:58:38.443246 systemd[1]: Started sshd@14-10.0.0.111:22-10.0.0.1:42206.service - OpenSSH per-connection server daemon (10.0.0.1:42206). May 15 23:58:38.491210 sshd[4310]: Accepted publickey for core from 10.0.0.1 port 42206 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:58:38.493446 sshd-session[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:58:38.503278 systemd-logind[1570]: New session 15 of user core. May 15 23:58:38.516659 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 15 23:58:38.700497 sshd[4313]: Connection closed by 10.0.0.1 port 42206 May 15 23:58:38.700834 sshd-session[4310]: pam_unix(sshd:session): session closed for user core May 15 23:58:38.705315 systemd[1]: sshd@14-10.0.0.111:22-10.0.0.1:42206.service: Deactivated successfully. May 15 23:58:38.708310 systemd[1]: session-15.scope: Deactivated successfully. May 15 23:58:38.708360 systemd-logind[1570]: Session 15 logged out. Waiting for processes to exit. May 15 23:58:38.709989 systemd-logind[1570]: Removed session 15. May 15 23:58:43.728536 systemd[1]: Started sshd@15-10.0.0.111:22-10.0.0.1:42214.service - OpenSSH per-connection server daemon (10.0.0.1:42214). May 15 23:58:43.784429 sshd[4329]: Accepted publickey for core from 10.0.0.1 port 42214 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:58:43.787130 sshd-session[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:58:43.794960 systemd-logind[1570]: New session 16 of user core. May 15 23:58:43.805484 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 23:58:43.994465 sshd[4332]: Connection closed by 10.0.0.1 port 42214 May 15 23:58:43.995226 sshd-session[4329]: pam_unix(sshd:session): session closed for user core May 15 23:58:44.008195 systemd[1]: Started sshd@16-10.0.0.111:22-10.0.0.1:42222.service - OpenSSH per-connection server daemon (10.0.0.1:42222). May 15 23:58:44.008944 systemd[1]: sshd@15-10.0.0.111:22-10.0.0.1:42214.service: Deactivated successfully. May 15 23:58:44.015530 systemd-logind[1570]: Session 16 logged out. Waiting for processes to exit. May 15 23:58:44.015942 systemd[1]: session-16.scope: Deactivated successfully. May 15 23:58:44.018690 systemd-logind[1570]: Removed session 16. 
May 15 23:58:44.085175 sshd[4342]: Accepted publickey for core from 10.0.0.1 port 42222 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:58:44.087891 sshd-session[4342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:58:44.096692 systemd-logind[1570]: New session 17 of user core. May 15 23:58:44.109498 systemd[1]: Started session-17.scope - Session 17 of User core. May 15 23:58:44.334755 sshd[4348]: Connection closed by 10.0.0.1 port 42222 May 15 23:58:44.335050 sshd-session[4342]: pam_unix(sshd:session): session closed for user core May 15 23:58:44.342183 systemd[1]: Started sshd@17-10.0.0.111:22-10.0.0.1:42236.service - OpenSSH per-connection server daemon (10.0.0.1:42236). May 15 23:58:44.342824 systemd[1]: sshd@16-10.0.0.111:22-10.0.0.1:42222.service: Deactivated successfully. May 15 23:58:44.346281 systemd[1]: session-17.scope: Deactivated successfully. May 15 23:58:44.348199 systemd-logind[1570]: Session 17 logged out. Waiting for processes to exit. May 15 23:58:44.349349 systemd-logind[1570]: Removed session 17. May 15 23:58:44.391407 sshd[4357]: Accepted publickey for core from 10.0.0.1 port 42236 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:58:44.394155 sshd-session[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:58:44.400082 systemd-logind[1570]: New session 18 of user core. May 15 23:58:44.410578 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 23:58:44.587261 sshd[4363]: Connection closed by 10.0.0.1 port 42236 May 15 23:58:44.591186 sshd-session[4357]: pam_unix(sshd:session): session closed for user core May 15 23:58:44.599486 systemd[1]: sshd@17-10.0.0.111:22-10.0.0.1:42236.service: Deactivated successfully. May 15 23:58:44.602528 systemd-logind[1570]: Session 18 logged out. Waiting for processes to exit. May 15 23:58:44.602609 systemd[1]: session-18.scope: Deactivated successfully. 
May 15 23:58:44.604402 systemd-logind[1570]: Removed session 18. May 15 23:58:49.604322 systemd[1]: Started sshd@18-10.0.0.111:22-10.0.0.1:53018.service - OpenSSH per-connection server daemon (10.0.0.1:53018). May 15 23:58:49.701719 sshd[4376]: Accepted publickey for core from 10.0.0.1 port 53018 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:58:49.703935 sshd-session[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:58:49.718602 systemd-logind[1570]: New session 19 of user core. May 15 23:58:49.735233 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 23:58:50.018152 sshd[4379]: Connection closed by 10.0.0.1 port 53018 May 15 23:58:50.019436 sshd-session[4376]: pam_unix(sshd:session): session closed for user core May 15 23:58:50.026485 systemd[1]: sshd@18-10.0.0.111:22-10.0.0.1:53018.service: Deactivated successfully. May 15 23:58:50.027139 systemd-logind[1570]: Session 19 logged out. Waiting for processes to exit. May 15 23:58:50.031307 systemd[1]: session-19.scope: Deactivated successfully. May 15 23:58:50.032209 systemd-logind[1570]: Removed session 19. May 15 23:58:52.131378 kubelet[2837]: E0515 23:58:52.131316 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:58:55.038349 systemd[1]: Started sshd@19-10.0.0.111:22-10.0.0.1:53020.service - OpenSSH per-connection server daemon (10.0.0.1:53020). May 15 23:58:55.090529 sshd[4394]: Accepted publickey for core from 10.0.0.1 port 53020 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:58:55.093537 sshd-session[4394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:58:55.106694 systemd-logind[1570]: New session 20 of user core. May 15 23:58:55.118639 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 15 23:58:55.274892 sshd[4397]: Connection closed by 10.0.0.1 port 53020 May 15 23:58:55.275352 sshd-session[4394]: pam_unix(sshd:session): session closed for user core May 15 23:58:55.279420 systemd[1]: sshd@19-10.0.0.111:22-10.0.0.1:53020.service: Deactivated successfully. May 15 23:58:55.283764 systemd[1]: session-20.scope: Deactivated successfully. May 15 23:58:55.284737 systemd-logind[1570]: Session 20 logged out. Waiting for processes to exit. May 15 23:58:55.286139 systemd-logind[1570]: Removed session 20. May 15 23:59:00.300326 systemd[1]: Started sshd@20-10.0.0.111:22-10.0.0.1:49598.service - OpenSSH per-connection server daemon (10.0.0.1:49598). May 15 23:59:00.373358 sshd[4409]: Accepted publickey for core from 10.0.0.1 port 49598 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:59:00.379328 sshd-session[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:59:00.400866 systemd-logind[1570]: New session 21 of user core. May 15 23:59:00.430609 systemd[1]: Started session-21.scope - Session 21 of User core. May 15 23:59:00.595293 sshd[4412]: Connection closed by 10.0.0.1 port 49598 May 15 23:59:00.596176 sshd-session[4409]: pam_unix(sshd:session): session closed for user core May 15 23:59:00.605305 systemd[1]: Started sshd@21-10.0.0.111:22-10.0.0.1:49602.service - OpenSSH per-connection server daemon (10.0.0.1:49602). May 15 23:59:00.606080 systemd[1]: sshd@20-10.0.0.111:22-10.0.0.1:49598.service: Deactivated successfully. May 15 23:59:00.611681 systemd-logind[1570]: Session 21 logged out. Waiting for processes to exit. May 15 23:59:00.611797 systemd[1]: session-21.scope: Deactivated successfully. May 15 23:59:00.613483 systemd-logind[1570]: Removed session 21. 
May 15 23:59:00.655746 sshd[4422]: Accepted publickey for core from 10.0.0.1 port 49602 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:59:00.657791 sshd-session[4422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:59:00.662638 systemd-logind[1570]: New session 22 of user core. May 15 23:59:00.677398 systemd[1]: Started session-22.scope - Session 22 of User core. May 15 23:59:01.131327 kubelet[2837]: E0515 23:59:01.131112 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:01.272041 sshd[4428]: Connection closed by 10.0.0.1 port 49602 May 15 23:59:01.272659 sshd-session[4422]: pam_unix(sshd:session): session closed for user core May 15 23:59:01.286424 systemd[1]: Started sshd@22-10.0.0.111:22-10.0.0.1:49616.service - OpenSSH per-connection server daemon (10.0.0.1:49616). May 15 23:59:01.287953 systemd[1]: sshd@21-10.0.0.111:22-10.0.0.1:49602.service: Deactivated successfully. May 15 23:59:01.294362 systemd[1]: session-22.scope: Deactivated successfully. May 15 23:59:01.296462 systemd-logind[1570]: Session 22 logged out. Waiting for processes to exit. May 15 23:59:01.299721 systemd-logind[1570]: Removed session 22. May 15 23:59:01.345617 sshd[4435]: Accepted publickey for core from 10.0.0.1 port 49616 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:59:01.348016 sshd-session[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:59:01.357306 systemd-logind[1570]: New session 23 of user core. May 15 23:59:01.367510 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 15 23:59:03.097379 sshd[4441]: Connection closed by 10.0.0.1 port 49616 May 15 23:59:03.098671 sshd-session[4435]: pam_unix(sshd:session): session closed for user core May 15 23:59:03.112382 systemd[1]: Started sshd@23-10.0.0.111:22-10.0.0.1:49624.service - OpenSSH per-connection server daemon (10.0.0.1:49624). May 15 23:59:03.113057 systemd[1]: sshd@22-10.0.0.111:22-10.0.0.1:49616.service: Deactivated successfully. May 15 23:59:03.117223 systemd-logind[1570]: Session 23 logged out. Waiting for processes to exit. May 15 23:59:03.122475 systemd[1]: session-23.scope: Deactivated successfully. May 15 23:59:03.130552 systemd-logind[1570]: Removed session 23. May 15 23:59:03.182359 sshd[4456]: Accepted publickey for core from 10.0.0.1 port 49624 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:59:03.186494 sshd-session[4456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:59:03.194125 systemd-logind[1570]: New session 24 of user core. May 15 23:59:03.205746 systemd[1]: Started session-24.scope - Session 24 of User core. May 15 23:59:03.790273 sshd[4462]: Connection closed by 10.0.0.1 port 49624 May 15 23:59:03.790694 sshd-session[4456]: pam_unix(sshd:session): session closed for user core May 15 23:59:03.797202 systemd[1]: Started sshd@24-10.0.0.111:22-10.0.0.1:49636.service - OpenSSH per-connection server daemon (10.0.0.1:49636). May 15 23:59:03.797719 systemd[1]: sshd@23-10.0.0.111:22-10.0.0.1:49624.service: Deactivated successfully. May 15 23:59:03.801573 systemd-logind[1570]: Session 24 logged out. Waiting for processes to exit. May 15 23:59:03.802466 systemd[1]: session-24.scope: Deactivated successfully. May 15 23:59:03.803633 systemd-logind[1570]: Removed session 24. 
May 15 23:59:03.841303 sshd[4469]: Accepted publickey for core from 10.0.0.1 port 49636 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:59:03.843499 sshd-session[4469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:59:03.849081 systemd-logind[1570]: New session 25 of user core. May 15 23:59:03.861513 systemd[1]: Started session-25.scope - Session 25 of User core. May 15 23:59:04.096758 sshd[4475]: Connection closed by 10.0.0.1 port 49636 May 15 23:59:04.097240 sshd-session[4469]: pam_unix(sshd:session): session closed for user core May 15 23:59:04.102904 systemd[1]: sshd@24-10.0.0.111:22-10.0.0.1:49636.service: Deactivated successfully. May 15 23:59:04.106708 systemd[1]: session-25.scope: Deactivated successfully. May 15 23:59:04.107956 systemd-logind[1570]: Session 25 logged out. Waiting for processes to exit. May 15 23:59:04.109509 systemd-logind[1570]: Removed session 25. May 15 23:59:05.131341 kubelet[2837]: E0515 23:59:05.131216 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:07.130967 kubelet[2837]: E0515 23:59:07.130906 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:09.108334 systemd[1]: Started sshd@25-10.0.0.111:22-10.0.0.1:46240.service - OpenSSH per-connection server daemon (10.0.0.1:46240). May 15 23:59:09.151766 sshd[4487]: Accepted publickey for core from 10.0.0.1 port 46240 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:59:09.153446 sshd-session[4487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:59:09.157598 systemd-logind[1570]: New session 26 of user core. 
May 15 23:59:09.166312 systemd[1]: Started session-26.scope - Session 26 of User core. May 15 23:59:09.285340 sshd[4490]: Connection closed by 10.0.0.1 port 46240 May 15 23:59:09.285711 sshd-session[4487]: pam_unix(sshd:session): session closed for user core May 15 23:59:09.290235 systemd[1]: sshd@25-10.0.0.111:22-10.0.0.1:46240.service: Deactivated successfully. May 15 23:59:09.292559 systemd-logind[1570]: Session 26 logged out. Waiting for processes to exit. May 15 23:59:09.292608 systemd[1]: session-26.scope: Deactivated successfully. May 15 23:59:09.294444 systemd-logind[1570]: Removed session 26. May 15 23:59:12.130816 kubelet[2837]: E0515 23:59:12.130773 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:14.304248 systemd[1]: Started sshd@26-10.0.0.111:22-10.0.0.1:46246.service - OpenSSH per-connection server daemon (10.0.0.1:46246). May 15 23:59:14.346525 sshd[4505]: Accepted publickey for core from 10.0.0.1 port 46246 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:59:14.348771 sshd-session[4505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:59:14.353388 systemd-logind[1570]: New session 27 of user core. May 15 23:59:14.361307 systemd[1]: Started session-27.scope - Session 27 of User core. May 15 23:59:14.493460 sshd[4508]: Connection closed by 10.0.0.1 port 46246 May 15 23:59:14.493892 sshd-session[4505]: pam_unix(sshd:session): session closed for user core May 15 23:59:14.498568 systemd[1]: sshd@26-10.0.0.111:22-10.0.0.1:46246.service: Deactivated successfully. May 15 23:59:14.501541 systemd[1]: session-27.scope: Deactivated successfully. May 15 23:59:14.503101 systemd-logind[1570]: Session 27 logged out. Waiting for processes to exit. May 15 23:59:14.504908 systemd-logind[1570]: Removed session 27. 
May 15 23:59:19.510224 systemd[1]: Started sshd@27-10.0.0.111:22-10.0.0.1:33582.service - OpenSSH per-connection server daemon (10.0.0.1:33582). May 15 23:59:19.555539 sshd[4522]: Accepted publickey for core from 10.0.0.1 port 33582 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:59:19.557622 sshd-session[4522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:59:19.562884 systemd-logind[1570]: New session 28 of user core. May 15 23:59:19.576476 systemd[1]: Started session-28.scope - Session 28 of User core. May 15 23:59:19.713424 sshd[4525]: Connection closed by 10.0.0.1 port 33582 May 15 23:59:19.713872 sshd-session[4522]: pam_unix(sshd:session): session closed for user core May 15 23:59:19.719670 systemd[1]: sshd@27-10.0.0.111:22-10.0.0.1:33582.service: Deactivated successfully. May 15 23:59:19.727455 systemd[1]: session-28.scope: Deactivated successfully. May 15 23:59:19.728732 systemd-logind[1570]: Session 28 logged out. Waiting for processes to exit. May 15 23:59:19.730324 systemd-logind[1570]: Removed session 28. May 15 23:59:24.723231 systemd[1]: Started sshd@28-10.0.0.111:22-10.0.0.1:33584.service - OpenSSH per-connection server daemon (10.0.0.1:33584). May 15 23:59:24.771140 sshd[4540]: Accepted publickey for core from 10.0.0.1 port 33584 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:59:24.773198 sshd-session[4540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:59:24.779917 systemd-logind[1570]: New session 29 of user core. May 15 23:59:24.790296 systemd[1]: Started session-29.scope - Session 29 of User core. May 15 23:59:24.924756 sshd[4543]: Connection closed by 10.0.0.1 port 33584 May 15 23:59:24.925291 sshd-session[4540]: pam_unix(sshd:session): session closed for user core May 15 23:59:24.937315 systemd[1]: Started sshd@29-10.0.0.111:22-10.0.0.1:33588.service - OpenSSH per-connection server daemon (10.0.0.1:33588). 
May 15 23:59:24.938381 systemd[1]: sshd@28-10.0.0.111:22-10.0.0.1:33584.service: Deactivated successfully. May 15 23:59:24.942676 systemd[1]: session-29.scope: Deactivated successfully. May 15 23:59:24.944171 systemd-logind[1570]: Session 29 logged out. Waiting for processes to exit. May 15 23:59:24.945633 systemd-logind[1570]: Removed session 29. May 15 23:59:24.982094 sshd[4554]: Accepted publickey for core from 10.0.0.1 port 33588 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:59:24.984323 sshd-session[4554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:59:24.989925 systemd-logind[1570]: New session 30 of user core. May 15 23:59:24.996248 systemd[1]: Started session-30.scope - Session 30 of User core. May 15 23:59:27.212610 systemd[1]: run-containerd-runc-k8s.io-3e311907d0d07974a127a746ec63b9e536a787aaccc86bb3c1af619152ef89c9-runc.FPCLQs.mount: Deactivated successfully. May 15 23:59:27.249179 containerd[1586]: time="2025-05-15T23:59:27.249105162Z" level=info msg="StopContainer for \"0853cc1d3ce2ca02c35c7dc32a7eb6331e3b7ab00680d111147cf2ffa4455ed8\" with timeout 30 (s)" May 15 23:59:27.249893 containerd[1586]: time="2025-05-15T23:59:27.249552506Z" level=info msg="Stop container \"0853cc1d3ce2ca02c35c7dc32a7eb6331e3b7ab00680d111147cf2ffa4455ed8\" with signal terminated" May 15 23:59:27.268161 containerd[1586]: time="2025-05-15T23:59:27.268046649Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 23:59:27.275887 containerd[1586]: time="2025-05-15T23:59:27.274593106Z" level=info msg="StopContainer for \"3e311907d0d07974a127a746ec63b9e536a787aaccc86bb3c1af619152ef89c9\" with timeout 2 (s)" May 15 23:59:27.275887 containerd[1586]: time="2025-05-15T23:59:27.275187518Z" level=info msg="Stop 
container \"3e311907d0d07974a127a746ec63b9e536a787aaccc86bb3c1af619152ef89c9\" with signal terminated" May 15 23:59:27.286028 systemd-networkd[1252]: lxc_health: Link DOWN May 15 23:59:27.286608 systemd-networkd[1252]: lxc_health: Lost carrier May 15 23:59:27.297010 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0853cc1d3ce2ca02c35c7dc32a7eb6331e3b7ab00680d111147cf2ffa4455ed8-rootfs.mount: Deactivated successfully. May 15 23:59:27.350221 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e311907d0d07974a127a746ec63b9e536a787aaccc86bb3c1af619152ef89c9-rootfs.mount: Deactivated successfully. May 15 23:59:27.501093 containerd[1586]: time="2025-05-15T23:59:27.500833376Z" level=info msg="shim disconnected" id=3e311907d0d07974a127a746ec63b9e536a787aaccc86bb3c1af619152ef89c9 namespace=k8s.io May 15 23:59:27.501093 containerd[1586]: time="2025-05-15T23:59:27.500914710Z" level=warning msg="cleaning up after shim disconnected" id=3e311907d0d07974a127a746ec63b9e536a787aaccc86bb3c1af619152ef89c9 namespace=k8s.io May 15 23:59:27.501093 containerd[1586]: time="2025-05-15T23:59:27.500923156Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:59:27.501340 containerd[1586]: time="2025-05-15T23:59:27.501104548Z" level=info msg="shim disconnected" id=0853cc1d3ce2ca02c35c7dc32a7eb6331e3b7ab00680d111147cf2ffa4455ed8 namespace=k8s.io May 15 23:59:27.501340 containerd[1586]: time="2025-05-15T23:59:27.501167426Z" level=warning msg="cleaning up after shim disconnected" id=0853cc1d3ce2ca02c35c7dc32a7eb6331e3b7ab00680d111147cf2ffa4455ed8 namespace=k8s.io May 15 23:59:27.501340 containerd[1586]: time="2025-05-15T23:59:27.501176354Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:59:27.516043 containerd[1586]: time="2025-05-15T23:59:27.515969145Z" level=warning msg="cleanup warnings time=\"2025-05-15T23:59:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" 
runtime=io.containerd.runc.v2\n" namespace=k8s.io May 15 23:59:27.602469 containerd[1586]: time="2025-05-15T23:59:27.602404082Z" level=info msg="StopContainer for \"0853cc1d3ce2ca02c35c7dc32a7eb6331e3b7ab00680d111147cf2ffa4455ed8\" returns successfully" May 15 23:59:27.602910 containerd[1586]: time="2025-05-15T23:59:27.602562982Z" level=info msg="StopContainer for \"3e311907d0d07974a127a746ec63b9e536a787aaccc86bb3c1af619152ef89c9\" returns successfully" May 15 23:59:27.608078 containerd[1586]: time="2025-05-15T23:59:27.608004556Z" level=info msg="StopPodSandbox for \"60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f\"" May 15 23:59:27.609869 containerd[1586]: time="2025-05-15T23:59:27.609823136Z" level=info msg="StopPodSandbox for \"f1c62c579c567f46664e03a7ac29f8cdb9ed4a35be285c330fb54988ee5e729f\"" May 15 23:59:27.612659 containerd[1586]: time="2025-05-15T23:59:27.609881346Z" level=info msg="Container to stop \"0853cc1d3ce2ca02c35c7dc32a7eb6331e3b7ab00680d111147cf2ffa4455ed8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:59:27.614183 containerd[1586]: time="2025-05-15T23:59:27.608085919Z" level=info msg="Container to stop \"a0529a1575680e725dffe41d06b395f1a5ec56e8b0323d567219f0a1e0dad017\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:59:27.614183 containerd[1586]: time="2025-05-15T23:59:27.614170435Z" level=info msg="Container to stop \"3e311907d0d07974a127a746ec63b9e536a787aaccc86bb3c1af619152ef89c9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:59:27.614183 containerd[1586]: time="2025-05-15T23:59:27.614182248Z" level=info msg="Container to stop \"3d5144d35451b904a3281c4cf22f70bfc099f771d0211a273f912c1b95e45848\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:59:27.614314 containerd[1586]: time="2025-05-15T23:59:27.614191866Z" level=info msg="Container to stop 
\"80b04f6f55fd3e1f451ff521d46c99d881fa88d41ebe1ba0c2083651ad0704e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:59:27.614314 containerd[1586]: time="2025-05-15T23:59:27.614201935Z" level=info msg="Container to stop \"7d80b421712e26e2c292ce8c8cf559afcc16d286794442775a5e56a1dd0ced05\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:59:27.615657 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f1c62c579c567f46664e03a7ac29f8cdb9ed4a35be285c330fb54988ee5e729f-shm.mount: Deactivated successfully. May 15 23:59:27.676984 containerd[1586]: time="2025-05-15T23:59:27.676819471Z" level=info msg="shim disconnected" id=f1c62c579c567f46664e03a7ac29f8cdb9ed4a35be285c330fb54988ee5e729f namespace=k8s.io May 15 23:59:27.677481 containerd[1586]: time="2025-05-15T23:59:27.677428238Z" level=warning msg="cleaning up after shim disconnected" id=f1c62c579c567f46664e03a7ac29f8cdb9ed4a35be285c330fb54988ee5e729f namespace=k8s.io May 15 23:59:27.677481 containerd[1586]: time="2025-05-15T23:59:27.677454097Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:59:27.677907 containerd[1586]: time="2025-05-15T23:59:27.677095231Z" level=info msg="shim disconnected" id=60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f namespace=k8s.io May 15 23:59:27.677907 containerd[1586]: time="2025-05-15T23:59:27.677865363Z" level=warning msg="cleaning up after shim disconnected" id=60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f namespace=k8s.io May 15 23:59:27.677907 containerd[1586]: time="2025-05-15T23:59:27.677881123Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:59:27.695461 containerd[1586]: time="2025-05-15T23:59:27.695396700Z" level=warning msg="cleanup warnings time=\"2025-05-15T23:59:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" 
namespace=k8s.io May 15 23:59:27.696159 containerd[1586]: time="2025-05-15T23:59:27.696101659Z" level=warning msg="cleanup warnings time=\"2025-05-15T23:59:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 15 23:59:27.697294 containerd[1586]: time="2025-05-15T23:59:27.697249424Z" level=info msg="TearDown network for sandbox \"60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f\" successfully" May 15 23:59:27.697294 containerd[1586]: time="2025-05-15T23:59:27.697281003Z" level=info msg="StopPodSandbox for \"60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f\" returns successfully" May 15 23:59:27.698157 containerd[1586]: time="2025-05-15T23:59:27.698106491Z" level=info msg="TearDown network for sandbox \"f1c62c579c567f46664e03a7ac29f8cdb9ed4a35be285c330fb54988ee5e729f\" successfully" May 15 23:59:27.698157 containerd[1586]: time="2025-05-15T23:59:27.698133080Z" level=info msg="StopPodSandbox for \"f1c62c579c567f46664e03a7ac29f8cdb9ed4a35be285c330fb54988ee5e729f\" returns successfully" May 15 23:59:27.810628 kubelet[2837]: I0515 23:59:27.810480 2837 scope.go:117] "RemoveContainer" containerID="0853cc1d3ce2ca02c35c7dc32a7eb6331e3b7ab00680d111147cf2ffa4455ed8" May 15 23:59:27.818109 containerd[1586]: time="2025-05-15T23:59:27.818035493Z" level=info msg="RemoveContainer for \"0853cc1d3ce2ca02c35c7dc32a7eb6331e3b7ab00680d111147cf2ffa4455ed8\"" May 15 23:59:27.842059 kubelet[2837]: I0515 23:59:27.841987 2837 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-etc-cni-netd\") pod \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " May 15 23:59:27.842059 kubelet[2837]: I0515 23:59:27.842063 2837 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-lib-modules\") pod \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " May 15 23:59:27.842270 kubelet[2837]: I0515 23:59:27.842098 2837 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/982621c9-3a35-4097-ae32-aed0ffd937f2-cilium-config-path\") pod \"982621c9-3a35-4097-ae32-aed0ffd937f2\" (UID: \"982621c9-3a35-4097-ae32-aed0ffd937f2\") " May 15 23:59:27.842270 kubelet[2837]: I0515 23:59:27.842127 2837 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj2vh\" (UniqueName: \"kubernetes.io/projected/982621c9-3a35-4097-ae32-aed0ffd937f2-kube-api-access-rj2vh\") pod \"982621c9-3a35-4097-ae32-aed0ffd937f2\" (UID: \"982621c9-3a35-4097-ae32-aed0ffd937f2\") " May 15 23:59:27.842270 kubelet[2837]: I0515 23:59:27.842143 2837 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ddf24b30-b9c0-4ac7-beaa-0760584a6072" (UID: "ddf24b30-b9c0-4ac7-beaa-0760584a6072"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:59:27.842270 kubelet[2837]: I0515 23:59:27.842173 2837 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-hostproc" (OuterVolumeSpecName: "hostproc") pod "ddf24b30-b9c0-4ac7-beaa-0760584a6072" (UID: "ddf24b30-b9c0-4ac7-beaa-0760584a6072"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:59:27.842270 kubelet[2837]: I0515 23:59:27.842150 2837 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-hostproc\") pod \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " May 15 23:59:27.842270 kubelet[2837]: I0515 23:59:27.842211 2837 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-bpf-maps\") pod \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " May 15 23:59:27.842494 kubelet[2837]: I0515 23:59:27.842218 2837 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ddf24b30-b9c0-4ac7-beaa-0760584a6072" (UID: "ddf24b30-b9c0-4ac7-beaa-0760584a6072"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:59:27.842494 kubelet[2837]: I0515 23:59:27.842238 2837 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ddf24b30-b9c0-4ac7-beaa-0760584a6072-cilium-config-path\") pod \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " May 15 23:59:27.842494 kubelet[2837]: I0515 23:59:27.842323 2837 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-cilium-run\") pod \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " May 15 23:59:27.842494 kubelet[2837]: I0515 23:59:27.842350 2837 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-xtables-lock\") pod \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " May 15 23:59:27.842494 kubelet[2837]: I0515 23:59:27.842375 2837 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-cni-path\") pod \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " May 15 23:59:27.842494 kubelet[2837]: I0515 23:59:27.842393 2837 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-host-proc-sys-net\") pod \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " May 15 23:59:27.842730 kubelet[2837]: I0515 23:59:27.842423 2837 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztvch\" (UniqueName: 
\"kubernetes.io/projected/ddf24b30-b9c0-4ac7-beaa-0760584a6072-kube-api-access-ztvch\") pod \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " May 15 23:59:27.842730 kubelet[2837]: I0515 23:59:27.842448 2837 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ddf24b30-b9c0-4ac7-beaa-0760584a6072-clustermesh-secrets\") pod \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " May 15 23:59:27.842730 kubelet[2837]: I0515 23:59:27.842468 2837 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-cilium-cgroup\") pod \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " May 15 23:59:27.842730 kubelet[2837]: I0515 23:59:27.842488 2837 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-host-proc-sys-kernel\") pod \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " May 15 23:59:27.842730 kubelet[2837]: I0515 23:59:27.842516 2837 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ddf24b30-b9c0-4ac7-beaa-0760584a6072-hubble-tls\") pod \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\" (UID: \"ddf24b30-b9c0-4ac7-beaa-0760584a6072\") " May 15 23:59:27.842730 kubelet[2837]: I0515 23:59:27.842568 2837 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 15 23:59:27.843000 kubelet[2837]: I0515 23:59:27.842578 2837 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-lib-modules\") on node \"localhost\" DevicePath \"\"" May 15 23:59:27.843000 kubelet[2837]: I0515 23:59:27.842590 2837 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-hostproc\") on node \"localhost\" DevicePath \"\"" May 15 23:59:27.843091 kubelet[2837]: I0515 23:59:27.843048 2837 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ddf24b30-b9c0-4ac7-beaa-0760584a6072" (UID: "ddf24b30-b9c0-4ac7-beaa-0760584a6072"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:59:27.843126 kubelet[2837]: I0515 23:59:27.843090 2837 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ddf24b30-b9c0-4ac7-beaa-0760584a6072" (UID: "ddf24b30-b9c0-4ac7-beaa-0760584a6072"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:59:27.843126 kubelet[2837]: I0515 23:59:27.843109 2837 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ddf24b30-b9c0-4ac7-beaa-0760584a6072" (UID: "ddf24b30-b9c0-4ac7-beaa-0760584a6072"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:59:27.843207 kubelet[2837]: I0515 23:59:27.843127 2837 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-cni-path" (OuterVolumeSpecName: "cni-path") pod "ddf24b30-b9c0-4ac7-beaa-0760584a6072" (UID: "ddf24b30-b9c0-4ac7-beaa-0760584a6072"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:59:27.844387 kubelet[2837]: I0515 23:59:27.844014 2837 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ddf24b30-b9c0-4ac7-beaa-0760584a6072" (UID: "ddf24b30-b9c0-4ac7-beaa-0760584a6072"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:59:27.844387 kubelet[2837]: I0515 23:59:27.844076 2837 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ddf24b30-b9c0-4ac7-beaa-0760584a6072" (UID: "ddf24b30-b9c0-4ac7-beaa-0760584a6072"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:59:27.844677 kubelet[2837]: I0515 23:59:27.844628 2837 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ddf24b30-b9c0-4ac7-beaa-0760584a6072" (UID: "ddf24b30-b9c0-4ac7-beaa-0760584a6072"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:59:27.846001 kubelet[2837]: I0515 23:59:27.845978 2837 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/982621c9-3a35-4097-ae32-aed0ffd937f2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "982621c9-3a35-4097-ae32-aed0ffd937f2" (UID: "982621c9-3a35-4097-ae32-aed0ffd937f2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 23:59:27.846952 kubelet[2837]: I0515 23:59:27.846922 2837 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddf24b30-b9c0-4ac7-beaa-0760584a6072-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ddf24b30-b9c0-4ac7-beaa-0760584a6072" (UID: "ddf24b30-b9c0-4ac7-beaa-0760584a6072"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 23:59:27.943614 kubelet[2837]: I0515 23:59:27.943549 2837 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/982621c9-3a35-4097-ae32-aed0ffd937f2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 23:59:27.943614 kubelet[2837]: I0515 23:59:27.943586 2837 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 15 23:59:27.943614 kubelet[2837]: I0515 23:59:27.943595 2837 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ddf24b30-b9c0-4ac7-beaa-0760584a6072-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 23:59:27.943614 kubelet[2837]: I0515 23:59:27.943608 2837 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-cilium-run\") 
on node \"localhost\" DevicePath \"\"" May 15 23:59:27.943614 kubelet[2837]: I0515 23:59:27.943616 2837 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 15 23:59:27.943614 kubelet[2837]: I0515 23:59:27.943624 2837 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-cni-path\") on node \"localhost\" DevicePath \"\"" May 15 23:59:27.943614 kubelet[2837]: I0515 23:59:27.943631 2837 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 15 23:59:27.943614 kubelet[2837]: I0515 23:59:27.943639 2837 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 15 23:59:27.944148 kubelet[2837]: I0515 23:59:27.943646 2837 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ddf24b30-b9c0-4ac7-beaa-0760584a6072-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 15 23:59:27.952653 kubelet[2837]: I0515 23:59:27.952612 2837 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddf24b30-b9c0-4ac7-beaa-0760584a6072-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ddf24b30-b9c0-4ac7-beaa-0760584a6072" (UID: "ddf24b30-b9c0-4ac7-beaa-0760584a6072"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 23:59:27.952846 kubelet[2837]: I0515 23:59:27.952765 2837 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/982621c9-3a35-4097-ae32-aed0ffd937f2-kube-api-access-rj2vh" (OuterVolumeSpecName: "kube-api-access-rj2vh") pod "982621c9-3a35-4097-ae32-aed0ffd937f2" (UID: "982621c9-3a35-4097-ae32-aed0ffd937f2"). InnerVolumeSpecName "kube-api-access-rj2vh". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 23:59:27.952846 kubelet[2837]: I0515 23:59:27.952771 2837 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddf24b30-b9c0-4ac7-beaa-0760584a6072-kube-api-access-ztvch" (OuterVolumeSpecName: "kube-api-access-ztvch") pod "ddf24b30-b9c0-4ac7-beaa-0760584a6072" (UID: "ddf24b30-b9c0-4ac7-beaa-0760584a6072"). InnerVolumeSpecName "kube-api-access-ztvch". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 23:59:27.952979 kubelet[2837]: I0515 23:59:27.952938 2837 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf24b30-b9c0-4ac7-beaa-0760584a6072-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ddf24b30-b9c0-4ac7-beaa-0760584a6072" (UID: "ddf24b30-b9c0-4ac7-beaa-0760584a6072"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 23:59:28.044290 kubelet[2837]: I0515 23:59:28.044214 2837 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztvch\" (UniqueName: \"kubernetes.io/projected/ddf24b30-b9c0-4ac7-beaa-0760584a6072-kube-api-access-ztvch\") on node \"localhost\" DevicePath \"\"" May 15 23:59:28.044290 kubelet[2837]: I0515 23:59:28.044270 2837 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ddf24b30-b9c0-4ac7-beaa-0760584a6072-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 15 23:59:28.044290 kubelet[2837]: I0515 23:59:28.044283 2837 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ddf24b30-b9c0-4ac7-beaa-0760584a6072-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 15 23:59:28.044290 kubelet[2837]: I0515 23:59:28.044296 2837 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj2vh\" (UniqueName: \"kubernetes.io/projected/982621c9-3a35-4097-ae32-aed0ffd937f2-kube-api-access-rj2vh\") on node \"localhost\" DevicePath \"\"" May 15 23:59:28.155878 containerd[1586]: time="2025-05-15T23:59:28.154633628Z" level=info msg="RemoveContainer for \"0853cc1d3ce2ca02c35c7dc32a7eb6331e3b7ab00680d111147cf2ffa4455ed8\" returns successfully" May 15 23:59:28.156080 kubelet[2837]: I0515 23:59:28.155072 2837 scope.go:117] "RemoveContainer" containerID="0853cc1d3ce2ca02c35c7dc32a7eb6331e3b7ab00680d111147cf2ffa4455ed8" May 15 23:59:28.156132 containerd[1586]: time="2025-05-15T23:59:28.156059567Z" level=error msg="ContainerStatus for \"0853cc1d3ce2ca02c35c7dc32a7eb6331e3b7ab00680d111147cf2ffa4455ed8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0853cc1d3ce2ca02c35c7dc32a7eb6331e3b7ab00680d111147cf2ffa4455ed8\": not found" May 15 23:59:28.163240 kubelet[2837]: E0515 23:59:28.163210 2837 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0853cc1d3ce2ca02c35c7dc32a7eb6331e3b7ab00680d111147cf2ffa4455ed8\": not found" containerID="0853cc1d3ce2ca02c35c7dc32a7eb6331e3b7ab00680d111147cf2ffa4455ed8" May 15 23:59:28.163312 kubelet[2837]: I0515 23:59:28.163246 2837 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0853cc1d3ce2ca02c35c7dc32a7eb6331e3b7ab00680d111147cf2ffa4455ed8"} err="failed to get container status \"0853cc1d3ce2ca02c35c7dc32a7eb6331e3b7ab00680d111147cf2ffa4455ed8\": rpc error: code = NotFound desc = an error occurred when try to find container \"0853cc1d3ce2ca02c35c7dc32a7eb6331e3b7ab00680d111147cf2ffa4455ed8\": not found" May 15 23:59:28.163344 kubelet[2837]: I0515 23:59:28.163319 2837 scope.go:117] "RemoveContainer" containerID="3e311907d0d07974a127a746ec63b9e536a787aaccc86bb3c1af619152ef89c9" May 15 23:59:28.164375 containerd[1586]: time="2025-05-15T23:59:28.164344192Z" level=info msg="RemoveContainer for \"3e311907d0d07974a127a746ec63b9e536a787aaccc86bb3c1af619152ef89c9\"" May 15 23:59:28.205953 containerd[1586]: time="2025-05-15T23:59:28.205893406Z" level=info msg="RemoveContainer for \"3e311907d0d07974a127a746ec63b9e536a787aaccc86bb3c1af619152ef89c9\" returns successfully" May 15 23:59:28.206492 kubelet[2837]: I0515 23:59:28.206461 2837 scope.go:117] "RemoveContainer" containerID="a0529a1575680e725dffe41d06b395f1a5ec56e8b0323d567219f0a1e0dad017" May 15 23:59:28.208332 containerd[1586]: time="2025-05-15T23:59:28.208267103Z" level=info msg="RemoveContainer for \"a0529a1575680e725dffe41d06b395f1a5ec56e8b0323d567219f0a1e0dad017\"" May 15 23:59:28.209485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1c62c579c567f46664e03a7ac29f8cdb9ed4a35be285c330fb54988ee5e729f-rootfs.mount: Deactivated successfully. 
May 15 23:59:28.209729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f-rootfs.mount: Deactivated successfully. May 15 23:59:28.209916 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f-shm.mount: Deactivated successfully. May 15 23:59:28.210094 systemd[1]: var-lib-kubelet-pods-982621c9\x2d3a35\x2d4097\x2dae32\x2daed0ffd937f2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drj2vh.mount: Deactivated successfully. May 15 23:59:28.210241 systemd[1]: var-lib-kubelet-pods-ddf24b30\x2db9c0\x2d4ac7\x2dbeaa\x2d0760584a6072-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 15 23:59:28.210417 systemd[1]: var-lib-kubelet-pods-ddf24b30\x2db9c0\x2d4ac7\x2dbeaa\x2d0760584a6072-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dztvch.mount: Deactivated successfully. May 15 23:59:28.210578 systemd[1]: var-lib-kubelet-pods-ddf24b30\x2db9c0\x2d4ac7\x2dbeaa\x2d0760584a6072-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 15 23:59:28.223285 containerd[1586]: time="2025-05-15T23:59:28.223189427Z" level=info msg="RemoveContainer for \"a0529a1575680e725dffe41d06b395f1a5ec56e8b0323d567219f0a1e0dad017\" returns successfully" May 15 23:59:28.223615 kubelet[2837]: I0515 23:59:28.223568 2837 scope.go:117] "RemoveContainer" containerID="7d80b421712e26e2c292ce8c8cf559afcc16d286794442775a5e56a1dd0ced05" May 15 23:59:28.226791 containerd[1586]: time="2025-05-15T23:59:28.226731877Z" level=info msg="RemoveContainer for \"7d80b421712e26e2c292ce8c8cf559afcc16d286794442775a5e56a1dd0ced05\"" May 15 23:59:28.234119 containerd[1586]: time="2025-05-15T23:59:28.234038138Z" level=info msg="RemoveContainer for \"7d80b421712e26e2c292ce8c8cf559afcc16d286794442775a5e56a1dd0ced05\" returns successfully" May 15 23:59:28.235184 kubelet[2837]: I0515 23:59:28.234404 2837 scope.go:117] "RemoveContainer" containerID="80b04f6f55fd3e1f451ff521d46c99d881fa88d41ebe1ba0c2083651ad0704e6" May 15 23:59:28.236766 containerd[1586]: time="2025-05-15T23:59:28.236719084Z" level=info msg="RemoveContainer for \"80b04f6f55fd3e1f451ff521d46c99d881fa88d41ebe1ba0c2083651ad0704e6\"" May 15 23:59:28.247572 containerd[1586]: time="2025-05-15T23:59:28.247510867Z" level=info msg="RemoveContainer for \"80b04f6f55fd3e1f451ff521d46c99d881fa88d41ebe1ba0c2083651ad0704e6\" returns successfully" May 15 23:59:28.247891 kubelet[2837]: I0515 23:59:28.247834 2837 scope.go:117] "RemoveContainer" containerID="3d5144d35451b904a3281c4cf22f70bfc099f771d0211a273f912c1b95e45848" May 15 23:59:28.249660 containerd[1586]: time="2025-05-15T23:59:28.249605308Z" level=info msg="RemoveContainer for \"3d5144d35451b904a3281c4cf22f70bfc099f771d0211a273f912c1b95e45848\"" May 15 23:59:28.257517 containerd[1586]: time="2025-05-15T23:59:28.257444773Z" level=info msg="RemoveContainer for \"3d5144d35451b904a3281c4cf22f70bfc099f771d0211a273f912c1b95e45848\" returns successfully" May 15 23:59:28.257816 kubelet[2837]: I0515 23:59:28.257778 2837 scope.go:117] 
"RemoveContainer" containerID="3e311907d0d07974a127a746ec63b9e536a787aaccc86bb3c1af619152ef89c9" May 15 23:59:28.258280 containerd[1586]: time="2025-05-15T23:59:28.258211720Z" level=error msg="ContainerStatus for \"3e311907d0d07974a127a746ec63b9e536a787aaccc86bb3c1af619152ef89c9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e311907d0d07974a127a746ec63b9e536a787aaccc86bb3c1af619152ef89c9\": not found" May 15 23:59:28.258449 kubelet[2837]: E0515 23:59:28.258367 2837 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e311907d0d07974a127a746ec63b9e536a787aaccc86bb3c1af619152ef89c9\": not found" containerID="3e311907d0d07974a127a746ec63b9e536a787aaccc86bb3c1af619152ef89c9" May 15 23:59:28.258449 kubelet[2837]: I0515 23:59:28.258404 2837 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e311907d0d07974a127a746ec63b9e536a787aaccc86bb3c1af619152ef89c9"} err="failed to get container status \"3e311907d0d07974a127a746ec63b9e536a787aaccc86bb3c1af619152ef89c9\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e311907d0d07974a127a746ec63b9e536a787aaccc86bb3c1af619152ef89c9\": not found" May 15 23:59:28.258449 kubelet[2837]: I0515 23:59:28.258436 2837 scope.go:117] "RemoveContainer" containerID="a0529a1575680e725dffe41d06b395f1a5ec56e8b0323d567219f0a1e0dad017" May 15 23:59:28.258699 containerd[1586]: time="2025-05-15T23:59:28.258637643Z" level=error msg="ContainerStatus for \"a0529a1575680e725dffe41d06b395f1a5ec56e8b0323d567219f0a1e0dad017\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a0529a1575680e725dffe41d06b395f1a5ec56e8b0323d567219f0a1e0dad017\": not found" May 15 23:59:28.258811 kubelet[2837]: E0515 23:59:28.258781 2837 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"a0529a1575680e725dffe41d06b395f1a5ec56e8b0323d567219f0a1e0dad017\": not found" containerID="a0529a1575680e725dffe41d06b395f1a5ec56e8b0323d567219f0a1e0dad017" May 15 23:59:28.258811 kubelet[2837]: I0515 23:59:28.258804 2837 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a0529a1575680e725dffe41d06b395f1a5ec56e8b0323d567219f0a1e0dad017"} err="failed to get container status \"a0529a1575680e725dffe41d06b395f1a5ec56e8b0323d567219f0a1e0dad017\": rpc error: code = NotFound desc = an error occurred when try to find container \"a0529a1575680e725dffe41d06b395f1a5ec56e8b0323d567219f0a1e0dad017\": not found" May 15 23:59:28.258919 kubelet[2837]: I0515 23:59:28.258818 2837 scope.go:117] "RemoveContainer" containerID="7d80b421712e26e2c292ce8c8cf559afcc16d286794442775a5e56a1dd0ced05" May 15 23:59:28.259138 containerd[1586]: time="2025-05-15T23:59:28.259084215Z" level=error msg="ContainerStatus for \"7d80b421712e26e2c292ce8c8cf559afcc16d286794442775a5e56a1dd0ced05\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7d80b421712e26e2c292ce8c8cf559afcc16d286794442775a5e56a1dd0ced05\": not found" May 15 23:59:28.259282 kubelet[2837]: E0515 23:59:28.259249 2837 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7d80b421712e26e2c292ce8c8cf559afcc16d286794442775a5e56a1dd0ced05\": not found" containerID="7d80b421712e26e2c292ce8c8cf559afcc16d286794442775a5e56a1dd0ced05" May 15 23:59:28.259337 kubelet[2837]: I0515 23:59:28.259281 2837 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7d80b421712e26e2c292ce8c8cf559afcc16d286794442775a5e56a1dd0ced05"} err="failed to get container status \"7d80b421712e26e2c292ce8c8cf559afcc16d286794442775a5e56a1dd0ced05\": rpc error: code = NotFound desc 
= an error occurred when try to find container \"7d80b421712e26e2c292ce8c8cf559afcc16d286794442775a5e56a1dd0ced05\": not found" May 15 23:59:28.259337 kubelet[2837]: I0515 23:59:28.259303 2837 scope.go:117] "RemoveContainer" containerID="80b04f6f55fd3e1f451ff521d46c99d881fa88d41ebe1ba0c2083651ad0704e6" May 15 23:59:28.259681 containerd[1586]: time="2025-05-15T23:59:28.259615478Z" level=error msg="ContainerStatus for \"80b04f6f55fd3e1f451ff521d46c99d881fa88d41ebe1ba0c2083651ad0704e6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"80b04f6f55fd3e1f451ff521d46c99d881fa88d41ebe1ba0c2083651ad0704e6\": not found" May 15 23:59:28.259891 kubelet[2837]: E0515 23:59:28.259841 2837 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"80b04f6f55fd3e1f451ff521d46c99d881fa88d41ebe1ba0c2083651ad0704e6\": not found" containerID="80b04f6f55fd3e1f451ff521d46c99d881fa88d41ebe1ba0c2083651ad0704e6" May 15 23:59:28.259947 kubelet[2837]: I0515 23:59:28.259890 2837 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"80b04f6f55fd3e1f451ff521d46c99d881fa88d41ebe1ba0c2083651ad0704e6"} err="failed to get container status \"80b04f6f55fd3e1f451ff521d46c99d881fa88d41ebe1ba0c2083651ad0704e6\": rpc error: code = NotFound desc = an error occurred when try to find container \"80b04f6f55fd3e1f451ff521d46c99d881fa88d41ebe1ba0c2083651ad0704e6\": not found" May 15 23:59:28.259947 kubelet[2837]: I0515 23:59:28.259909 2837 scope.go:117] "RemoveContainer" containerID="3d5144d35451b904a3281c4cf22f70bfc099f771d0211a273f912c1b95e45848" May 15 23:59:28.260157 containerd[1586]: time="2025-05-15T23:59:28.260124337Z" level=error msg="ContainerStatus for \"3d5144d35451b904a3281c4cf22f70bfc099f771d0211a273f912c1b95e45848\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"3d5144d35451b904a3281c4cf22f70bfc099f771d0211a273f912c1b95e45848\": not found" May 15 23:59:28.260448 kubelet[2837]: E0515 23:59:28.260409 2837 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3d5144d35451b904a3281c4cf22f70bfc099f771d0211a273f912c1b95e45848\": not found" containerID="3d5144d35451b904a3281c4cf22f70bfc099f771d0211a273f912c1b95e45848" May 15 23:59:28.260506 kubelet[2837]: I0515 23:59:28.260460 2837 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3d5144d35451b904a3281c4cf22f70bfc099f771d0211a273f912c1b95e45848"} err="failed to get container status \"3d5144d35451b904a3281c4cf22f70bfc099f771d0211a273f912c1b95e45848\": rpc error: code = NotFound desc = an error occurred when try to find container \"3d5144d35451b904a3281c4cf22f70bfc099f771d0211a273f912c1b95e45848\": not found" May 15 23:59:29.016229 sshd[4560]: Connection closed by 10.0.0.1 port 33588 May 15 23:59:29.016911 sshd-session[4554]: pam_unix(sshd:session): session closed for user core May 15 23:59:29.023336 systemd[1]: Started sshd@30-10.0.0.111:22-10.0.0.1:41378.service - OpenSSH per-connection server daemon (10.0.0.1:41378). May 15 23:59:29.024388 systemd[1]: sshd@29-10.0.0.111:22-10.0.0.1:33588.service: Deactivated successfully. May 15 23:59:29.026951 systemd[1]: session-30.scope: Deactivated successfully. May 15 23:59:29.027948 systemd-logind[1570]: Session 30 logged out. Waiting for processes to exit. May 15 23:59:29.029791 systemd-logind[1570]: Removed session 30. May 15 23:59:29.082889 sshd[4722]: Accepted publickey for core from 10.0.0.1 port 41378 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:59:29.085168 sshd-session[4722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:59:29.090622 systemd-logind[1570]: New session 31 of user core. 
May 15 23:59:29.099872 systemd[1]: Started session-31.scope - Session 31 of User core. May 15 23:59:29.227448 kubelet[2837]: E0515 23:59:29.227406 2837 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 23:59:29.631216 sshd[4728]: Connection closed by 10.0.0.1 port 41378 May 15 23:59:29.631612 sshd-session[4722]: pam_unix(sshd:session): session closed for user core May 15 23:59:29.642587 systemd[1]: Started sshd@31-10.0.0.111:22-10.0.0.1:41394.service - OpenSSH per-connection server daemon (10.0.0.1:41394). May 15 23:59:29.643283 systemd[1]: sshd@30-10.0.0.111:22-10.0.0.1:41378.service: Deactivated successfully. May 15 23:59:29.650616 kubelet[2837]: E0515 23:59:29.648634 2837 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ddf24b30-b9c0-4ac7-beaa-0760584a6072" containerName="mount-bpf-fs" May 15 23:59:29.650616 kubelet[2837]: E0515 23:59:29.648666 2837 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="982621c9-3a35-4097-ae32-aed0ffd937f2" containerName="cilium-operator" May 15 23:59:29.650616 kubelet[2837]: E0515 23:59:29.648675 2837 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ddf24b30-b9c0-4ac7-beaa-0760584a6072" containerName="clean-cilium-state" May 15 23:59:29.650616 kubelet[2837]: E0515 23:59:29.648687 2837 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ddf24b30-b9c0-4ac7-beaa-0760584a6072" containerName="apply-sysctl-overwrites" May 15 23:59:29.650616 kubelet[2837]: E0515 23:59:29.648700 2837 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ddf24b30-b9c0-4ac7-beaa-0760584a6072" containerName="mount-cgroup" May 15 23:59:29.650616 kubelet[2837]: E0515 23:59:29.648736 2837 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ddf24b30-b9c0-4ac7-beaa-0760584a6072" containerName="cilium-agent" May 15 23:59:29.650616 
kubelet[2837]: I0515 23:59:29.648767 2837 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddf24b30-b9c0-4ac7-beaa-0760584a6072" containerName="cilium-agent" May 15 23:59:29.650616 kubelet[2837]: I0515 23:59:29.648777 2837 memory_manager.go:354] "RemoveStaleState removing state" podUID="982621c9-3a35-4097-ae32-aed0ffd937f2" containerName="cilium-operator" May 15 23:59:29.654107 systemd[1]: session-31.scope: Deactivated successfully. May 15 23:59:29.657737 kubelet[2837]: W0515 23:59:29.654402 2837 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 15 23:59:29.657737 kubelet[2837]: E0515 23:59:29.654483 2837 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 15 23:59:29.667455 systemd-logind[1570]: Session 31 logged out. Waiting for processes to exit. May 15 23:59:29.673526 systemd-logind[1570]: Removed session 31. May 15 23:59:29.699900 sshd[4736]: Accepted publickey for core from 10.0.0.1 port 41394 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:59:29.702280 sshd-session[4736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:59:29.710706 systemd-logind[1570]: New session 32 of user core. May 15 23:59:29.720526 systemd[1]: Started session-32.scope - Session 32 of User core. 
May 15 23:59:29.755097 kubelet[2837]: I0515 23:59:29.755007 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1606fb8a-1ff2-40cd-b7fd-4465710630b5-bpf-maps\") pod \"cilium-9pmtp\" (UID: \"1606fb8a-1ff2-40cd-b7fd-4465710630b5\") " pod="kube-system/cilium-9pmtp" May 15 23:59:29.755097 kubelet[2837]: I0515 23:59:29.755078 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1606fb8a-1ff2-40cd-b7fd-4465710630b5-xtables-lock\") pod \"cilium-9pmtp\" (UID: \"1606fb8a-1ff2-40cd-b7fd-4465710630b5\") " pod="kube-system/cilium-9pmtp" May 15 23:59:29.755332 kubelet[2837]: I0515 23:59:29.755117 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1606fb8a-1ff2-40cd-b7fd-4465710630b5-etc-cni-netd\") pod \"cilium-9pmtp\" (UID: \"1606fb8a-1ff2-40cd-b7fd-4465710630b5\") " pod="kube-system/cilium-9pmtp" May 15 23:59:29.755332 kubelet[2837]: I0515 23:59:29.755143 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1606fb8a-1ff2-40cd-b7fd-4465710630b5-lib-modules\") pod \"cilium-9pmtp\" (UID: \"1606fb8a-1ff2-40cd-b7fd-4465710630b5\") " pod="kube-system/cilium-9pmtp" May 15 23:59:29.755332 kubelet[2837]: I0515 23:59:29.755165 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1606fb8a-1ff2-40cd-b7fd-4465710630b5-host-proc-sys-net\") pod \"cilium-9pmtp\" (UID: \"1606fb8a-1ff2-40cd-b7fd-4465710630b5\") " pod="kube-system/cilium-9pmtp" May 15 23:59:29.755332 kubelet[2837]: I0515 23:59:29.755185 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1606fb8a-1ff2-40cd-b7fd-4465710630b5-hubble-tls\") pod \"cilium-9pmtp\" (UID: \"1606fb8a-1ff2-40cd-b7fd-4465710630b5\") " pod="kube-system/cilium-9pmtp" May 15 23:59:29.755332 kubelet[2837]: I0515 23:59:29.755207 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1606fb8a-1ff2-40cd-b7fd-4465710630b5-cilium-run\") pod \"cilium-9pmtp\" (UID: \"1606fb8a-1ff2-40cd-b7fd-4465710630b5\") " pod="kube-system/cilium-9pmtp" May 15 23:59:29.755332 kubelet[2837]: I0515 23:59:29.755226 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1606fb8a-1ff2-40cd-b7fd-4465710630b5-host-proc-sys-kernel\") pod \"cilium-9pmtp\" (UID: \"1606fb8a-1ff2-40cd-b7fd-4465710630b5\") " pod="kube-system/cilium-9pmtp" May 15 23:59:29.755531 kubelet[2837]: I0515 23:59:29.755249 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1606fb8a-1ff2-40cd-b7fd-4465710630b5-cilium-cgroup\") pod \"cilium-9pmtp\" (UID: \"1606fb8a-1ff2-40cd-b7fd-4465710630b5\") " pod="kube-system/cilium-9pmtp" May 15 23:59:29.755531 kubelet[2837]: I0515 23:59:29.755269 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1606fb8a-1ff2-40cd-b7fd-4465710630b5-cilium-ipsec-secrets\") pod \"cilium-9pmtp\" (UID: \"1606fb8a-1ff2-40cd-b7fd-4465710630b5\") " pod="kube-system/cilium-9pmtp" May 15 23:59:29.755531 kubelet[2837]: I0515 23:59:29.755292 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mffzw\" (UniqueName: 
\"kubernetes.io/projected/1606fb8a-1ff2-40cd-b7fd-4465710630b5-kube-api-access-mffzw\") pod \"cilium-9pmtp\" (UID: \"1606fb8a-1ff2-40cd-b7fd-4465710630b5\") " pod="kube-system/cilium-9pmtp" May 15 23:59:29.755531 kubelet[2837]: I0515 23:59:29.755312 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1606fb8a-1ff2-40cd-b7fd-4465710630b5-hostproc\") pod \"cilium-9pmtp\" (UID: \"1606fb8a-1ff2-40cd-b7fd-4465710630b5\") " pod="kube-system/cilium-9pmtp" May 15 23:59:29.755531 kubelet[2837]: I0515 23:59:29.755332 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1606fb8a-1ff2-40cd-b7fd-4465710630b5-cni-path\") pod \"cilium-9pmtp\" (UID: \"1606fb8a-1ff2-40cd-b7fd-4465710630b5\") " pod="kube-system/cilium-9pmtp" May 15 23:59:29.755531 kubelet[2837]: I0515 23:59:29.755372 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1606fb8a-1ff2-40cd-b7fd-4465710630b5-clustermesh-secrets\") pod \"cilium-9pmtp\" (UID: \"1606fb8a-1ff2-40cd-b7fd-4465710630b5\") " pod="kube-system/cilium-9pmtp" May 15 23:59:29.755670 kubelet[2837]: I0515 23:59:29.755516 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1606fb8a-1ff2-40cd-b7fd-4465710630b5-cilium-config-path\") pod \"cilium-9pmtp\" (UID: \"1606fb8a-1ff2-40cd-b7fd-4465710630b5\") " pod="kube-system/cilium-9pmtp" May 15 23:59:29.778317 sshd[4742]: Connection closed by 10.0.0.1 port 41394 May 15 23:59:29.778195 sshd-session[4736]: pam_unix(sshd:session): session closed for user core May 15 23:59:29.789424 systemd[1]: Started sshd@32-10.0.0.111:22-10.0.0.1:41410.service - OpenSSH per-connection server daemon (10.0.0.1:41410). 
May 15 23:59:29.790214 systemd[1]: sshd@31-10.0.0.111:22-10.0.0.1:41394.service: Deactivated successfully. May 15 23:59:29.792581 systemd[1]: session-32.scope: Deactivated successfully. May 15 23:59:29.794958 systemd-logind[1570]: Session 32 logged out. Waiting for processes to exit. May 15 23:59:29.797439 systemd-logind[1570]: Removed session 32. May 15 23:59:29.836681 sshd[4745]: Accepted publickey for core from 10.0.0.1 port 41410 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:59:29.839231 sshd-session[4745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:59:29.847580 systemd-logind[1570]: New session 33 of user core. May 15 23:59:29.860539 systemd[1]: Started session-33.scope - Session 33 of User core. May 15 23:59:30.134894 kubelet[2837]: I0515 23:59:30.134773 2837 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="982621c9-3a35-4097-ae32-aed0ffd937f2" path="/var/lib/kubelet/pods/982621c9-3a35-4097-ae32-aed0ffd937f2/volumes" May 15 23:59:30.135913 kubelet[2837]: I0515 23:59:30.135413 2837 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddf24b30-b9c0-4ac7-beaa-0760584a6072" path="/var/lib/kubelet/pods/ddf24b30-b9c0-4ac7-beaa-0760584a6072/volumes" May 15 23:59:30.861306 kubelet[2837]: E0515 23:59:30.861221 2837 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition May 15 23:59:30.862066 kubelet[2837]: E0515 23:59:30.861368 2837 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1606fb8a-1ff2-40cd-b7fd-4465710630b5-clustermesh-secrets podName:1606fb8a-1ff2-40cd-b7fd-4465710630b5 nodeName:}" failed. No retries permitted until 2025-05-15 23:59:31.36133852 +0000 UTC m=+107.325249807 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/1606fb8a-1ff2-40cd-b7fd-4465710630b5-clustermesh-secrets") pod "cilium-9pmtp" (UID: "1606fb8a-1ff2-40cd-b7fd-4465710630b5") : failed to sync secret cache: timed out waiting for the condition May 15 23:59:31.466422 kubelet[2837]: E0515 23:59:31.466381 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:31.467327 containerd[1586]: time="2025-05-15T23:59:31.467036379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9pmtp,Uid:1606fb8a-1ff2-40cd-b7fd-4465710630b5,Namespace:kube-system,Attempt:0,}" May 15 23:59:31.508242 containerd[1586]: time="2025-05-15T23:59:31.507841347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:59:31.508242 containerd[1586]: time="2025-05-15T23:59:31.507940895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:59:31.508242 containerd[1586]: time="2025-05-15T23:59:31.507959089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:59:31.514291 containerd[1586]: time="2025-05-15T23:59:31.512489463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:59:31.579676 containerd[1586]: time="2025-05-15T23:59:31.579496872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9pmtp,Uid:1606fb8a-1ff2-40cd-b7fd-4465710630b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"affce511af1082b6b56e599740647d5bed95d694dd68c8ab7629c0b5db1f04bb\"" May 15 23:59:31.580621 kubelet[2837]: E0515 23:59:31.580572 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:31.583003 containerd[1586]: time="2025-05-15T23:59:31.582944252Z" level=info msg="CreateContainer within sandbox \"affce511af1082b6b56e599740647d5bed95d694dd68c8ab7629c0b5db1f04bb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 23:59:31.617173 containerd[1586]: time="2025-05-15T23:59:31.616747317Z" level=info msg="CreateContainer within sandbox \"affce511af1082b6b56e599740647d5bed95d694dd68c8ab7629c0b5db1f04bb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"46a1769f90bde24adc8c636f99071f2710ed453960ea2843b145df6da319dac1\"" May 15 23:59:31.618625 containerd[1586]: time="2025-05-15T23:59:31.617549370Z" level=info msg="StartContainer for \"46a1769f90bde24adc8c636f99071f2710ed453960ea2843b145df6da319dac1\"" May 15 23:59:31.698128 containerd[1586]: time="2025-05-15T23:59:31.697399527Z" level=info msg="StartContainer for \"46a1769f90bde24adc8c636f99071f2710ed453960ea2843b145df6da319dac1\" returns successfully" May 15 23:59:31.759142 containerd[1586]: time="2025-05-15T23:59:31.758732536Z" level=info msg="shim disconnected" id=46a1769f90bde24adc8c636f99071f2710ed453960ea2843b145df6da319dac1 namespace=k8s.io May 15 23:59:31.759142 containerd[1586]: time="2025-05-15T23:59:31.758794052Z" level=warning msg="cleaning up after shim disconnected" id=46a1769f90bde24adc8c636f99071f2710ed453960ea2843b145df6da319dac1 
namespace=k8s.io May 15 23:59:31.759142 containerd[1586]: time="2025-05-15T23:59:31.758802728Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:59:31.840373 kubelet[2837]: E0515 23:59:31.840319 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:31.842893 containerd[1586]: time="2025-05-15T23:59:31.842722540Z" level=info msg="CreateContainer within sandbox \"affce511af1082b6b56e599740647d5bed95d694dd68c8ab7629c0b5db1f04bb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 23:59:31.864217 containerd[1586]: time="2025-05-15T23:59:31.864153354Z" level=info msg="CreateContainer within sandbox \"affce511af1082b6b56e599740647d5bed95d694dd68c8ab7629c0b5db1f04bb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"aaa73db57edf9a7386d4b15c7c68350e767b8d85ad832b8ff815bc982425d985\"" May 15 23:59:31.866514 containerd[1586]: time="2025-05-15T23:59:31.866475643Z" level=info msg="StartContainer for \"aaa73db57edf9a7386d4b15c7c68350e767b8d85ad832b8ff815bc982425d985\"" May 15 23:59:31.946002 containerd[1586]: time="2025-05-15T23:59:31.945947046Z" level=info msg="StartContainer for \"aaa73db57edf9a7386d4b15c7c68350e767b8d85ad832b8ff815bc982425d985\" returns successfully" May 15 23:59:31.985579 containerd[1586]: time="2025-05-15T23:59:31.985501977Z" level=info msg="shim disconnected" id=aaa73db57edf9a7386d4b15c7c68350e767b8d85ad832b8ff815bc982425d985 namespace=k8s.io May 15 23:59:31.985579 containerd[1586]: time="2025-05-15T23:59:31.985571488Z" level=warning msg="cleaning up after shim disconnected" id=aaa73db57edf9a7386d4b15c7c68350e767b8d85ad832b8ff815bc982425d985 namespace=k8s.io May 15 23:59:31.985579 containerd[1586]: time="2025-05-15T23:59:31.985586296Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:59:32.375886 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount576517750.mount: Deactivated successfully. May 15 23:59:32.843844 kubelet[2837]: E0515 23:59:32.843803 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:32.845513 containerd[1586]: time="2025-05-15T23:59:32.845470039Z" level=info msg="CreateContainer within sandbox \"affce511af1082b6b56e599740647d5bed95d694dd68c8ab7629c0b5db1f04bb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 23:59:32.863938 containerd[1586]: time="2025-05-15T23:59:32.863835521Z" level=info msg="CreateContainer within sandbox \"affce511af1082b6b56e599740647d5bed95d694dd68c8ab7629c0b5db1f04bb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cadf99c6e6b12fc0326cc5a6773b17a9e66645b34adef493e2a93527820e0e94\"" May 15 23:59:32.864597 containerd[1586]: time="2025-05-15T23:59:32.864539549Z" level=info msg="StartContainer for \"cadf99c6e6b12fc0326cc5a6773b17a9e66645b34adef493e2a93527820e0e94\"" May 15 23:59:32.940503 containerd[1586]: time="2025-05-15T23:59:32.940444841Z" level=info msg="StartContainer for \"cadf99c6e6b12fc0326cc5a6773b17a9e66645b34adef493e2a93527820e0e94\" returns successfully" May 15 23:59:32.974369 containerd[1586]: time="2025-05-15T23:59:32.974163162Z" level=info msg="shim disconnected" id=cadf99c6e6b12fc0326cc5a6773b17a9e66645b34adef493e2a93527820e0e94 namespace=k8s.io May 15 23:59:32.974369 containerd[1586]: time="2025-05-15T23:59:32.974236720Z" level=warning msg="cleaning up after shim disconnected" id=cadf99c6e6b12fc0326cc5a6773b17a9e66645b34adef493e2a93527820e0e94 namespace=k8s.io May 15 23:59:32.974369 containerd[1586]: time="2025-05-15T23:59:32.974247582Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:59:33.131236 kubelet[2837]: E0515 23:59:33.130751 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="network 
is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-7jrlc" podUID="80115612-719f-483b-8bf5-50a100712878" May 15 23:59:33.375865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cadf99c6e6b12fc0326cc5a6773b17a9e66645b34adef493e2a93527820e0e94-rootfs.mount: Deactivated successfully. May 15 23:59:33.848445 kubelet[2837]: E0515 23:59:33.848405 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:33.850629 containerd[1586]: time="2025-05-15T23:59:33.850569899Z" level=info msg="CreateContainer within sandbox \"affce511af1082b6b56e599740647d5bed95d694dd68c8ab7629c0b5db1f04bb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 23:59:33.873773 containerd[1586]: time="2025-05-15T23:59:33.873687208Z" level=info msg="CreateContainer within sandbox \"affce511af1082b6b56e599740647d5bed95d694dd68c8ab7629c0b5db1f04bb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9c9ba64d6425b4e1e67ad2a30f3e85796ced7a0e59be95b4b442b51eb95c76b2\"" May 15 23:59:33.874624 containerd[1586]: time="2025-05-15T23:59:33.874497727Z" level=info msg="StartContainer for \"9c9ba64d6425b4e1e67ad2a30f3e85796ced7a0e59be95b4b442b51eb95c76b2\"" May 15 23:59:34.043424 containerd[1586]: time="2025-05-15T23:59:34.043346512Z" level=info msg="StartContainer for \"9c9ba64d6425b4e1e67ad2a30f3e85796ced7a0e59be95b4b442b51eb95c76b2\" returns successfully" May 15 23:59:34.229914 containerd[1586]: time="2025-05-15T23:59:34.229314523Z" level=info msg="shim disconnected" id=9c9ba64d6425b4e1e67ad2a30f3e85796ced7a0e59be95b4b442b51eb95c76b2 namespace=k8s.io May 15 23:59:34.229914 containerd[1586]: time="2025-05-15T23:59:34.229634247Z" level=warning msg="cleaning up after 
shim disconnected" id=9c9ba64d6425b4e1e67ad2a30f3e85796ced7a0e59be95b4b442b51eb95c76b2 namespace=k8s.io May 15 23:59:34.229914 containerd[1586]: time="2025-05-15T23:59:34.229669563Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:59:34.230260 kubelet[2837]: E0515 23:59:34.229916 2837 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 23:59:34.377597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c9ba64d6425b4e1e67ad2a30f3e85796ced7a0e59be95b4b442b51eb95c76b2-rootfs.mount: Deactivated successfully. May 15 23:59:34.852638 kubelet[2837]: E0515 23:59:34.852603 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:34.854710 containerd[1586]: time="2025-05-15T23:59:34.854604396Z" level=info msg="CreateContainer within sandbox \"affce511af1082b6b56e599740647d5bed95d694dd68c8ab7629c0b5db1f04bb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 23:59:35.021743 containerd[1586]: time="2025-05-15T23:59:35.021676116Z" level=info msg="CreateContainer within sandbox \"affce511af1082b6b56e599740647d5bed95d694dd68c8ab7629c0b5db1f04bb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0f0ce89f4dd49d013f355a36b401b7706a2ba80ce44e1e764a3ee66447ea00cc\"" May 15 23:59:35.022572 containerd[1586]: time="2025-05-15T23:59:35.022401724Z" level=info msg="StartContainer for \"0f0ce89f4dd49d013f355a36b401b7706a2ba80ce44e1e764a3ee66447ea00cc\"" May 15 23:59:35.105826 containerd[1586]: time="2025-05-15T23:59:35.105556757Z" level=info msg="StartContainer for \"0f0ce89f4dd49d013f355a36b401b7706a2ba80ce44e1e764a3ee66447ea00cc\" returns successfully" May 15 23:59:35.130796 kubelet[2837]: E0515 23:59:35.130709 2837 pod_workers.go:1301] "Error 
syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-7jrlc" podUID="80115612-719f-483b-8bf5-50a100712878" May 15 23:59:35.131345 kubelet[2837]: E0515 23:59:35.131296 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-n99d9" podUID="3e75f841-c46f-4831-8f2c-d346d23f52ee" May 15 23:59:35.584931 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 15 23:59:35.862875 kubelet[2837]: E0515 23:59:35.862738 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:36.525827 kubelet[2837]: I0515 23:59:36.525762 2837 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T23:59:36Z","lastTransitionTime":"2025-05-15T23:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 15 23:59:37.131382 kubelet[2837]: E0515 23:59:37.131294 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-7jrlc" podUID="80115612-719f-483b-8bf5-50a100712878" May 15 23:59:37.131382 kubelet[2837]: E0515 23:59:37.131421 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-n99d9" podUID="3e75f841-c46f-4831-8f2c-d346d23f52ee" May 15 23:59:37.468311 kubelet[2837]: E0515 23:59:37.468141 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:39.123214 systemd-networkd[1252]: lxc_health: Link UP May 15 23:59:39.132699 kubelet[2837]: E0515 23:59:39.131345 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-7jrlc" podUID="80115612-719f-483b-8bf5-50a100712878" May 15 23:59:39.132699 kubelet[2837]: E0515 23:59:39.132007 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-n99d9" podUID="3e75f841-c46f-4831-8f2c-d346d23f52ee" May 15 23:59:39.135100 systemd-networkd[1252]: lxc_health: Gained carrier May 15 23:59:39.470198 kubelet[2837]: E0515 23:59:39.470034 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:39.694992 kubelet[2837]: I0515 23:59:39.693547 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9pmtp" podStartSLOduration=10.693525685000001 podStartE2EDuration="10.693525685s" podCreationTimestamp="2025-05-15 23:59:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-05-15 23:59:36.148293508 +0000 UTC m=+112.112204805" watchObservedRunningTime="2025-05-15 23:59:39.693525685 +0000 UTC m=+115.657436972" May 15 23:59:39.870101 kubelet[2837]: E0515 23:59:39.870063 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:40.871911 kubelet[2837]: E0515 23:59:40.871876 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:40.909007 systemd-networkd[1252]: lxc_health: Gained IPv6LL May 15 23:59:41.131617 kubelet[2837]: E0515 23:59:41.131089 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:41.131617 kubelet[2837]: E0515 23:59:41.131359 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:44.124485 containerd[1586]: time="2025-05-15T23:59:44.124318034Z" level=info msg="StopPodSandbox for \"f1c62c579c567f46664e03a7ac29f8cdb9ed4a35be285c330fb54988ee5e729f\"" May 15 23:59:44.124485 containerd[1586]: time="2025-05-15T23:59:44.124417331Z" level=info msg="TearDown network for sandbox \"f1c62c579c567f46664e03a7ac29f8cdb9ed4a35be285c330fb54988ee5e729f\" successfully" May 15 23:59:44.124485 containerd[1586]: time="2025-05-15T23:59:44.124427180Z" level=info msg="StopPodSandbox for \"f1c62c579c567f46664e03a7ac29f8cdb9ed4a35be285c330fb54988ee5e729f\" returns successfully" May 15 23:59:44.125141 containerd[1586]: time="2025-05-15T23:59:44.124828335Z" level=info msg="RemovePodSandbox for \"f1c62c579c567f46664e03a7ac29f8cdb9ed4a35be285c330fb54988ee5e729f\"" May 15 23:59:44.125141 
containerd[1586]: time="2025-05-15T23:59:44.124900181Z" level=info msg="Forcibly stopping sandbox \"f1c62c579c567f46664e03a7ac29f8cdb9ed4a35be285c330fb54988ee5e729f\"" May 15 23:59:44.125141 containerd[1586]: time="2025-05-15T23:59:44.124988437Z" level=info msg="TearDown network for sandbox \"f1c62c579c567f46664e03a7ac29f8cdb9ed4a35be285c330fb54988ee5e729f\" successfully" May 15 23:59:44.525593 containerd[1586]: time="2025-05-15T23:59:44.525355728Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f1c62c579c567f46664e03a7ac29f8cdb9ed4a35be285c330fb54988ee5e729f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 23:59:44.525593 containerd[1586]: time="2025-05-15T23:59:44.525454474Z" level=info msg="RemovePodSandbox \"f1c62c579c567f46664e03a7ac29f8cdb9ed4a35be285c330fb54988ee5e729f\" returns successfully" May 15 23:59:44.526117 containerd[1586]: time="2025-05-15T23:59:44.526082988Z" level=info msg="StopPodSandbox for \"60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f\"" May 15 23:59:44.526204 containerd[1586]: time="2025-05-15T23:59:44.526185071Z" level=info msg="TearDown network for sandbox \"60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f\" successfully" May 15 23:59:44.526325 containerd[1586]: time="2025-05-15T23:59:44.526220007Z" level=info msg="StopPodSandbox for \"60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f\" returns successfully" May 15 23:59:44.526673 containerd[1586]: time="2025-05-15T23:59:44.526610042Z" level=info msg="RemovePodSandbox for \"60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f\"" May 15 23:59:44.526673 containerd[1586]: time="2025-05-15T23:59:44.526655879Z" level=info msg="Forcibly stopping sandbox \"60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f\"" May 15 23:59:44.526940 containerd[1586]: time="2025-05-15T23:59:44.526889218Z" level=info msg="TearDown network 
for sandbox \"60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f\" successfully" May 15 23:59:44.631458 containerd[1586]: time="2025-05-15T23:59:44.631362368Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 23:59:44.631458 containerd[1586]: time="2025-05-15T23:59:44.631456887Z" level=info msg="RemovePodSandbox \"60ad61f86e8a5907a68defcddbae626253a19b0551d234cb2959e0c80dafa54f\" returns successfully" May 15 23:59:44.985360 sshd[4754]: Connection closed by 10.0.0.1 port 41410 May 15 23:59:44.985691 sshd-session[4745]: pam_unix(sshd:session): session closed for user core May 15 23:59:44.990380 systemd[1]: sshd@32-10.0.0.111:22-10.0.0.1:41410.service: Deactivated successfully. May 15 23:59:44.994649 systemd-logind[1570]: Session 33 logged out. Waiting for processes to exit. May 15 23:59:44.995085 systemd[1]: session-33.scope: Deactivated successfully. May 15 23:59:44.997346 systemd-logind[1570]: Removed session 33.