Jul 14 21:20:17.198999 kernel: Linux version 6.6.97-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Mon Jul 14 19:42:33 -00 2025
Jul 14 21:20:17.199026 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=742d9dd12838875ad52f45b8bdfc5d537979ff4a28fba2a9d17a1c5d96555ab8
Jul 14 21:20:17.199041 kernel: BIOS-provided physical RAM map:
Jul 14 21:20:17.199050 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 14 21:20:17.199058 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 14 21:20:17.199067 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 14 21:20:17.199077 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 14 21:20:17.199087 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 14 21:20:17.199095 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jul 14 21:20:17.199104 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jul 14 21:20:17.199113 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jul 14 21:20:17.199124 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jul 14 21:20:17.199150 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jul 14 21:20:17.199166 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jul 14 21:20:17.199192 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jul 14 21:20:17.199200 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 14 21:20:17.199211 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Jul 14 21:20:17.199218 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Jul 14 21:20:17.199225 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Jul 14 21:20:17.199243 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Jul 14 21:20:17.199250 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jul 14 21:20:17.199257 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 14 21:20:17.199265 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 14 21:20:17.199272 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 14 21:20:17.199280 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jul 14 21:20:17.199287 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 14 21:20:17.199294 kernel: NX (Execute Disable) protection: active
Jul 14 21:20:17.199308 kernel: APIC: Static calls initialized
Jul 14 21:20:17.199315 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Jul 14 21:20:17.199323 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Jul 14 21:20:17.199330 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Jul 14 21:20:17.199337 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Jul 14 21:20:17.199344 kernel: extended physical RAM map:
Jul 14 21:20:17.199351 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 14 21:20:17.199359 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 14 21:20:17.199366 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 14 21:20:17.199373 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 14 21:20:17.199380 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 14 21:20:17.199388 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jul 14 21:20:17.199398 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jul 14 21:20:17.199410 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Jul 14 21:20:17.199417 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Jul 14 21:20:17.199424 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Jul 14 21:20:17.199433 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Jul 14 21:20:17.199442 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Jul 14 21:20:17.199459 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jul 14 21:20:17.199469 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jul 14 21:20:17.199479 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jul 14 21:20:17.199488 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jul 14 21:20:17.199498 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 14 21:20:17.199507 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Jul 14 21:20:17.199517 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Jul 14 21:20:17.199526 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Jul 14 21:20:17.199534 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Jul 14 21:20:17.199546 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jul 14 21:20:17.199555 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 14 21:20:17.199563 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 14 21:20:17.199586 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 14 21:20:17.199598 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jul 14 21:20:17.199608 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 14 21:20:17.199618 kernel: efi: EFI v2.7 by EDK II
Jul 14 21:20:17.199628 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Jul 14 21:20:17.199637 kernel: random: crng init done
Jul 14 21:20:17.199647 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jul 14 21:20:17.199657 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jul 14 21:20:17.199669 kernel: secureboot: Secure boot disabled
Jul 14 21:20:17.199684 kernel: SMBIOS 2.8 present.
Jul 14 21:20:17.199694 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jul 14 21:20:17.199704 kernel: Hypervisor detected: KVM
Jul 14 21:20:17.199714 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 14 21:20:17.199724 kernel: kvm-clock: using sched offset of 4386891122 cycles
Jul 14 21:20:17.199734 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 14 21:20:17.199744 kernel: tsc: Detected 2794.748 MHz processor
Jul 14 21:20:17.199754 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 14 21:20:17.199764 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 14 21:20:17.199772 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jul 14 21:20:17.199785 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 14 21:20:17.199794 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 14 21:20:17.199804 kernel: Using GB pages for direct mapping
Jul 14 21:20:17.199816 kernel: ACPI: Early table checksum verification disabled
Jul 14 21:20:17.199827 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jul 14 21:20:17.199838 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jul 14 21:20:17.199849 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:20:17.199861 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:20:17.199872 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jul 14 21:20:17.199886 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:20:17.199897 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:20:17.199908 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:20:17.199920 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:20:17.199931 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 14 21:20:17.199942 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jul 14 21:20:17.199955 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jul 14 21:20:17.199967 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jul 14 21:20:17.199978 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jul 14 21:20:17.199992 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jul 14 21:20:17.200001 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jul 14 21:20:17.200010 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jul 14 21:20:17.200020 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jul 14 21:20:17.200030 kernel: No NUMA configuration found
Jul 14 21:20:17.200040 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jul 14 21:20:17.200050 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Jul 14 21:20:17.200060 kernel: Zone ranges:
Jul 14 21:20:17.200070 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 14 21:20:17.200083 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jul 14 21:20:17.200093 kernel: Normal empty
Jul 14 21:20:17.200107 kernel: Movable zone start for each node
Jul 14 21:20:17.200117 kernel: Early memory node ranges
Jul 14 21:20:17.200127 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 14 21:20:17.200319 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jul 14 21:20:17.200332 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jul 14 21:20:17.200342 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jul 14 21:20:17.200353 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jul 14 21:20:17.200390 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jul 14 21:20:17.200409 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Jul 14 21:20:17.200419 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Jul 14 21:20:17.200429 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jul 14 21:20:17.200439 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 14 21:20:17.200472 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 14 21:20:17.200499 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jul 14 21:20:17.200512 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 14 21:20:17.200522 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jul 14 21:20:17.200533 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jul 14 21:20:17.200543 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jul 14 21:20:17.200557 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jul 14 21:20:17.200584 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jul 14 21:20:17.200594 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 14 21:20:17.200604 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 14 21:20:17.200614 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 14 21:20:17.200624 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 14 21:20:17.200638 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 14 21:20:17.200648 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 14 21:20:17.200658 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 14 21:20:17.200668 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 14 21:20:17.200678 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 14 21:20:17.200688 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 14 21:20:17.200697 kernel: TSC deadline timer available
Jul 14 21:20:17.200707 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 14 21:20:17.200717 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 14 21:20:17.200731 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 14 21:20:17.200740 kernel: kvm-guest: setup PV sched yield
Jul 14 21:20:17.200751 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jul 14 21:20:17.200760 kernel: Booting paravirtualized kernel on KVM
Jul 14 21:20:17.200771 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 14 21:20:17.200781 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 14 21:20:17.200791 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Jul 14 21:20:17.200801 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Jul 14 21:20:17.200810 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 14 21:20:17.200823 kernel: kvm-guest: PV spinlocks enabled
Jul 14 21:20:17.200833 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 14 21:20:17.200845 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=742d9dd12838875ad52f45b8bdfc5d537979ff4a28fba2a9d17a1c5d96555ab8
Jul 14 21:20:17.200855 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 14 21:20:17.200865 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 14 21:20:17.200879 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 14 21:20:17.200889 kernel: Fallback order for Node 0: 0
Jul 14 21:20:17.200898 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Jul 14 21:20:17.200911 kernel: Policy zone: DMA32
Jul 14 21:20:17.200921 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 14 21:20:17.200932 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43492K init, 1584K bss, 177824K reserved, 0K cma-reserved)
Jul 14 21:20:17.200942 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 14 21:20:17.200952 kernel: ftrace: allocating 37944 entries in 149 pages
Jul 14 21:20:17.200962 kernel: ftrace: allocated 149 pages with 4 groups
Jul 14 21:20:17.200973 kernel: Dynamic Preempt: voluntary
Jul 14 21:20:17.200983 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 14 21:20:17.200994 kernel: rcu: RCU event tracing is enabled.
Jul 14 21:20:17.201008 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 14 21:20:17.201018 kernel: Trampoline variant of Tasks RCU enabled.
Jul 14 21:20:17.201029 kernel: Rude variant of Tasks RCU enabled.
Jul 14 21:20:17.201039 kernel: Tracing variant of Tasks RCU enabled.
Jul 14 21:20:17.201049 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 14 21:20:17.201060 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 14 21:20:17.201069 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 14 21:20:17.201078 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 14 21:20:17.201087 kernel: Console: colour dummy device 80x25
Jul 14 21:20:17.201100 kernel: printk: console [ttyS0] enabled
Jul 14 21:20:17.201109 kernel: ACPI: Core revision 20230628
Jul 14 21:20:17.201119 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 14 21:20:17.201129 kernel: APIC: Switch to symmetric I/O mode setup
Jul 14 21:20:17.201138 kernel: x2apic enabled
Jul 14 21:20:17.201148 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 14 21:20:17.201161 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 14 21:20:17.201172 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 14 21:20:17.201182 kernel: kvm-guest: setup PV IPIs
Jul 14 21:20:17.201196 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 14 21:20:17.201207 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 14 21:20:17.201217 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 14 21:20:17.201227 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 14 21:20:17.201252 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 14 21:20:17.201263 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 14 21:20:17.201274 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 14 21:20:17.201284 kernel: Spectre V2 : Mitigation: Retpolines
Jul 14 21:20:17.201294 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 14 21:20:17.201307 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 14 21:20:17.201317 kernel: RETBleed: Mitigation: untrained return thunk
Jul 14 21:20:17.201327 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 14 21:20:17.201337 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 14 21:20:17.201346 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 14 21:20:17.201357 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 14 21:20:17.201366 kernel: x86/bugs: return thunk changed
Jul 14 21:20:17.201378 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 14 21:20:17.201388 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 14 21:20:17.201402 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 14 21:20:17.201412 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 14 21:20:17.201422 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 14 21:20:17.201432 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 14 21:20:17.201443 kernel: Freeing SMP alternatives memory: 32K
Jul 14 21:20:17.201453 kernel: pid_max: default: 32768 minimum: 301
Jul 14 21:20:17.201463 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 14 21:20:17.201473 kernel: landlock: Up and running.
Jul 14 21:20:17.201486 kernel: SELinux: Initializing.
Jul 14 21:20:17.201496 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 21:20:17.201507 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 21:20:17.201517 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 14 21:20:17.201527 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 21:20:17.201538 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 21:20:17.201548 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 21:20:17.201559 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 14 21:20:17.201584 kernel: ... version:                0
Jul 14 21:20:17.201597 kernel: ... bit width:              48
Jul 14 21:20:17.201607 kernel: ... generic registers:      6
Jul 14 21:20:17.201616 kernel: ... value mask:             0000ffffffffffff
Jul 14 21:20:17.201625 kernel: ... max period:             00007fffffffffff
Jul 14 21:20:17.201634 kernel: ... fixed-purpose events:   0
Jul 14 21:20:17.201644 kernel: ... event mask:             000000000000003f
Jul 14 21:20:17.201653 kernel: signal: max sigframe size: 1776
Jul 14 21:20:17.201663 kernel: rcu: Hierarchical SRCU implementation.
Jul 14 21:20:17.201672 kernel: rcu: Max phase no-delay instances is 400.
Jul 14 21:20:17.201685 kernel: smp: Bringing up secondary CPUs ...
Jul 14 21:20:17.201695 kernel: smpboot: x86: Booting SMP configuration:
Jul 14 21:20:17.201704 kernel: .... node #0, CPUs: #1 #2 #3
Jul 14 21:20:17.201715 kernel: smp: Brought up 1 node, 4 CPUs
Jul 14 21:20:17.201726 kernel: smpboot: Max logical packages: 1
Jul 14 21:20:17.201738 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 14 21:20:17.201751 kernel: devtmpfs: initialized
Jul 14 21:20:17.201764 kernel: x86/mm: Memory block size: 128MB
Jul 14 21:20:17.201778 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jul 14 21:20:17.201791 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jul 14 21:20:17.201808 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jul 14 21:20:17.201821 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jul 14 21:20:17.201834 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Jul 14 21:20:17.201845 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jul 14 21:20:17.201855 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 14 21:20:17.201865 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 14 21:20:17.201875 kernel: pinctrl core: initialized pinctrl subsystem
Jul 14 21:20:17.201886 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 14 21:20:17.201899 kernel: audit: initializing netlink subsys (disabled)
Jul 14 21:20:17.201908 kernel: audit: type=2000 audit(1752528014.546:1): state=initialized audit_enabled=0 res=1
Jul 14 21:20:17.201916 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 14 21:20:17.201923 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 14 21:20:17.201931 kernel: cpuidle: using governor menu
Jul 14 21:20:17.201939 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 14 21:20:17.201948 kernel: dca service started, version 1.12.1
Jul 14 21:20:17.201956 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jul 14 21:20:17.201964 kernel: PCI: Using configuration type 1 for base access
Jul 14 21:20:17.201975 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 14 21:20:17.201983 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 14 21:20:17.201991 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 14 21:20:17.201999 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 14 21:20:17.202007 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 14 21:20:17.202015 kernel: ACPI: Added _OSI(Module Device)
Jul 14 21:20:17.202023 kernel: ACPI: Added _OSI(Processor Device)
Jul 14 21:20:17.202031 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 14 21:20:17.202039 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 14 21:20:17.202049 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 14 21:20:17.202057 kernel: ACPI: Interpreter enabled
Jul 14 21:20:17.202065 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 14 21:20:17.202075 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 14 21:20:17.202086 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 14 21:20:17.202096 kernel: PCI: Using E820 reservations for host bridge windows
Jul 14 21:20:17.202107 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 14 21:20:17.202118 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 14 21:20:17.202407 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 14 21:20:17.202605 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 14 21:20:17.202746 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 14 21:20:17.202757 kernel: PCI host bridge to bus 0000:00
Jul 14 21:20:17.202907 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 14 21:20:17.203032 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 14 21:20:17.203155 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 14 21:20:17.203298 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jul 14 21:20:17.203420 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jul 14 21:20:17.203539 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jul 14 21:20:17.203847 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 14 21:20:17.204041 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 14 21:20:17.204218 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 14 21:20:17.204384 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jul 14 21:20:17.204525 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jul 14 21:20:17.204674 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jul 14 21:20:17.204806 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jul 14 21:20:17.204952 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 14 21:20:17.205133 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 14 21:20:17.205281 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jul 14 21:20:17.205415 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jul 14 21:20:17.205553 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Jul 14 21:20:17.205732 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 14 21:20:17.205872 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jul 14 21:20:17.206003 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jul 14 21:20:17.206133 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Jul 14 21:20:17.206290 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 14 21:20:17.206432 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jul 14 21:20:17.206577 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jul 14 21:20:17.206716 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Jul 14 21:20:17.206849 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jul 14 21:20:17.206997 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 14 21:20:17.207131 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 14 21:20:17.207310 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 14 21:20:17.207450 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jul 14 21:20:17.207636 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jul 14 21:20:17.207792 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 14 21:20:17.207923 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jul 14 21:20:17.207934 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 14 21:20:17.207943 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 14 21:20:17.207951 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 14 21:20:17.207960 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 14 21:20:17.207973 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 14 21:20:17.207981 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 14 21:20:17.207990 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 14 21:20:17.207998 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 14 21:20:17.208006 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 14 21:20:17.208014 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 14 21:20:17.208022 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 14 21:20:17.208031 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 14 21:20:17.208039 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 14 21:20:17.208050 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 14 21:20:17.208058 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 14 21:20:17.208066 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 14 21:20:17.208074 kernel: iommu: Default domain type: Translated
Jul 14 21:20:17.208082 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 14 21:20:17.208090 kernel: efivars: Registered efivars operations
Jul 14 21:20:17.208099 kernel: PCI: Using ACPI for IRQ routing
Jul 14 21:20:17.208107 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 14 21:20:17.208115 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jul 14 21:20:17.208126 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jul 14 21:20:17.208134 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Jul 14 21:20:17.208142 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Jul 14 21:20:17.208150 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jul 14 21:20:17.208158 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jul 14 21:20:17.208166 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Jul 14 21:20:17.208174 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jul 14 21:20:17.208314 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 14 21:20:17.208449 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 14 21:20:17.208594 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 14 21:20:17.208606 kernel: vgaarb: loaded
Jul 14 21:20:17.208614 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 14 21:20:17.208623 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 14 21:20:17.208631 kernel: clocksource: Switched to clocksource kvm-clock
Jul 14 21:20:17.208639 kernel: VFS: Disk quotas dquot_6.6.0
Jul 14 21:20:17.208647 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 14 21:20:17.208656 kernel: pnp: PnP ACPI init
Jul 14 21:20:17.208825 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jul 14 21:20:17.208838 kernel: pnp: PnP ACPI: found 6 devices
Jul 14 21:20:17.208847 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 14 21:20:17.208855 kernel: NET: Registered PF_INET protocol family
Jul 14 21:20:17.208863 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 14 21:20:17.208892 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 14 21:20:17.208903 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 14 21:20:17.208912 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 14 21:20:17.208923 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 14 21:20:17.208932 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 14 21:20:17.208940 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 21:20:17.208949 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 21:20:17.208957 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 14 21:20:17.208966 kernel: NET: Registered PF_XDP protocol family
Jul 14 21:20:17.209102 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jul 14 21:20:17.209243 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jul 14 21:20:17.209372 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 14 21:20:17.209494 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 14 21:20:17.209629 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 14 21:20:17.209756 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jul 14 21:20:17.209886 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jul 14 21:20:17.210012 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jul 14 21:20:17.210023 kernel: PCI: CLS 0 bytes, default 64
Jul 14 21:20:17.210031 kernel: Initialise system trusted keyrings
Jul 14 21:20:17.210044 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 14 21:20:17.210052 kernel: Key type asymmetric registered
Jul 14 21:20:17.210061 kernel: Asymmetric key parser 'x509' registered
Jul 14 21:20:17.210069 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 14 21:20:17.210078 kernel: io scheduler mq-deadline registered
Jul 14 21:20:17.210089 kernel: io scheduler kyber registered
Jul 14 21:20:17.210101 kernel: io scheduler bfq registered
Jul 14 21:20:17.210113 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 14 21:20:17.210125 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 14 21:20:17.210136 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 14 21:20:17.210152 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 14 21:20:17.210163 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 14 21:20:17.210177 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 14 21:20:17.210186 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 14 21:20:17.210194 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 14 21:20:17.210206 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 14 21:20:17.210382 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 14 21:20:17.210402 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 14 21:20:17.210542 kernel: rtc_cmos 00:04: registered as rtc0
Jul 14 21:20:17.210689 kernel: rtc_cmos 00:04: setting system clock to 2025-07-14T21:20:16 UTC (1752528016)
Jul 14 21:20:17.210818 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 14 21:20:17.210829 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 14 21:20:17.210838 kernel: efifb: probing for efifb
Jul 14 21:20:17.210851 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jul 14 21:20:17.210860 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jul 14 21:20:17.210868 kernel: efifb: scrolling: redraw
Jul 14 21:20:17.210877 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 14 21:20:17.210885 kernel: Console: switching to colour frame buffer device 160x50
Jul 14 21:20:17.210894 kernel: fb0: EFI VGA frame buffer device
Jul 14 21:20:17.210902 kernel: pstore: Using crash dump compression: deflate
Jul 14 21:20:17.210910 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 14 21:20:17.210919 kernel: NET: Registered PF_INET6 protocol family
Jul 14 21:20:17.210930 kernel: Segment Routing with IPv6
Jul 14 21:20:17.210938 kernel: In-situ OAM (IOAM) with IPv6
Jul 14 21:20:17.210946 kernel: NET: Registered PF_PACKET protocol family
Jul 14 21:20:17.210955 kernel: Key type dns_resolver registered
Jul 14 21:20:17.210963 kernel: IPI shorthand broadcast: enabled
Jul 14 21:20:17.210972 kernel: sched_clock: Marking stable (1252004186, 442421747)->(3073975580, -1379549647)
Jul 14 21:20:17.210980 kernel: registered taskstats version 1
Jul 14 21:20:17.210988 kernel: Loading compiled-in X.509 certificates
Jul 14 21:20:17.211021 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.97-flatcar: 1e62bd32d7a0d327f0f9cdb91eecd67282bbc369'
Jul 14 21:20:17.211033 kernel: Key type .fscrypt registered
Jul 14 21:20:17.211041 kernel: Key type fscrypt-provisioning registered
Jul 14 21:20:17.211050 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 14 21:20:17.211058 kernel: ima: Allocated hash algorithm: sha1
Jul 14 21:20:17.211067 kernel: ima: No architecture policies found
Jul 14 21:20:17.211075 kernel: clk: Disabling unused clocks
Jul 14 21:20:17.211083 kernel: Freeing unused kernel image (initmem) memory: 43492K
Jul 14 21:20:17.211092 kernel: Write protecting the kernel read-only data: 38912k
Jul 14 21:20:17.211100 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Jul 14 21:20:17.211111 kernel: Run /init as init process
Jul 14 21:20:17.211119 kernel: with arguments:
Jul 14 21:20:17.211128 kernel: /init
Jul 14 21:20:17.211136 kernel: with environment:
Jul 14 21:20:17.211144 kernel: HOME=/
Jul 14 21:20:17.211152 kernel: TERM=linux
Jul 14 21:20:17.211160 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 14 21:20:17.211170 systemd[1]: Successfully made /usr/ read-only.
Jul 14 21:20:17.211183 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 14 21:20:17.211195 systemd[1]: Detected virtualization kvm.
Jul 14 21:20:17.211204 systemd[1]: Detected architecture x86-64.
Jul 14 21:20:17.211213 systemd[1]: Running in initrd.
Jul 14 21:20:17.211224 systemd[1]: No hostname configured, using default hostname.
Jul 14 21:20:17.211247 systemd[1]: Hostname set to .
Jul 14 21:20:17.211259 systemd[1]: Initializing machine ID from VM UUID.
Jul 14 21:20:17.211270 systemd[1]: Queued start job for default target initrd.target.
Jul 14 21:20:17.211287 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 21:20:17.211299 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 21:20:17.211312 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 14 21:20:17.211324 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 14 21:20:17.211335 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 14 21:20:17.211347 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 14 21:20:17.211360 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 14 21:20:17.211376 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 14 21:20:17.211387 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 21:20:17.211399 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 14 21:20:17.211409 systemd[1]: Reached target paths.target - Path Units.
Jul 14 21:20:17.211421 systemd[1]: Reached target slices.target - Slice Units.
Jul 14 21:20:17.211433 systemd[1]: Reached target swap.target - Swaps.
Jul 14 21:20:17.211445 systemd[1]: Reached target timers.target - Timer Units.
Jul 14 21:20:17.211457 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 14 21:20:17.211470 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 14 21:20:17.211479 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 14 21:20:17.211488 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 14 21:20:17.211496 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 21:20:17.211505 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 14 21:20:17.211514 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 21:20:17.211523 systemd[1]: Reached target sockets.target - Socket Units.
Jul 14 21:20:17.211532 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 14 21:20:17.211541 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 14 21:20:17.211552 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 14 21:20:17.211561 systemd[1]: Starting systemd-fsck-usr.service...
Jul 14 21:20:17.211648 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 14 21:20:17.211661 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 14 21:20:17.211672 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:20:17.211681 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 14 21:20:17.211690 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 21:20:17.211740 systemd-journald[193]: Collecting audit messages is disabled.
Jul 14 21:20:17.211785 systemd[1]: Finished systemd-fsck-usr.service.
Jul 14 21:20:17.211800 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 14 21:20:17.211813 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:20:17.211825 systemd-journald[193]: Journal started
Jul 14 21:20:17.211849 systemd-journald[193]: Runtime Journal (/run/log/journal/6c1e3f793b3e4ccabe8ac20ca3b1d2a6) is 6M, max 48.2M, 42.2M free.
Jul 14 21:20:17.197586 systemd-modules-load[195]: Inserted module 'overlay'
Jul 14 21:20:17.216524 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 14 21:20:17.217197 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 14 21:20:17.229607 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 14 21:20:17.232216 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jul 14 21:20:17.233362 kernel: Bridge firewalling registered
Jul 14 21:20:17.234809 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 21:20:17.238210 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 14 21:20:17.269052 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 14 21:20:17.271124 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 14 21:20:17.276534 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 14 21:20:17.311805 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 21:20:17.314840 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 21:20:17.327737 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 14 21:20:17.329924 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 21:20:17.334516 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 14 21:20:17.349975 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 14 21:20:17.352433 dracut-cmdline[228]: dracut-dracut-053
Jul 14 21:20:17.355341 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=742d9dd12838875ad52f45b8bdfc5d537979ff4a28fba2a9d17a1c5d96555ab8
Jul 14 21:20:17.423031 systemd-resolved[235]: Positive Trust Anchors:
Jul 14 21:20:17.423050 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 21:20:17.423081 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 14 21:20:17.435385 systemd-resolved[235]: Defaulting to hostname 'linux'.
Jul 14 21:20:17.437926 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 14 21:20:17.439255 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 14 21:20:17.453603 kernel: SCSI subsystem initialized
Jul 14 21:20:17.463625 kernel: Loading iSCSI transport class v2.0-870.
Jul 14 21:20:17.475637 kernel: iscsi: registered transport (tcp)
Jul 14 21:20:17.533599 kernel: iscsi: registered transport (qla4xxx)
Jul 14 21:20:17.533682 kernel: QLogic iSCSI HBA Driver
Jul 14 21:20:17.588439 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 14 21:20:17.629853 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 14 21:20:17.659293 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 14 21:20:17.659391 kernel: device-mapper: uevent: version 1.0.3
Jul 14 21:20:17.659410 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 14 21:20:17.706621 kernel: raid6: avx2x4 gen() 26472 MB/s
Jul 14 21:20:17.723621 kernel: raid6: avx2x2 gen() 24606 MB/s
Jul 14 21:20:17.751063 kernel: raid6: avx2x1 gen() 23263 MB/s
Jul 14 21:20:17.751166 kernel: raid6: using algorithm avx2x4 gen() 26472 MB/s
Jul 14 21:20:17.768889 kernel: raid6: .... xor() 6607 MB/s, rmw enabled
Jul 14 21:20:17.768992 kernel: raid6: using avx2x2 recovery algorithm
Jul 14 21:20:17.826621 kernel: xor: automatically using best checksumming function avx
Jul 14 21:20:18.015621 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 14 21:20:18.028481 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 14 21:20:18.077768 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 21:20:18.096858 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Jul 14 21:20:18.103386 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 21:20:18.112752 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 14 21:20:18.129401 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation
Jul 14 21:20:18.168290 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 14 21:20:18.181792 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 14 21:20:18.255122 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 21:20:18.290789 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 14 21:20:18.306899 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 14 21:20:18.362600 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 14 21:20:18.383724 kernel: cryptd: max_cpu_qlen set to 1000
Jul 14 21:20:18.383744 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 14 21:20:18.383926 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 14 21:20:18.383939 kernel: GPT:9289727 != 19775487
Jul 14 21:20:18.383950 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 14 21:20:18.383961 kernel: GPT:9289727 != 19775487
Jul 14 21:20:18.383972 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 14 21:20:18.383992 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:20:18.384003 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 14 21:20:18.384013 kernel: AES CTR mode by8 optimization enabled
Jul 14 21:20:18.371097 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 21:20:18.379803 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 21:20:18.381473 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 14 21:20:18.404593 kernel: libata version 3.00 loaded.
Jul 14 21:20:18.408686 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 14 21:20:18.412650 kernel: ahci 0000:00:1f.2: version 3.0
Jul 14 21:20:18.412850 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 14 21:20:18.413777 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 14 21:20:18.416979 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jul 14 21:20:18.417230 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 14 21:20:18.413979 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 21:20:18.469437 kernel: scsi host0: ahci
Jul 14 21:20:18.469695 kernel: scsi host1: ahci
Jul 14 21:20:18.469877 kernel: scsi host2: ahci
Jul 14 21:20:18.470040 kernel: scsi host3: ahci
Jul 14 21:20:18.464813 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 21:20:18.490046 kernel: BTRFS: device fsid 20c828c6-fbdc-46fc-809f-d88391a18648 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (476)
Jul 14 21:20:18.471856 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 21:20:18.494216 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (466)
Jul 14 21:20:18.472103 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:20:18.473997 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:20:18.499900 kernel: scsi host4: ahci
Jul 14 21:20:18.531334 kernel: scsi host5: ahci
Jul 14 21:20:18.531554 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jul 14 21:20:18.531590 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jul 14 21:20:18.531616 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jul 14 21:20:18.499934 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:20:18.537551 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jul 14 21:20:18.537589 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jul 14 21:20:18.537606 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jul 14 21:20:18.533174 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 14 21:20:18.533647 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 14 21:20:18.548172 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:20:18.566893 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 14 21:20:18.581048 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 14 21:20:18.600935 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 14 21:20:18.602439 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 14 21:20:18.618207 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 14 21:20:18.639795 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 14 21:20:18.671883 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 21:20:18.692611 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 21:20:18.860607 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 14 21:20:18.860694 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 14 21:20:18.861617 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 14 21:20:18.862625 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 14 21:20:18.863600 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 14 21:20:18.863624 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 14 21:20:18.864791 kernel: ata3.00: applying bridge limits
Jul 14 21:20:18.865606 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 14 21:20:18.866607 kernel: ata3.00: configured for UDMA/100
Jul 14 21:20:18.868596 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 14 21:20:18.900777 disk-uuid[568]: Primary Header is updated.
Jul 14 21:20:18.900777 disk-uuid[568]: Secondary Entries is updated.
Jul 14 21:20:18.900777 disk-uuid[568]: Secondary Header is updated.
Jul 14 21:20:18.904770 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:20:18.908141 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 14 21:20:18.908518 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 14 21:20:18.908605 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:20:18.922698 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 14 21:20:19.912618 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:20:19.912725 disk-uuid[579]: The operation has completed successfully.
Jul 14 21:20:19.949461 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 14 21:20:19.949641 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 14 21:20:20.014838 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 14 21:20:20.019326 sh[595]: Success
Jul 14 21:20:20.036601 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 14 21:20:20.085939 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 14 21:20:20.097443 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 14 21:20:20.101211 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 14 21:20:20.113533 kernel: BTRFS info (device dm-0): first mount of filesystem 20c828c6-fbdc-46fc-809f-d88391a18648
Jul 14 21:20:20.113623 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 14 21:20:20.113640 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 14 21:20:20.114667 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 14 21:20:20.116055 kernel: BTRFS info (device dm-0): using free space tree
Jul 14 21:20:20.122168 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 14 21:20:20.124099 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 14 21:20:20.134872 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 14 21:20:20.138870 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 14 21:20:20.156954 kernel: BTRFS info (device vda6): first mount of filesystem d1c4625a-c440-4f55-aad0-93aafe26d63c
Jul 14 21:20:20.157016 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 14 21:20:20.157028 kernel: BTRFS info (device vda6): using free space tree
Jul 14 21:20:20.160616 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 21:20:20.165604 kernel: BTRFS info (device vda6): last unmount of filesystem d1c4625a-c440-4f55-aad0-93aafe26d63c
Jul 14 21:20:20.259086 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 14 21:20:20.285794 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 14 21:20:20.326621 systemd-networkd[771]: lo: Link UP
Jul 14 21:20:20.326632 systemd-networkd[771]: lo: Gained carrier
Jul 14 21:20:20.328501 systemd-networkd[771]: Enumeration completed
Jul 14 21:20:20.328649 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 14 21:20:20.328933 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 21:20:20.328938 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 14 21:20:20.337194 systemd-networkd[771]: eth0: Link UP
Jul 14 21:20:20.337198 systemd-networkd[771]: eth0: Gained carrier
Jul 14 21:20:20.337205 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 21:20:20.337479 systemd[1]: Reached target network.target - Network.
Jul 14 21:20:20.363647 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.72/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 14 21:20:20.416704 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 14 21:20:20.435740 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 14 21:20:20.558814 ignition[776]: Ignition 2.20.0
Jul 14 21:20:20.558827 ignition[776]: Stage: fetch-offline
Jul 14 21:20:20.558878 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:20:20.558889 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:20:20.558988 ignition[776]: parsed url from cmdline: ""
Jul 14 21:20:20.558992 ignition[776]: no config URL provided
Jul 14 21:20:20.558998 ignition[776]: reading system config file "/usr/lib/ignition/user.ign"
Jul 14 21:20:20.559007 ignition[776]: no config at "/usr/lib/ignition/user.ign"
Jul 14 21:20:20.559044 ignition[776]: op(1): [started] loading QEMU firmware config module
Jul 14 21:20:20.559049 ignition[776]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 14 21:20:20.569482 ignition[776]: op(1): [finished] loading QEMU firmware config module
Jul 14 21:20:20.606510 ignition[776]: parsing config with SHA512: 3cd4b64d601610e7a57632e2b3001c3c32f804fc9a867c0204d349a48c67d58daddbf006ec5e4df9eb7b4e9f4d7f100e430efd5909aee98414703f2ee3b014e8
Jul 14 21:20:20.610671 unknown[776]: fetched base config from "system"
Jul 14 21:20:20.610681 unknown[776]: fetched user config from "qemu"
Jul 14 21:20:20.612880 ignition[776]: fetch-offline: fetch-offline passed
Jul 14 21:20:20.613003 ignition[776]: Ignition finished successfully
Jul 14 21:20:20.615500 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 14 21:20:20.618087 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 14 21:20:20.653835 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 14 21:20:20.699091 ignition[786]: Ignition 2.20.0
Jul 14 21:20:20.699103 ignition[786]: Stage: kargs
Jul 14 21:20:20.699276 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:20:20.699288 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:20:20.725590 ignition[786]: kargs: kargs passed
Jul 14 21:20:20.725647 ignition[786]: Ignition finished successfully
Jul 14 21:20:20.730209 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 14 21:20:20.739791 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 14 21:20:20.759402 ignition[795]: Ignition 2.20.0
Jul 14 21:20:20.759414 ignition[795]: Stage: disks
Jul 14 21:20:20.759583 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:20:20.759596 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:20:20.764548 ignition[795]: disks: disks passed
Jul 14 21:20:20.764619 ignition[795]: Ignition finished successfully
Jul 14 21:20:20.767556 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 14 21:20:20.768847 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 14 21:20:20.770626 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 14 21:20:20.772897 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 14 21:20:20.775064 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 14 21:20:20.776174 systemd[1]: Reached target basic.target - Basic System.
Jul 14 21:20:20.790869 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 14 21:20:20.870424 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 14 21:20:21.139253 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 14 21:20:21.170700 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 14 21:20:21.278618 kernel: EXT4-fs (vda9): mounted filesystem b14457c6-8295-431a-8b27-d773df8376ad r/w with ordered data mode. Quota mode: none.
Jul 14 21:20:21.279723 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 14 21:20:21.281183 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 14 21:20:21.293707 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 14 21:20:21.295729 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 14 21:20:21.297001 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 14 21:20:21.297048 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 14 21:20:21.308945 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (814)
Jul 14 21:20:21.308977 kernel: BTRFS info (device vda6): first mount of filesystem d1c4625a-c440-4f55-aad0-93aafe26d63c
Jul 14 21:20:21.308992 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 14 21:20:21.309007 kernel: BTRFS info (device vda6): using free space tree
Jul 14 21:20:21.297072 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 14 21:20:21.312740 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 21:20:21.303597 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 14 21:20:21.310044 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 14 21:20:21.314201 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 14 21:20:21.362143 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Jul 14 21:20:21.368036 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Jul 14 21:20:21.373404 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Jul 14 21:20:21.379335 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 14 21:20:21.492987 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 14 21:20:21.501299 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 14 21:20:21.503352 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 14 21:20:21.523341 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 14 21:20:21.525173 kernel: BTRFS info (device vda6): last unmount of filesystem d1c4625a-c440-4f55-aad0-93aafe26d63c
Jul 14 21:20:21.537149 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 14 21:20:21.578724 ignition[932]: INFO : Ignition 2.20.0
Jul 14 21:20:21.578724 ignition[932]: INFO : Stage: mount
Jul 14 21:20:21.580847 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 21:20:21.580847 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:20:21.580847 ignition[932]: INFO : mount: mount passed
Jul 14 21:20:21.580847 ignition[932]: INFO : Ignition finished successfully
Jul 14 21:20:21.586667 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 14 21:20:21.595889 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 14 21:20:22.281906 systemd-networkd[771]: eth0: Gained IPv6LL
Jul 14 21:20:22.294978 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 14 21:20:22.307614 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (942) Jul 14 21:20:22.310464 kernel: BTRFS info (device vda6): first mount of filesystem d1c4625a-c440-4f55-aad0-93aafe26d63c Jul 14 21:20:22.310528 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 21:20:22.310557 kernel: BTRFS info (device vda6): using free space tree Jul 14 21:20:22.315687 kernel: BTRFS info (device vda6): auto enabling async discard Jul 14 21:20:22.318311 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 14 21:20:22.352266 ignition[959]: INFO : Ignition 2.20.0 Jul 14 21:20:22.352266 ignition[959]: INFO : Stage: files Jul 14 21:20:22.354513 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 21:20:22.354513 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 21:20:22.357711 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Jul 14 21:20:22.360649 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 14 21:20:22.360649 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 14 21:20:22.367020 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 14 21:20:22.369588 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 14 21:20:22.371730 unknown[959]: wrote ssh authorized keys file for user: core Jul 14 21:20:22.372998 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 14 21:20:22.375945 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 14 21:20:22.378751 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 14 21:20:22.420249 
ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 14 21:20:22.723904 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 14 21:20:22.723904 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 14 21:20:22.723904 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 14 21:20:22.842642 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 14 21:20:22.994311 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 14 21:20:22.994311 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 14 21:20:22.998116 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 14 21:20:22.998116 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 14 21:20:22.998116 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 14 21:20:22.998116 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 21:20:22.998116 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 21:20:22.998116 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 14 21:20:22.998116 ignition[959]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 14 21:20:22.998116 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 21:20:22.998116 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 21:20:22.998116 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 21:20:22.998116 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 21:20:22.998116 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 21:20:22.998116 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 14 21:20:23.388845 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 14 21:20:23.992510 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 21:20:23.992510 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 14 21:20:24.364185 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 14 21:20:24.366523 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 14 21:20:24.366523 
ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 14 21:20:24.366523 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 14 21:20:24.370899 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 21:20:24.370899 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 21:20:24.370899 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 14 21:20:24.370899 ignition[959]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 14 21:20:24.398684 ignition[959]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 21:20:24.405269 ignition[959]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 21:20:24.406851 ignition[959]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 14 21:20:24.406851 ignition[959]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 14 21:20:24.406851 ignition[959]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 14 21:20:24.406851 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 14 21:20:24.406851 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 14 21:20:24.406851 ignition[959]: INFO : files: files passed Jul 14 21:20:24.406851 ignition[959]: INFO : Ignition finished successfully Jul 14 21:20:24.418924 systemd[1]: Finished ignition-files.service - Ignition (files). 
Jul 14 21:20:24.427934 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 14 21:20:24.431316 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 14 21:20:24.437273 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 14 21:20:24.438372 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 14 21:20:24.441541 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Jul 14 21:20:24.444871 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 21:20:24.444871 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 14 21:20:24.448201 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 21:20:24.452629 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 14 21:20:24.453096 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 14 21:20:24.464933 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 14 21:20:24.496612 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 14 21:20:24.496759 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 14 21:20:24.499258 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 14 21:20:24.501516 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 14 21:20:24.501685 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 14 21:20:24.502699 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 14 21:20:24.564885 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Jul 14 21:20:24.579052 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 14 21:20:24.591532 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 14 21:20:24.591768 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 21:20:24.611498 systemd[1]: Stopped target timers.target - Timer Units. Jul 14 21:20:24.613958 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 14 21:20:24.614172 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 14 21:20:24.619250 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 14 21:20:24.619442 systemd[1]: Stopped target basic.target - Basic System. Jul 14 21:20:24.623481 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 14 21:20:24.624904 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 14 21:20:24.625316 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 14 21:20:24.631267 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 14 21:20:24.632624 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 14 21:20:24.634993 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 14 21:20:24.638928 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 14 21:20:24.641306 systemd[1]: Stopped target swap.target - Swaps. Jul 14 21:20:24.641437 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 14 21:20:24.641609 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 14 21:20:24.646690 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 14 21:20:24.649956 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 21:20:24.668421 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jul 14 21:20:24.669438 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 21:20:24.672099 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 14 21:20:24.673134 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 14 21:20:24.675435 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 14 21:20:24.676499 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 14 21:20:24.678885 systemd[1]: Stopped target paths.target - Path Units. Jul 14 21:20:24.680628 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 14 21:20:24.681769 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 21:20:24.684434 systemd[1]: Stopped target slices.target - Slice Units. Jul 14 21:20:24.686268 systemd[1]: Stopped target sockets.target - Socket Units. Jul 14 21:20:24.688364 systemd[1]: iscsid.socket: Deactivated successfully. Jul 14 21:20:24.689398 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 14 21:20:24.691799 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 14 21:20:24.692857 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 14 21:20:24.695056 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 14 21:20:24.696224 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 14 21:20:24.698720 systemd[1]: ignition-files.service: Deactivated successfully. Jul 14 21:20:24.699680 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 14 21:20:24.713743 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 14 21:20:24.716431 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 14 21:20:24.718202 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jul 14 21:20:24.719299 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 21:20:24.721635 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 14 21:20:24.722721 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 14 21:20:24.729167 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 14 21:20:24.730215 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 14 21:20:24.742718 ignition[1014]: INFO : Ignition 2.20.0 Jul 14 21:20:24.742718 ignition[1014]: INFO : Stage: umount Jul 14 21:20:24.744756 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 21:20:24.744756 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 21:20:24.744756 ignition[1014]: INFO : umount: umount passed Jul 14 21:20:24.744756 ignition[1014]: INFO : Ignition finished successfully Jul 14 21:20:24.746108 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 14 21:20:24.753182 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 14 21:20:24.753378 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 14 21:20:24.755750 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 14 21:20:24.755905 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 14 21:20:24.758139 systemd[1]: Stopped target network.target - Network. Jul 14 21:20:24.760013 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 14 21:20:24.760157 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 14 21:20:24.762121 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 14 21:20:24.762230 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 14 21:20:24.764432 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 14 21:20:24.764523 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Jul 14 21:20:24.766721 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 14 21:20:24.766808 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 14 21:20:24.768825 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 14 21:20:24.768921 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 14 21:20:24.771085 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 14 21:20:24.773000 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 14 21:20:24.781714 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 14 21:20:24.781897 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 14 21:20:24.787460 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 14 21:20:24.787863 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 14 21:20:24.788010 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 14 21:20:24.799966 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 14 21:20:24.801218 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 14 21:20:24.801321 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 14 21:20:24.814697 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 14 21:20:24.815682 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 14 21:20:24.815742 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 14 21:20:24.817908 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 21:20:24.817961 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 14 21:20:24.820196 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 14 21:20:24.820250 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jul 14 21:20:24.822431 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 14 21:20:24.822482 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 21:20:24.824639 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 21:20:24.826881 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 14 21:20:24.826952 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 14 21:20:24.835484 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 14 21:20:24.835653 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 14 21:20:24.841342 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 14 21:20:24.841534 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 21:20:24.843822 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 14 21:20:24.843872 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 14 21:20:24.845857 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 14 21:20:24.845898 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 21:20:24.847941 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 14 21:20:24.847995 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 14 21:20:24.850138 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 14 21:20:24.850194 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 14 21:20:24.852104 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 14 21:20:24.852159 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 21:20:24.859727 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Jul 14 21:20:24.860041 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 14 21:20:24.860111 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 21:20:24.864148 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 14 21:20:24.864204 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 14 21:20:24.866974 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 14 21:20:24.867048 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 21:20:24.869701 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 21:20:24.869756 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 21:20:24.873219 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 14 21:20:24.873286 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 14 21:20:24.873759 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 14 21:20:24.873897 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 14 21:20:24.876871 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 14 21:20:24.885895 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 14 21:20:24.895219 systemd[1]: Switching root. Jul 14 21:20:24.937153 systemd-journald[193]: Journal stopped Jul 14 21:20:27.377616 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Jul 14 21:20:27.377733 kernel: SELinux: policy capability network_peer_controls=1 Jul 14 21:20:27.377754 kernel: SELinux: policy capability open_perms=1 Jul 14 21:20:27.377777 kernel: SELinux: policy capability extended_socket_class=1 Jul 14 21:20:27.377796 kernel: SELinux: policy capability always_check_network=0 Jul 14 21:20:27.377811 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 14 21:20:27.377827 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 14 21:20:27.377853 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 14 21:20:27.377869 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 14 21:20:27.377885 kernel: audit: type=1403 audit(1752528026.067:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 14 21:20:27.377910 systemd[1]: Successfully loaded SELinux policy in 49.505ms. Jul 14 21:20:27.377947 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.195ms. Jul 14 21:20:27.377980 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 14 21:20:27.377999 systemd[1]: Detected virtualization kvm. Jul 14 21:20:27.378015 systemd[1]: Detected architecture x86-64. Jul 14 21:20:27.378035 systemd[1]: Detected first boot. Jul 14 21:20:27.378051 systemd[1]: Initializing machine ID from VM UUID. Jul 14 21:20:27.378072 zram_generator::config[1062]: No configuration found. 
Jul 14 21:20:27.378097 kernel: Guest personality initialized and is inactive Jul 14 21:20:27.378114 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 14 21:20:27.378130 kernel: Initialized host personality Jul 14 21:20:27.379075 kernel: NET: Registered PF_VSOCK protocol family Jul 14 21:20:27.379099 systemd[1]: Populated /etc with preset unit settings. Jul 14 21:20:27.379117 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 14 21:20:27.379134 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 14 21:20:27.379150 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 14 21:20:27.379166 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 14 21:20:27.379195 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 14 21:20:27.379212 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 14 21:20:27.379237 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 14 21:20:27.379255 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 14 21:20:27.379272 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 14 21:20:27.379289 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 14 21:20:27.379308 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 14 21:20:27.379332 systemd[1]: Created slice user.slice - User and Session Slice. Jul 14 21:20:27.379361 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 21:20:27.379382 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 21:20:27.379400 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jul 14 21:20:27.379418 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 14 21:20:27.379435 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 14 21:20:27.379454 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 14 21:20:27.379471 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 14 21:20:27.379489 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 21:20:27.379515 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 14 21:20:27.379533 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 14 21:20:27.379551 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 14 21:20:27.379590 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 14 21:20:27.379609 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 21:20:27.379628 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 14 21:20:27.379654 systemd[1]: Reached target slices.target - Slice Units. Jul 14 21:20:27.379675 systemd[1]: Reached target swap.target - Swaps. Jul 14 21:20:27.379698 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 14 21:20:27.379733 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 14 21:20:27.379753 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 14 21:20:27.379770 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 14 21:20:27.379787 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 14 21:20:27.379803 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 14 21:20:27.379822 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 14 21:20:27.379839 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 14 21:20:27.379856 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 14 21:20:27.379872 systemd[1]: Mounting media.mount - External Media Directory... Jul 14 21:20:27.379901 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 21:20:27.379918 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 14 21:20:27.379934 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 14 21:20:27.379960 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 14 21:20:27.379978 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 14 21:20:27.379995 systemd[1]: Reached target machines.target - Containers. Jul 14 21:20:27.380011 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 14 21:20:27.380029 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 21:20:27.380055 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 14 21:20:27.380071 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 14 21:20:27.380088 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 21:20:27.380104 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 14 21:20:27.380120 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 21:20:27.380137 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jul 14 21:20:27.380161 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 21:20:27.380178 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 14 21:20:27.380195 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 14 21:20:27.380222 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 14 21:20:27.380239 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 14 21:20:27.380256 systemd[1]: Stopped systemd-fsck-usr.service. Jul 14 21:20:27.380273 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 14 21:20:27.380294 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 14 21:20:27.380314 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 14 21:20:27.380330 kernel: loop: module loaded Jul 14 21:20:27.380346 kernel: fuse: init (API version 7.39) Jul 14 21:20:27.380363 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 14 21:20:27.380389 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 14 21:20:27.380407 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 14 21:20:27.380454 systemd-journald[1126]: Collecting audit messages is disabled. Jul 14 21:20:27.380486 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 14 21:20:27.380503 systemd[1]: verity-setup.service: Deactivated successfully. Jul 14 21:20:27.380521 systemd[1]: Stopped verity-setup.service. 
Jul 14 21:20:27.381924 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 21:20:27.381950 systemd-journald[1126]: Journal started Jul 14 21:20:27.381993 systemd-journald[1126]: Runtime Journal (/run/log/journal/6c1e3f793b3e4ccabe8ac20ca3b1d2a6) is 6M, max 48.2M, 42.2M free. Jul 14 21:20:27.038186 systemd[1]: Queued start job for default target multi-user.target. Jul 14 21:20:27.051476 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 14 21:20:27.052288 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 14 21:20:27.393752 systemd[1]: Started systemd-journald.service - Journal Service. Jul 14 21:20:27.395714 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 14 21:20:27.397001 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 14 21:20:27.415970 systemd[1]: Mounted media.mount - External Media Directory. Jul 14 21:20:27.417428 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 14 21:20:27.418765 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 14 21:20:27.420095 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 14 21:20:27.421517 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 21:20:27.423225 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 14 21:20:27.423562 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 14 21:20:27.425373 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:20:27.425889 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 21:20:27.427683 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:20:27.428002 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jul 14 21:20:27.429905 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 14 21:20:27.430228 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 14 21:20:27.433824 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 21:20:27.434169 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 21:20:27.435900 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 14 21:20:27.437544 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 14 21:20:27.439260 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 14 21:20:27.456898 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 14 21:20:27.459536 kernel: ACPI: bus type drm_connector registered
Jul 14 21:20:27.468894 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 14 21:20:27.476845 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 14 21:20:27.478315 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 14 21:20:27.478376 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 14 21:20:27.481258 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 14 21:20:27.486743 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 14 21:20:27.492667 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 14 21:20:27.494138 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 21:20:27.501425 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 14 21:20:27.508857 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 14 21:20:27.510373 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 14 21:20:27.513197 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 14 21:20:27.514445 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 14 21:20:27.522837 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 14 21:20:27.527807 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 14 21:20:27.532148 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 14 21:20:27.540181 systemd-journald[1126]: Time spent on flushing to /var/log/journal/6c1e3f793b3e4ccabe8ac20ca3b1d2a6 is 28.178ms for 1054 entries.
Jul 14 21:20:27.540181 systemd-journald[1126]: System Journal (/var/log/journal/6c1e3f793b3e4ccabe8ac20ca3b1d2a6) is 8M, max 195.6M, 187.6M free.
Jul 14 21:20:27.596199 systemd-journald[1126]: Received client request to flush runtime journal.
Jul 14 21:20:27.596247 kernel: loop0: detected capacity change from 0 to 221472
Jul 14 21:20:27.539240 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 14 21:20:27.542552 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 14 21:20:27.543925 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 14 21:20:27.549549 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 14 21:20:27.557071 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 14 21:20:27.559025 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 14 21:20:27.561415 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 14 21:20:27.564523 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 14 21:20:27.571128 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 14 21:20:27.582846 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 14 21:20:27.594942 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 14 21:20:27.601340 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 14 21:20:27.605533 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Jul 14 21:20:27.605558 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Jul 14 21:20:27.616439 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 14 21:20:27.618710 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 14 21:20:27.624892 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 14 21:20:27.627609 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 21:20:27.633396 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 14 21:20:27.649399 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 14 21:20:27.659593 udevadm[1200]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 14 21:20:27.661050 kernel: loop1: detected capacity change from 0 to 147912
Jul 14 21:20:27.670773 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 14 21:20:27.677848 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 14 21:20:27.709766 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Jul 14 21:20:27.709819 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Jul 14 21:20:27.712903 kernel: loop2: detected capacity change from 0 to 138176
Jul 14 21:20:27.718632 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 21:20:27.771806 kernel: loop3: detected capacity change from 0 to 221472
Jul 14 21:20:27.785608 kernel: loop4: detected capacity change from 0 to 147912
Jul 14 21:20:27.802605 kernel: loop5: detected capacity change from 0 to 138176
Jul 14 21:20:27.821005 (sd-merge)[1210]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 14 21:20:27.821902 (sd-merge)[1210]: Merged extensions into '/usr'.
Jul 14 21:20:27.830914 systemd[1]: Reload requested from client PID 1179 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 14 21:20:27.830933 systemd[1]: Reloading...
Jul 14 21:20:27.916619 zram_generator::config[1235]: No configuration found.
Jul 14 21:20:28.043749 ldconfig[1166]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 14 21:20:28.123684 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 21:20:28.227453 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 14 21:20:28.228693 systemd[1]: Reloading finished in 397 ms.
Jul 14 21:20:28.388563 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 14 21:20:28.424545 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 14 21:20:28.444330 systemd[1]: Starting ensure-sysext.service...
Jul 14 21:20:28.447332 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 14 21:20:28.479609 systemd[1]: Reload requested from client PID 1275 ('systemctl') (unit ensure-sysext.service)...
Jul 14 21:20:28.479644 systemd[1]: Reloading...
Jul 14 21:20:28.486225 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 14 21:20:28.486673 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 14 21:20:28.488083 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 14 21:20:28.488485 systemd-tmpfiles[1276]: ACLs are not supported, ignoring.
Jul 14 21:20:28.488616 systemd-tmpfiles[1276]: ACLs are not supported, ignoring.
Jul 14 21:20:28.493938 systemd-tmpfiles[1276]: Detected autofs mount point /boot during canonicalization of boot.
Jul 14 21:20:28.493951 systemd-tmpfiles[1276]: Skipping /boot
Jul 14 21:20:28.509435 systemd-tmpfiles[1276]: Detected autofs mount point /boot during canonicalization of boot.
Jul 14 21:20:28.509610 systemd-tmpfiles[1276]: Skipping /boot
Jul 14 21:20:28.562752 zram_generator::config[1308]: No configuration found.
Jul 14 21:20:28.693077 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 21:20:28.770194 systemd[1]: Reloading finished in 289 ms.
Jul 14 21:20:28.782780 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 14 21:20:28.805372 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 21:20:28.830292 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 14 21:20:28.834012 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 14 21:20:28.837408 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 14 21:20:28.845069 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 14 21:20:28.857120 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 21:20:28.861645 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 14 21:20:28.867240 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 21:20:28.867428 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 21:20:28.870806 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 21:20:28.881883 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 21:20:28.885397 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 21:20:28.886632 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 21:20:28.886955 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 14 21:20:28.915004 augenrules[1372]: No rules
Jul 14 21:20:28.890797 systemd-udevd[1355]: Using default interface naming scheme 'v255'.
Jul 14 21:20:28.891108 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 14 21:20:28.915551 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 21:20:28.918219 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 14 21:20:28.918781 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 14 21:20:28.921291 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 14 21:20:28.924170 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 21:20:28.924787 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 21:20:28.926955 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 21:20:28.927436 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 21:20:28.929563 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 21:20:28.929938 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 21:20:28.943084 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 21:20:28.943438 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 21:20:28.957999 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 21:20:28.963378 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 21:20:28.974642 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 21:20:28.977765 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 21:20:28.977946 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 14 21:20:28.986917 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 14 21:20:28.988257 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 21:20:28.989987 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 21:20:28.996170 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 14 21:20:28.998598 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 14 21:20:29.064608 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1389)
Jul 14 21:20:29.100146 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 21:20:29.100435 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 21:20:29.103409 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 14 21:20:29.107958 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 21:20:29.108194 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 21:20:29.110421 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 14 21:20:29.112886 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 21:20:29.113232 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 21:20:29.148564 systemd[1]: Finished ensure-sysext.service.
Jul 14 21:20:29.158704 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 14 21:20:29.162221 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 14 21:20:29.164755 kernel: ACPI: button: Power Button [PWRF]
Jul 14 21:20:29.196316 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jul 14 21:20:29.196883 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 14 21:20:29.197254 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jul 14 21:20:29.197606 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 14 21:20:29.191032 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 14 21:20:29.200697 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 14 21:20:29.204693 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 21:20:29.217584 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 14 21:20:29.219234 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 21:20:29.222413 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 21:20:29.226945 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 14 21:20:29.280531 systemd-resolved[1351]: Positive Trust Anchors:
Jul 14 21:20:29.280886 systemd-resolved[1351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 21:20:29.281753 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 14 21:20:29.282926 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 21:20:29.288895 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 21:20:29.292947 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 21:20:29.296085 systemd-resolved[1351]: Defaulting to hostname 'linux'.
Jul 14 21:20:29.298956 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 14 21:20:29.300484 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 14 21:20:29.303428 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 14 21:20:29.309916 augenrules[1427]: /sbin/augenrules: No change
Jul 14 21:20:29.313075 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 14 21:20:29.314483 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 14 21:20:29.314536 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 21:20:29.315294 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 14 21:20:29.317465 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 21:20:29.317808 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 21:20:29.325303 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 14 21:20:29.330744 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 14 21:20:29.331156 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 14 21:20:29.335522 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 21:20:29.336025 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 21:20:29.340844 augenrules[1456]: No rules
Jul 14 21:20:29.341336 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 21:20:29.341728 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 21:20:29.344624 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 14 21:20:29.344995 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 14 21:20:29.346907 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 14 21:20:29.362923 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 14 21:20:29.363135 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 14 21:20:29.423522 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:20:29.483482 kernel: kvm_amd: TSC scaling supported
Jul 14 21:20:29.483656 kernel: kvm_amd: Nested Virtualization enabled
Jul 14 21:20:29.483681 kernel: kvm_amd: Nested Paging enabled
Jul 14 21:20:29.483700 kernel: kvm_amd: LBR virtualization supported
Jul 14 21:20:29.485632 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 14 21:20:29.485679 kernel: kvm_amd: Virtual GIF supported
Jul 14 21:20:29.499387 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 21:20:29.499784 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:20:29.504215 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 14 21:20:29.514631 kernel: mousedev: PS/2 mouse device common for all mice
Jul 14 21:20:29.533657 kernel: EDAC MC: Ver: 3.0.0
Jul 14 21:20:29.535025 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:20:29.566920 systemd-networkd[1443]: lo: Link UP
Jul 14 21:20:29.566944 systemd-networkd[1443]: lo: Gained carrier
Jul 14 21:20:29.569761 systemd-networkd[1443]: Enumeration completed
Jul 14 21:20:29.570020 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 14 21:20:29.571745 systemd[1]: Reached target network.target - Network.
Jul 14 21:20:29.571773 systemd-networkd[1443]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 21:20:29.571780 systemd-networkd[1443]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 14 21:20:29.573476 systemd-networkd[1443]: eth0: Link UP
Jul 14 21:20:29.573481 systemd-networkd[1443]: eth0: Gained carrier
Jul 14 21:20:29.573500 systemd-networkd[1443]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 21:20:29.582986 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 14 21:20:29.586776 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 14 21:20:29.588463 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 14 21:20:29.588711 systemd-networkd[1443]: eth0: DHCPv4 address 10.0.0.72/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 14 21:20:29.596919 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 14 21:20:29.612806 lvm[1480]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 14 21:20:29.615102 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 14 21:20:29.620054 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:20:29.625132 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 14 21:20:30.307508 systemd-timesyncd[1445]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 14 21:20:30.307574 systemd-timesyncd[1445]: Initial clock synchronization to Mon 2025-07-14 21:20:30.307370 UTC.
Jul 14 21:20:30.307624 systemd-resolved[1351]: Clock change detected. Flushing caches.
Jul 14 21:20:30.308406 systemd[1]: Reached target time-set.target - System Time Set.
Jul 14 21:20:30.332014 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 14 21:20:30.333927 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 14 21:20:30.335304 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 14 21:20:30.336743 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 14 21:20:30.338258 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 14 21:20:30.340000 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 14 21:20:30.341712 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 14 21:20:30.343189 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 14 21:20:30.344531 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 14 21:20:30.344584 systemd[1]: Reached target paths.target - Path Units.
Jul 14 21:20:30.345672 systemd[1]: Reached target timers.target - Timer Units.
Jul 14 21:20:30.347720 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 14 21:20:30.351084 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 14 21:20:30.356040 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 14 21:20:30.357967 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 14 21:20:30.359340 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 14 21:20:30.368735 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 14 21:20:30.370932 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 14 21:20:30.373806 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 14 21:20:30.375819 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 14 21:20:30.377208 systemd[1]: Reached target sockets.target - Socket Units.
Jul 14 21:20:30.378461 systemd[1]: Reached target basic.target - Basic System.
Jul 14 21:20:30.379706 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 14 21:20:30.379752 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 14 21:20:30.381496 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 14 21:20:30.384882 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 14 21:20:30.391395 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 14 21:20:30.394552 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 14 21:20:30.395870 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 14 21:20:30.397404 lvm[1488]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 14 21:20:30.400319 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 14 21:20:30.409875 jq[1491]: false
Jul 14 21:20:30.410152 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 14 21:20:30.414336 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 14 21:20:30.419423 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 14 21:20:30.428174 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 14 21:20:30.431332 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 14 21:20:30.432237 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 14 21:20:30.433380 systemd[1]: Starting update-engine.service - Update Engine...
Jul 14 21:20:30.438119 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 14 21:20:30.442793 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 14 21:20:30.443306 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 14 21:20:30.443829 dbus-daemon[1490]: [system] SELinux support is enabled
Jul 14 21:20:30.445783 systemd[1]: motdgen.service: Deactivated successfully.
Jul 14 21:20:30.447334 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 14 21:20:30.449474 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 14 21:20:30.456645 extend-filesystems[1492]: Found loop3
Jul 14 21:20:30.464249 extend-filesystems[1492]: Found loop4
Jul 14 21:20:30.464249 extend-filesystems[1492]: Found loop5
Jul 14 21:20:30.464249 extend-filesystems[1492]: Found sr0
Jul 14 21:20:30.464249 extend-filesystems[1492]: Found vda
Jul 14 21:20:30.464249 extend-filesystems[1492]: Found vda1
Jul 14 21:20:30.464249 extend-filesystems[1492]: Found vda2
Jul 14 21:20:30.464249 extend-filesystems[1492]: Found vda3
Jul 14 21:20:30.464249 extend-filesystems[1492]: Found usr
Jul 14 21:20:30.464249 extend-filesystems[1492]: Found vda4
Jul 14 21:20:30.464249 extend-filesystems[1492]: Found vda6
Jul 14 21:20:30.464249 extend-filesystems[1492]: Found vda7
Jul 14 21:20:30.464249 extend-filesystems[1492]: Found vda9
Jul 14 21:20:30.464249 extend-filesystems[1492]: Checking size of /dev/vda9
Jul 14 21:20:30.463550 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 14 21:20:30.463966 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 14 21:20:30.466509 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 14 21:20:30.486741 jq[1505]: true
Jul 14 21:20:30.488641 update_engine[1504]: I20250714 21:20:30.483851 1504 main.cc:92] Flatcar Update Engine starting
Jul 14 21:20:30.488963 extend-filesystems[1492]: Resized partition /dev/vda9
Jul 14 21:20:30.492541 extend-filesystems[1522]: resize2fs 1.47.1 (20-May-2024)
Jul 14 21:20:30.497524 update_engine[1504]: I20250714 21:20:30.491488 1504 update_check_scheduler.cc:74] Next update check in 10m24s
Jul 14 21:20:30.497767 (ntainerd)[1514]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 14 21:20:30.504199 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 14 21:20:30.500291 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 14 21:20:30.500358 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 14 21:20:30.504665 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 14 21:20:30.504786 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 14 21:20:30.505545 jq[1521]: true
Jul 14 21:20:30.516461 systemd[1]: Started update-engine.service - Update Engine.
Jul 14 21:20:30.524518 tar[1507]: linux-amd64/helm
Jul 14 21:20:30.538198 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1405)
Jul 14 21:20:30.529322 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 14 21:20:30.609933 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 14 21:20:30.637699 locksmithd[1529]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 14 21:20:30.670351 systemd-logind[1501]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 14 21:20:30.670410 systemd-logind[1501]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 14 21:20:30.672452 systemd-logind[1501]: New seat seat0.
Jul 14 21:20:30.674237 extend-filesystems[1522]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 14 21:20:30.674237 extend-filesystems[1522]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 14 21:20:30.674237 extend-filesystems[1522]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 14 21:20:30.681074 extend-filesystems[1492]: Resized filesystem in /dev/vda9
Jul 14 21:20:30.675327 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 14 21:20:30.677848 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 14 21:20:30.681037 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 14 21:20:30.689844 bash[1547]: Updated "/home/core/.ssh/authorized_keys"
Jul 14 21:20:30.693248 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 14 21:20:30.696938 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 14 21:20:30.984944 containerd[1514]: time="2025-07-14T21:20:30.984001214Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jul 14 21:20:30.985773 sshd_keygen[1511]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 14 21:20:31.027459 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 14 21:20:31.049563 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 14 21:20:31.062870 systemd[1]: issuegen.service: Deactivated successfully.
Jul 14 21:20:31.063315 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 14 21:20:31.072206 containerd[1514]: time="2025-07-14T21:20:31.072099612Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:20:31.075782 containerd[1514]: time="2025-07-14T21:20:31.075045207Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.97-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:20:31.075782 containerd[1514]: time="2025-07-14T21:20:31.075093086Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 14 21:20:31.075782 containerd[1514]: time="2025-07-14T21:20:31.075117743Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 14 21:20:31.075782 containerd[1514]: time="2025-07-14T21:20:31.075413517Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 14 21:20:31.075782 containerd[1514]: time="2025-07-14T21:20:31.075440588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 14 21:20:31.075782 containerd[1514]: time="2025-07-14T21:20:31.075541237Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:20:31.075782 containerd[1514]: time="2025-07-14T21:20:31.075556005Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:20:31.076127 containerd[1514]: time="2025-07-14T21:20:31.075956656Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:20:31.076127 containerd[1514]: time="2025-07-14T21:20:31.075979319Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 14 21:20:31.076127 containerd[1514]: time="2025-07-14T21:20:31.075997593Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:20:31.076127 containerd[1514]: time="2025-07-14T21:20:31.076010537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 14 21:20:31.076254 containerd[1514]: time="2025-07-14T21:20:31.076140191Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:20:31.076888 containerd[1514]: time="2025-07-14T21:20:31.076509102Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:20:31.076888 containerd[1514]: time="2025-07-14T21:20:31.076762448Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:20:31.076888 containerd[1514]: time="2025-07-14T21:20:31.076785200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 14 21:20:31.077008 containerd[1514]: time="2025-07-14T21:20:31.076961200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 14 21:20:31.077092 containerd[1514]: time="2025-07-14T21:20:31.077038606Z" level=info msg="metadata content store policy set" policy=shared Jul 14 21:20:31.084602 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 14 21:20:31.095133 containerd[1514]: time="2025-07-14T21:20:31.092592811Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 14 21:20:31.095133 containerd[1514]: time="2025-07-14T21:20:31.092702517Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 14 21:20:31.095133 containerd[1514]: time="2025-07-14T21:20:31.092725099Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 14 21:20:31.095133 containerd[1514]: time="2025-07-14T21:20:31.092749294Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 14 21:20:31.095133 containerd[1514]: time="2025-07-14T21:20:31.092770073Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 14 21:20:31.095133 containerd[1514]: time="2025-07-14T21:20:31.093082279Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 14 21:20:31.095133 containerd[1514]: time="2025-07-14T21:20:31.093447294Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 14 21:20:31.095133 containerd[1514]: time="2025-07-14T21:20:31.093614377Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 14 21:20:31.095133 containerd[1514]: time="2025-07-14T21:20:31.093637350Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jul 14 21:20:31.095133 containerd[1514]: time="2025-07-14T21:20:31.093658830Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 14 21:20:31.095133 containerd[1514]: time="2025-07-14T21:20:31.093677616Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 14 21:20:31.095133 containerd[1514]: time="2025-07-14T21:20:31.093698184Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 14 21:20:31.095133 containerd[1514]: time="2025-07-14T21:20:31.093715076Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 14 21:20:31.095133 containerd[1514]: time="2025-07-14T21:20:31.093732499Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 14 21:20:31.095616 containerd[1514]: time="2025-07-14T21:20:31.093749330Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 14 21:20:31.095616 containerd[1514]: time="2025-07-14T21:20:31.093771452Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 14 21:20:31.095616 containerd[1514]: time="2025-07-14T21:20:31.093788604Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 14 21:20:31.095616 containerd[1514]: time="2025-07-14T21:20:31.093803572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 14 21:20:31.095616 containerd[1514]: time="2025-07-14T21:20:31.093828789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Jul 14 21:20:31.095616 containerd[1514]: time="2025-07-14T21:20:31.093846212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 14 21:20:31.095616 containerd[1514]: time="2025-07-14T21:20:31.093866199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 14 21:20:31.095616 containerd[1514]: time="2025-07-14T21:20:31.093886347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 14 21:20:31.095616 containerd[1514]: time="2025-07-14T21:20:31.093944466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 14 21:20:31.095616 containerd[1514]: time="2025-07-14T21:20:31.093964043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 14 21:20:31.095616 containerd[1514]: time="2025-07-14T21:20:31.093979292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 14 21:20:31.095616 containerd[1514]: time="2025-07-14T21:20:31.093996474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 14 21:20:31.095616 containerd[1514]: time="2025-07-14T21:20:31.094013706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 14 21:20:31.095616 containerd[1514]: time="2025-07-14T21:20:31.094032642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 14 21:20:31.096014 containerd[1514]: time="2025-07-14T21:20:31.094048922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 14 21:20:31.096014 containerd[1514]: time="2025-07-14T21:20:31.094066445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jul 14 21:20:31.096014 containerd[1514]: time="2025-07-14T21:20:31.094082415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 14 21:20:31.096014 containerd[1514]: time="2025-07-14T21:20:31.094102573Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 14 21:20:31.096014 containerd[1514]: time="2025-07-14T21:20:31.094153268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 14 21:20:31.096014 containerd[1514]: time="2025-07-14T21:20:31.094173396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 14 21:20:31.096014 containerd[1514]: time="2025-07-14T21:20:31.094187943Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 14 21:20:31.096014 containerd[1514]: time="2025-07-14T21:20:31.094286428Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 14 21:20:31.096014 containerd[1514]: time="2025-07-14T21:20:31.094311134Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 14 21:20:31.096014 containerd[1514]: time="2025-07-14T21:20:31.094325862Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 14 21:20:31.096014 containerd[1514]: time="2025-07-14T21:20:31.094344747Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 14 21:20:31.096014 containerd[1514]: time="2025-07-14T21:20:31.094358964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jul 14 21:20:31.096014 containerd[1514]: time="2025-07-14T21:20:31.094375755Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 14 21:20:31.096014 containerd[1514]: time="2025-07-14T21:20:31.094404659Z" level=info msg="NRI interface is disabled by configuration." Jul 14 21:20:31.096383 containerd[1514]: time="2025-07-14T21:20:31.094419918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 14 21:20:31.096416 containerd[1514]: time="2025-07-14T21:20:31.094816732Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 14 21:20:31.096416 containerd[1514]: time="2025-07-14T21:20:31.094875653Z" level=info msg="Connect containerd service" Jul 14 21:20:31.096416 containerd[1514]: time="2025-07-14T21:20:31.094950173Z" level=info msg="using legacy CRI server" Jul 14 21:20:31.096416 containerd[1514]: time="2025-07-14T21:20:31.094963137Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 14 21:20:31.096416 containerd[1514]: time="2025-07-14T21:20:31.095121845Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 14 21:20:31.099923 containerd[1514]: time="2025-07-14T21:20:31.099097692Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Jul 14 21:20:31.099923 containerd[1514]: time="2025-07-14T21:20:31.099545302Z" level=info msg="Start subscribing containerd event" Jul 14 21:20:31.099923 containerd[1514]: time="2025-07-14T21:20:31.099595065Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 14 21:20:31.099923 containerd[1514]: time="2025-07-14T21:20:31.099605895Z" level=info msg="Start recovering state" Jul 14 21:20:31.099923 containerd[1514]: time="2025-07-14T21:20:31.099659236Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 14 21:20:31.099923 containerd[1514]: time="2025-07-14T21:20:31.099706785Z" level=info msg="Start event monitor" Jul 14 21:20:31.099923 containerd[1514]: time="2025-07-14T21:20:31.099730509Z" level=info msg="Start snapshots syncer" Jul 14 21:20:31.099923 containerd[1514]: time="2025-07-14T21:20:31.099739807Z" level=info msg="Start cni network conf syncer for default" Jul 14 21:20:31.099923 containerd[1514]: time="2025-07-14T21:20:31.099752390Z" level=info msg="Start streaming server" Jul 14 21:20:31.100646 containerd[1514]: time="2025-07-14T21:20:31.100115441Z" level=info msg="containerd successfully booted in 0.127817s" Jul 14 21:20:31.099987 systemd[1]: Started containerd.service - containerd container runtime. Jul 14 21:20:31.110777 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 14 21:20:31.119402 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 14 21:20:31.123166 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 14 21:20:31.124732 systemd[1]: Reached target getty.target - Login Prompts. Jul 14 21:20:31.223064 tar[1507]: linux-amd64/LICENSE Jul 14 21:20:31.223239 tar[1507]: linux-amd64/README.md Jul 14 21:20:31.249094 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 14 21:20:32.086313 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jul 14 21:20:32.091109 systemd[1]: Started sshd@0-10.0.0.72:22-10.0.0.1:35546.service - OpenSSH per-connection server daemon (10.0.0.1:35546). Jul 14 21:20:32.163970 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 35546 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc Jul 14 21:20:32.167147 sshd-session[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:20:32.177187 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 14 21:20:32.178054 systemd-networkd[1443]: eth0: Gained IPv6LL Jul 14 21:20:32.192461 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 14 21:20:32.195049 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 14 21:20:32.206778 systemd-logind[1501]: New session 1 of user core. Jul 14 21:20:32.207270 systemd[1]: Reached target network-online.target - Network is Online. Jul 14 21:20:32.225620 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 14 21:20:32.229819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:20:32.232720 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 14 21:20:32.240112 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 14 21:20:32.253142 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 14 21:20:32.264084 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 14 21:20:32.264457 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 14 21:20:32.265108 (systemd)[1595]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:20:32.268101 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 14 21:20:32.269455 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jul 14 21:20:32.273259 systemd-logind[1501]: New session c1 of user core. Jul 14 21:20:32.479771 systemd[1595]: Queued start job for default target default.target. Jul 14 21:20:32.501828 systemd[1595]: Created slice app.slice - User Application Slice. Jul 14 21:20:32.501873 systemd[1595]: Reached target paths.target - Paths. Jul 14 21:20:32.501961 systemd[1595]: Reached target timers.target - Timers. Jul 14 21:20:32.504064 systemd[1595]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 14 21:20:32.522405 systemd[1595]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 14 21:20:32.522606 systemd[1595]: Reached target sockets.target - Sockets. Jul 14 21:20:32.522670 systemd[1595]: Reached target basic.target - Basic System. Jul 14 21:20:32.522739 systemd[1595]: Reached target default.target - Main User Target. Jul 14 21:20:32.522798 systemd[1595]: Startup finished in 241ms. Jul 14 21:20:32.523224 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 14 21:20:32.540266 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 14 21:20:32.608080 systemd[1]: Started sshd@1-10.0.0.72:22-10.0.0.1:35550.service - OpenSSH per-connection server daemon (10.0.0.1:35550). Jul 14 21:20:32.671376 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 35550 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc Jul 14 21:20:32.673753 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:20:32.679887 systemd-logind[1501]: New session 2 of user core. Jul 14 21:20:32.695229 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 14 21:20:32.838666 sshd[1616]: Connection closed by 10.0.0.1 port 35550 Jul 14 21:20:32.839212 sshd-session[1614]: pam_unix(sshd:session): session closed for user core Jul 14 21:20:32.860186 systemd[1]: sshd@1-10.0.0.72:22-10.0.0.1:35550.service: Deactivated successfully. 
Jul 14 21:20:32.862683 systemd[1]: session-2.scope: Deactivated successfully. Jul 14 21:20:32.864681 systemd-logind[1501]: Session 2 logged out. Waiting for processes to exit. Jul 14 21:20:32.866330 systemd[1]: Started sshd@2-10.0.0.72:22-10.0.0.1:35564.service - OpenSSH per-connection server daemon (10.0.0.1:35564). Jul 14 21:20:32.869597 systemd-logind[1501]: Removed session 2. Jul 14 21:20:32.912603 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 35564 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc Jul 14 21:20:32.914813 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:20:32.921342 systemd-logind[1501]: New session 3 of user core. Jul 14 21:20:32.936187 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 14 21:20:32.997588 sshd[1624]: Connection closed by 10.0.0.1 port 35564 Jul 14 21:20:32.998062 sshd-session[1621]: pam_unix(sshd:session): session closed for user core Jul 14 21:20:33.002960 systemd[1]: sshd@2-10.0.0.72:22-10.0.0.1:35564.service: Deactivated successfully. Jul 14 21:20:33.005337 systemd[1]: session-3.scope: Deactivated successfully. Jul 14 21:20:33.006471 systemd-logind[1501]: Session 3 logged out. Waiting for processes to exit. Jul 14 21:20:33.008048 systemd-logind[1501]: Removed session 3. Jul 14 21:20:33.851808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:20:33.853703 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 14 21:20:33.857017 systemd[1]: Startup finished in 1.457s (kernel) + 9.220s (initrd) + 7.156s (userspace) = 17.834s. 
Jul 14 21:20:33.859387 (kubelet)[1634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 21:20:34.615496 kubelet[1634]: E0714 21:20:34.615388 1634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 21:20:34.620353 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 21:20:34.620591 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 21:20:34.621078 systemd[1]: kubelet.service: Consumed 2.122s CPU time, 266.1M memory peak. Jul 14 21:20:43.011119 systemd[1]: Started sshd@3-10.0.0.72:22-10.0.0.1:45668.service - OpenSSH per-connection server daemon (10.0.0.1:45668). Jul 14 21:20:43.050732 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 45668 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc Jul 14 21:20:43.052367 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:20:43.057322 systemd-logind[1501]: New session 4 of user core. Jul 14 21:20:43.071083 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 14 21:20:43.127944 sshd[1649]: Connection closed by 10.0.0.1 port 45668 Jul 14 21:20:43.128401 sshd-session[1647]: pam_unix(sshd:session): session closed for user core Jul 14 21:20:43.141782 systemd[1]: sshd@3-10.0.0.72:22-10.0.0.1:45668.service: Deactivated successfully. Jul 14 21:20:43.143717 systemd[1]: session-4.scope: Deactivated successfully. Jul 14 21:20:43.145524 systemd-logind[1501]: Session 4 logged out. Waiting for processes to exit. 
Jul 14 21:20:43.154191 systemd[1]: Started sshd@4-10.0.0.72:22-10.0.0.1:45672.service - OpenSSH per-connection server daemon (10.0.0.1:45672). Jul 14 21:20:43.155071 systemd-logind[1501]: Removed session 4. Jul 14 21:20:43.194673 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 45672 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc Jul 14 21:20:43.196263 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:20:43.201337 systemd-logind[1501]: New session 5 of user core. Jul 14 21:20:43.212041 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 14 21:20:43.262339 sshd[1657]: Connection closed by 10.0.0.1 port 45672 Jul 14 21:20:43.262656 sshd-session[1654]: pam_unix(sshd:session): session closed for user core Jul 14 21:20:43.275923 systemd[1]: sshd@4-10.0.0.72:22-10.0.0.1:45672.service: Deactivated successfully. Jul 14 21:20:43.278301 systemd[1]: session-5.scope: Deactivated successfully. Jul 14 21:20:43.280108 systemd-logind[1501]: Session 5 logged out. Waiting for processes to exit. Jul 14 21:20:43.281692 systemd[1]: Started sshd@5-10.0.0.72:22-10.0.0.1:45686.service - OpenSSH per-connection server daemon (10.0.0.1:45686). Jul 14 21:20:43.282740 systemd-logind[1501]: Removed session 5. Jul 14 21:20:43.322686 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 45686 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc Jul 14 21:20:43.324377 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:20:43.329076 systemd-logind[1501]: New session 6 of user core. Jul 14 21:20:43.339041 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 14 21:20:43.394261 sshd[1665]: Connection closed by 10.0.0.1 port 45686 Jul 14 21:20:43.394603 sshd-session[1662]: pam_unix(sshd:session): session closed for user core Jul 14 21:20:43.406327 systemd[1]: sshd@5-10.0.0.72:22-10.0.0.1:45686.service: Deactivated successfully. 
Jul 14 21:20:43.408682 systemd[1]: session-6.scope: Deactivated successfully. Jul 14 21:20:43.410573 systemd-logind[1501]: Session 6 logged out. Waiting for processes to exit. Jul 14 21:20:43.422303 systemd[1]: Started sshd@6-10.0.0.72:22-10.0.0.1:45692.service - OpenSSH per-connection server daemon (10.0.0.1:45692). Jul 14 21:20:43.423454 systemd-logind[1501]: Removed session 6. Jul 14 21:20:43.459328 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 45692 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc Jul 14 21:20:43.461003 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:20:43.466159 systemd-logind[1501]: New session 7 of user core. Jul 14 21:20:43.480226 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 14 21:20:43.875543 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 14 21:20:43.876042 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 21:20:43.897002 sudo[1674]: pam_unix(sudo:session): session closed for user root Jul 14 21:20:43.898630 sshd[1673]: Connection closed by 10.0.0.1 port 45692 Jul 14 21:20:43.899130 sshd-session[1670]: pam_unix(sshd:session): session closed for user core Jul 14 21:20:43.911567 systemd[1]: sshd@6-10.0.0.72:22-10.0.0.1:45692.service: Deactivated successfully. Jul 14 21:20:43.913357 systemd[1]: session-7.scope: Deactivated successfully. Jul 14 21:20:43.914767 systemd-logind[1501]: Session 7 logged out. Waiting for processes to exit. Jul 14 21:20:43.916329 systemd[1]: Started sshd@7-10.0.0.72:22-10.0.0.1:45702.service - OpenSSH per-connection server daemon (10.0.0.1:45702). Jul 14 21:20:43.917055 systemd-logind[1501]: Removed session 7. 
Jul 14 21:20:43.957787 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 45702 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc Jul 14 21:20:43.959908 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:20:43.965314 systemd-logind[1501]: New session 8 of user core. Jul 14 21:20:43.975151 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 14 21:20:44.032828 sudo[1684]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 14 21:20:44.033241 sudo[1684]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 21:20:44.038332 sudo[1684]: pam_unix(sudo:session): session closed for user root Jul 14 21:20:44.046938 sudo[1683]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 14 21:20:44.047358 sudo[1683]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 21:20:44.069439 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 14 21:20:44.109228 augenrules[1706]: No rules Jul 14 21:20:44.112028 systemd[1]: audit-rules.service: Deactivated successfully. Jul 14 21:20:44.112481 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 14 21:20:44.113967 sudo[1683]: pam_unix(sudo:session): session closed for user root Jul 14 21:20:44.116001 sshd[1682]: Connection closed by 10.0.0.1 port 45702 Jul 14 21:20:44.116422 sshd-session[1679]: pam_unix(sshd:session): session closed for user core Jul 14 21:20:44.130146 systemd[1]: sshd@7-10.0.0.72:22-10.0.0.1:45702.service: Deactivated successfully. Jul 14 21:20:44.132594 systemd[1]: session-8.scope: Deactivated successfully. Jul 14 21:20:44.134268 systemd-logind[1501]: Session 8 logged out. Waiting for processes to exit. 
Jul 14 21:20:44.149247 systemd[1]: Started sshd@8-10.0.0.72:22-10.0.0.1:45716.service - OpenSSH per-connection server daemon (10.0.0.1:45716). Jul 14 21:20:44.150561 systemd-logind[1501]: Removed session 8. Jul 14 21:20:44.185461 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 45716 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc Jul 14 21:20:44.187257 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:20:44.193026 systemd-logind[1501]: New session 9 of user core. Jul 14 21:20:44.203085 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 14 21:20:44.263099 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 14 21:20:44.263598 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 21:20:44.610228 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 14 21:20:44.610370 (dockerd)[1738]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 14 21:20:44.768776 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 14 21:20:44.781372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:20:44.914609 dockerd[1738]: time="2025-07-14T21:20:44.914413546Z" level=info msg="Starting up" Jul 14 21:20:45.133311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 14 21:20:45.139613 (kubelet)[1770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 14 21:20:45.330783 kubelet[1770]: E0714 21:20:45.330720 1770 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 21:20:45.339021 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 21:20:45.339329 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 21:20:45.339872 systemd[1]: kubelet.service: Consumed 300ms CPU time, 111.3M memory peak.
Jul 14 21:20:45.429225 dockerd[1738]: time="2025-07-14T21:20:45.429140707Z" level=info msg="Loading containers: start."
Jul 14 21:20:45.787967 kernel: Initializing XFRM netlink socket
Jul 14 21:20:45.930168 systemd-networkd[1443]: docker0: Link UP
Jul 14 21:20:46.363861 dockerd[1738]: time="2025-07-14T21:20:46.362829600Z" level=info msg="Loading containers: done."
Jul 14 21:20:46.397545 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3920594838-merged.mount: Deactivated successfully.
Jul 14 21:20:46.546370 dockerd[1738]: time="2025-07-14T21:20:46.545619983Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 14 21:20:46.546370 dockerd[1738]: time="2025-07-14T21:20:46.545801894Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jul 14 21:20:46.546370 dockerd[1738]: time="2025-07-14T21:20:46.546004013Z" level=info msg="Daemon has completed initialization"
Jul 14 21:20:46.915930 dockerd[1738]: time="2025-07-14T21:20:46.915841984Z" level=info msg="API listen on /run/docker.sock"
Jul 14 21:20:46.916113 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 14 21:20:47.704712 containerd[1514]: time="2025-07-14T21:20:47.704648439Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 14 21:20:49.025537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1043361166.mount: Deactivated successfully.
Jul 14 21:20:50.388969 containerd[1514]: time="2025-07-14T21:20:50.388864641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:20:50.389767 containerd[1514]: time="2025-07-14T21:20:50.389716790Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744"
Jul 14 21:20:50.390935 containerd[1514]: time="2025-07-14T21:20:50.390889079Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:20:50.393535 containerd[1514]: time="2025-07-14T21:20:50.393503012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:20:50.394570 containerd[1514]: time="2025-07-14T21:20:50.394535158Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 2.689826235s"
Jul 14 21:20:50.394638 containerd[1514]: time="2025-07-14T21:20:50.394575363Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\""
Jul 14 21:20:50.395519 containerd[1514]: time="2025-07-14T21:20:50.395492744Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 14 21:20:51.675650 containerd[1514]: time="2025-07-14T21:20:51.675556508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:20:51.676617 containerd[1514]: time="2025-07-14T21:20:51.676547086Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294"
Jul 14 21:20:51.677912 containerd[1514]: time="2025-07-14T21:20:51.677860850Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:20:51.681576 containerd[1514]: time="2025-07-14T21:20:51.681544780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:20:51.683136 containerd[1514]: time="2025-07-14T21:20:51.683093575Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.287564102s"
Jul 14 21:20:51.683194 containerd[1514]: time="2025-07-14T21:20:51.683133830Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\""
Jul 14 21:20:51.683708 containerd[1514]: time="2025-07-14T21:20:51.683663504Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 14 21:20:54.139673 containerd[1514]: time="2025-07-14T21:20:54.139589870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:20:54.170885 containerd[1514]: time="2025-07-14T21:20:54.170771428Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671"
Jul 14 21:20:54.224605 containerd[1514]: time="2025-07-14T21:20:54.224417743Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:20:54.283520 containerd[1514]: time="2025-07-14T21:20:54.283435934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:20:54.284541 containerd[1514]: time="2025-07-14T21:20:54.284447531Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 2.600747198s"
Jul 14 21:20:54.284541 containerd[1514]: time="2025-07-14T21:20:54.284495922Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\""
Jul 14 21:20:54.285041 containerd[1514]: time="2025-07-14T21:20:54.285015747Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 14 21:20:55.518876 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 14 21:20:55.535278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 21:20:55.718462 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 21:20:55.723840 (kubelet)[2022]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 14 21:20:56.416962 kubelet[2022]: E0714 21:20:56.416861 2022 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 21:20:56.422380 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 21:20:56.422717 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 21:20:56.423293 systemd[1]: kubelet.service: Consumed 880ms CPU time, 110.5M memory peak.
Jul 14 21:21:00.047822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1678949897.mount: Deactivated successfully.
Jul 14 21:21:00.468345 containerd[1514]: time="2025-07-14T21:21:00.468119553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:21:00.469368 containerd[1514]: time="2025-07-14T21:21:00.469304375Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943"
Jul 14 21:21:00.471587 containerd[1514]: time="2025-07-14T21:21:00.471494834Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:21:00.474276 containerd[1514]: time="2025-07-14T21:21:00.474197854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:21:00.475363 containerd[1514]: time="2025-07-14T21:21:00.475305161Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 6.190255881s"
Jul 14 21:21:00.475363 containerd[1514]: time="2025-07-14T21:21:00.475346849Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\""
Jul 14 21:21:00.475926 containerd[1514]: time="2025-07-14T21:21:00.475872285Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 14 21:21:01.065969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount713960422.mount: Deactivated successfully.
Jul 14 21:21:02.179024 containerd[1514]: time="2025-07-14T21:21:02.178950032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:21:02.180678 containerd[1514]: time="2025-07-14T21:21:02.180527561Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Jul 14 21:21:02.182362 containerd[1514]: time="2025-07-14T21:21:02.182302540Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:21:02.187092 containerd[1514]: time="2025-07-14T21:21:02.186991605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:21:02.188511 containerd[1514]: time="2025-07-14T21:21:02.188415676Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.712464073s"
Jul 14 21:21:02.188511 containerd[1514]: time="2025-07-14T21:21:02.188490617Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 14 21:21:02.189330 containerd[1514]: time="2025-07-14T21:21:02.189197232Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 14 21:21:02.649057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3639519095.mount: Deactivated successfully.
Jul 14 21:21:02.657238 containerd[1514]: time="2025-07-14T21:21:02.657176959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:21:02.658228 containerd[1514]: time="2025-07-14T21:21:02.658145766Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jul 14 21:21:02.660014 containerd[1514]: time="2025-07-14T21:21:02.659966611Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:21:02.663084 containerd[1514]: time="2025-07-14T21:21:02.663023104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:21:02.663784 containerd[1514]: time="2025-07-14T21:21:02.663727566Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 474.470351ms"
Jul 14 21:21:02.663784 containerd[1514]: time="2025-07-14T21:21:02.663771248Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 14 21:21:02.664457 containerd[1514]: time="2025-07-14T21:21:02.664398304Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 14 21:21:03.199942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1531569481.mount: Deactivated successfully.
Jul 14 21:21:06.518941 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 14 21:21:06.533227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 21:21:06.715755 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 21:21:06.720111 (kubelet)[2123]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 14 21:21:06.823610 kubelet[2123]: E0714 21:21:06.823439 2123 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 21:21:06.828190 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 21:21:06.828455 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 21:21:06.828844 systemd[1]: kubelet.service: Consumed 227ms CPU time, 110.8M memory peak.
Jul 14 21:21:10.628184 containerd[1514]: time="2025-07-14T21:21:10.628087540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:21:10.629495 containerd[1514]: time="2025-07-14T21:21:10.629391888Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
Jul 14 21:21:10.631026 containerd[1514]: time="2025-07-14T21:21:10.630983860Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:21:10.635118 containerd[1514]: time="2025-07-14T21:21:10.635037660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:21:10.636472 containerd[1514]: time="2025-07-14T21:21:10.636418463Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 7.971968793s"
Jul 14 21:21:10.636472 containerd[1514]: time="2025-07-14T21:21:10.636471303Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jul 14 21:21:13.013934 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 21:21:13.014193 systemd[1]: kubelet.service: Consumed 227ms CPU time, 110.8M memory peak.
Jul 14 21:21:13.025217 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 21:21:13.057017 systemd[1]: Reload requested from client PID 2197 ('systemctl') (unit session-9.scope)...
Jul 14 21:21:13.057036 systemd[1]: Reloading...
Jul 14 21:21:13.163951 zram_generator::config[2247]: No configuration found.
Jul 14 21:21:13.488844 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 21:21:13.614341 systemd[1]: Reloading finished in 556 ms.
Jul 14 21:21:13.662442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 21:21:13.666667 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 21:21:13.669582 systemd[1]: kubelet.service: Deactivated successfully.
Jul 14 21:21:13.669997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 21:21:13.670049 systemd[1]: kubelet.service: Consumed 190ms CPU time, 98.4M memory peak.
Jul 14 21:21:13.672312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 21:21:13.854480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 21:21:13.860339 (kubelet)[2291]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 14 21:21:13.913383 kubelet[2291]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 14 21:21:13.913383 kubelet[2291]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 14 21:21:13.913383 kubelet[2291]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 14 21:21:13.913872 kubelet[2291]: I0714 21:21:13.913441 2291 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 14 21:21:14.096302 kubelet[2291]: I0714 21:21:14.096230 2291 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 14 21:21:14.096302 kubelet[2291]: I0714 21:21:14.096284 2291 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 14 21:21:14.096908 kubelet[2291]: I0714 21:21:14.096787 2291 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 14 21:21:14.178528 kubelet[2291]: E0714 21:21:14.178340 2291 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.72:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError"
Jul 14 21:21:14.179332 kubelet[2291]: I0714 21:21:14.179281 2291 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 14 21:21:14.195354 kubelet[2291]: E0714 21:21:14.195229 2291 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 14 21:21:14.195354 kubelet[2291]: I0714 21:21:14.195305 2291 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 14 21:21:14.204760 kubelet[2291]: I0714 21:21:14.204699 2291 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 14 21:21:14.205972 kubelet[2291]: I0714 21:21:14.205931 2291 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 14 21:21:14.206186 kubelet[2291]: I0714 21:21:14.206134 2291 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 14 21:21:14.206487 kubelet[2291]: I0714 21:21:14.206175 2291 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 14 21:21:14.206600 kubelet[2291]: I0714 21:21:14.206497 2291 topology_manager.go:138] "Creating topology manager with none policy"
Jul 14 21:21:14.206600 kubelet[2291]: I0714 21:21:14.206510 2291 container_manager_linux.go:300] "Creating device plugin manager"
Jul 14 21:21:14.206695 kubelet[2291]: I0714 21:21:14.206667 2291 state_mem.go:36] "Initialized new in-memory state store"
Jul 14 21:21:14.209654 kubelet[2291]: I0714 21:21:14.209603 2291 kubelet.go:408] "Attempting to sync node with API server"
Jul 14 21:21:14.209728 kubelet[2291]: I0714 21:21:14.209673 2291 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 14 21:21:14.209772 kubelet[2291]: I0714 21:21:14.209759 2291 kubelet.go:314] "Adding apiserver pod source"
Jul 14 21:21:14.210976 kubelet[2291]: I0714 21:21:14.209803 2291 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 14 21:21:14.216374 kubelet[2291]: W0714 21:21:14.216137 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused
Jul 14 21:21:14.216374 kubelet[2291]: E0714 21:21:14.216277 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError"
Jul 14 21:21:14.216586 kubelet[2291]: I0714 21:21:14.216554 2291 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jul 14 21:21:14.216810 kubelet[2291]: W0714 21:21:14.216749 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused
Jul 14 21:21:14.216810 kubelet[2291]: E0714 21:21:14.216791 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError"
Jul 14 21:21:14.217624 kubelet[2291]: I0714 21:21:14.217596 2291 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 14 21:21:14.218591 kubelet[2291]: W0714 21:21:14.218553 2291 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 14 21:21:14.221439 kubelet[2291]: I0714 21:21:14.221388 2291 server.go:1274] "Started kubelet"
Jul 14 21:21:14.221587 kubelet[2291]: I0714 21:21:14.221465 2291 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 14 21:21:14.222057 kubelet[2291]: I0714 21:21:14.221986 2291 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 14 21:21:14.223175 kubelet[2291]: I0714 21:21:14.222813 2291 server.go:449] "Adding debug handlers to kubelet server"
Jul 14 21:21:14.223255 kubelet[2291]: I0714 21:21:14.223154 2291 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 14 21:21:14.223416 kubelet[2291]: I0714 21:21:14.223391 2291 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 14 21:21:14.226780 kubelet[2291]: I0714 21:21:14.226675 2291 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 14 21:21:14.228169 kubelet[2291]: E0714 21:21:14.227882 2291 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 21:21:14.228327 kubelet[2291]: I0714 21:21:14.228311 2291 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 14 21:21:14.228731 kubelet[2291]: I0714 21:21:14.228627 2291 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 14 21:21:14.228731 kubelet[2291]: I0714 21:21:14.228694 2291 reconciler.go:26] "Reconciler: start to sync state"
Jul 14 21:21:14.229474 kubelet[2291]: W0714 21:21:14.229412 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused
Jul 14 21:21:14.229543 kubelet[2291]: E0714 21:21:14.229480 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError"
Jul 14 21:21:14.230299 kubelet[2291]: E0714 21:21:14.229625 2291 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 14 21:21:14.230299 kubelet[2291]: E0714 21:21:14.229692 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="200ms"
Jul 14 21:21:14.232317 kubelet[2291]: I0714 21:21:14.232288 2291 factory.go:221] Registration of the containerd container factory successfully
Jul 14 21:21:14.232317 kubelet[2291]: I0714 21:21:14.232313 2291 factory.go:221] Registration of the systemd container factory successfully
Jul 14 21:21:14.232416 kubelet[2291]: I0714 21:21:14.232398 2291 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 14 21:21:14.232658 kubelet[2291]: E0714 21:21:14.230854 2291 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.72:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.72:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523aff8adcf7f2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 21:21:14.221352946 +0000 UTC m=+0.353738273,LastTimestamp:2025-07-14 21:21:14.221352946 +0000 UTC m=+0.353738273,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 14 21:21:14.249108 kubelet[2291]: I0714 21:21:14.249056 2291 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 14 21:21:14.251194 kubelet[2291]: I0714 21:21:14.251171 2291 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 14 21:21:14.251276 kubelet[2291]: I0714 21:21:14.251211 2291 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 14 21:21:14.251941 kubelet[2291]: I0714 21:21:14.251388 2291 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 14 21:21:14.251941 kubelet[2291]: E0714 21:21:14.251447 2291 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 14 21:21:14.252140 kubelet[2291]: W0714 21:21:14.252086 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused
Jul 14 21:21:14.252179 kubelet[2291]: E0714 21:21:14.252155 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError"
Jul 14 21:21:14.256156 kubelet[2291]: I0714 21:21:14.256058 2291 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 14 21:21:14.256156 kubelet[2291]: I0714 21:21:14.256115 2291 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 14 21:21:14.256156 kubelet[2291]: I0714 21:21:14.256139 2291 state_mem.go:36] "Initialized new in-memory state store"
Jul 14 21:21:14.329017 kubelet[2291]: E0714 21:21:14.328949 2291 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 21:21:14.352567 kubelet[2291]: E0714 21:21:14.352412 2291 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 14 21:21:14.430047 kubelet[2291]: E0714 21:21:14.429801 2291 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 21:21:14.430478 kubelet[2291]: E0714 21:21:14.430430 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="400ms"
Jul 14 21:21:14.530609 kubelet[2291]: E0714 21:21:14.530525 2291 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 21:21:14.553187 kubelet[2291]: E0714 21:21:14.553130 2291 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 14 21:21:14.631658 kubelet[2291]: E0714 21:21:14.631578 2291 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 21:21:14.732915 kubelet[2291]: E0714 21:21:14.732711 2291 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 21:21:14.831657 kubelet[2291]: E0714 21:21:14.831596 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="800ms"
Jul 14 21:21:14.833713 kubelet[2291]: E0714 21:21:14.833653 2291 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 21:21:14.912668 kubelet[2291]: I0714 21:21:14.912600 2291 policy_none.go:49] "None policy: Start"
Jul 14 21:21:14.913729 kubelet[2291]: I0714 21:21:14.913702 2291 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 14 21:21:14.914181 kubelet[2291]: I0714 21:21:14.913734 2291 state_mem.go:35] "Initializing new in-memory state store"
Jul 14 21:21:14.934354 kubelet[2291]: E0714 21:21:14.934299 2291 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 21:21:14.953584 kubelet[2291]: E0714 21:21:14.953510 2291 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 14 21:21:15.035368 kubelet[2291]: E0714 21:21:15.035184 2291 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 21:21:15.100958 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 14 21:21:15.113390 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 14 21:21:15.114743 kubelet[2291]: W0714 21:21:15.114700 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused
Jul 14 21:21:15.114847 kubelet[2291]: E0714 21:21:15.114752 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError"
Jul 14 21:21:15.117135 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 14 21:21:15.126050 kubelet[2291]: I0714 21:21:15.125982 2291 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 21:21:15.126315 kubelet[2291]: I0714 21:21:15.126294 2291 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 21:21:15.126367 kubelet[2291]: I0714 21:21:15.126317 2291 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 21:21:15.126598 kubelet[2291]: I0714 21:21:15.126569 2291 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 21:21:15.127849 kubelet[2291]: E0714 21:21:15.127740 2291 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 14 21:21:15.230725 kubelet[2291]: I0714 21:21:15.230628 2291 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:21:15.231284 kubelet[2291]: E0714 21:21:15.231242 2291 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost" Jul 14 21:21:15.236569 kubelet[2291]: W0714 21:21:15.236416 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Jul 14 21:21:15.236569 kubelet[2291]: E0714 21:21:15.236486 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:21:15.433500 kubelet[2291]: I0714 21:21:15.433369 2291 
kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:21:15.433797 kubelet[2291]: E0714 21:21:15.433765 2291 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost" Jul 14 21:21:15.516792 kubelet[2291]: W0714 21:21:15.516726 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Jul 14 21:21:15.516792 kubelet[2291]: E0714 21:21:15.516791 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:21:15.632504 kubelet[2291]: E0714 21:21:15.632422 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="1.6s" Jul 14 21:21:15.738707 kubelet[2291]: W0714 21:21:15.738539 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Jul 14 21:21:15.738707 kubelet[2291]: E0714 21:21:15.738599 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: 
connection refused" logger="UnhandledError" Jul 14 21:21:15.765070 systemd[1]: Created slice kubepods-burstable-pod5fafb31b40b9757ef62bfe177d9295da.slice - libcontainer container kubepods-burstable-pod5fafb31b40b9757ef62bfe177d9295da.slice. Jul 14 21:21:15.786664 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 14 21:21:15.791318 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. Jul 14 21:21:15.835868 kubelet[2291]: I0714 21:21:15.835824 2291 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:21:15.836029 kubelet[2291]: I0714 21:21:15.835833 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:21:15.836029 kubelet[2291]: I0714 21:21:15.835960 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:21:15.836029 kubelet[2291]: I0714 21:21:15.835981 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:21:15.836029 
kubelet[2291]: I0714 21:21:15.835999 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5fafb31b40b9757ef62bfe177d9295da-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5fafb31b40b9757ef62bfe177d9295da\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:21:15.836029 kubelet[2291]: I0714 21:21:15.836016 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5fafb31b40b9757ef62bfe177d9295da-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5fafb31b40b9757ef62bfe177d9295da\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:21:15.836234 kubelet[2291]: I0714 21:21:15.836040 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:21:15.836234 kubelet[2291]: I0714 21:21:15.836055 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:21:15.836234 kubelet[2291]: I0714 21:21:15.836069 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 
14 21:21:15.836234 kubelet[2291]: I0714 21:21:15.836083 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5fafb31b40b9757ef62bfe177d9295da-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5fafb31b40b9757ef62bfe177d9295da\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:21:15.836411 kubelet[2291]: E0714 21:21:15.836236 2291 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost" Jul 14 21:21:16.024635 update_engine[1504]: I20250714 21:21:16.024518 1504 update_attempter.cc:509] Updating boot flags... Jul 14 21:21:16.083988 kubelet[2291]: E0714 21:21:16.083876 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:16.084881 containerd[1514]: time="2025-07-14T21:21:16.084816125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5fafb31b40b9757ef62bfe177d9295da,Namespace:kube-system,Attempt:0,}" Jul 14 21:21:16.089151 kubelet[2291]: E0714 21:21:16.089119 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:16.089730 containerd[1514]: time="2025-07-14T21:21:16.089680975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 14 21:21:16.093963 kubelet[2291]: E0714 21:21:16.093905 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:16.094371 containerd[1514]: 
time="2025-07-14T21:21:16.094329465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 14 21:21:16.285396 kubelet[2291]: E0714 21:21:16.285240 2291 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.72:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:21:16.537957 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2331) Jul 14 21:21:16.585965 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2334) Jul 14 21:21:16.631932 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2334) Jul 14 21:21:16.644814 kubelet[2291]: I0714 21:21:16.643842 2291 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:21:16.644814 kubelet[2291]: E0714 21:21:16.644326 2291 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost" Jul 14 21:21:17.086675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2166423197.mount: Deactivated successfully. 
Jul 14 21:21:17.099930 containerd[1514]: time="2025-07-14T21:21:17.099831658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:21:17.101183 containerd[1514]: time="2025-07-14T21:21:17.101127743Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:21:17.103518 containerd[1514]: time="2025-07-14T21:21:17.103433382Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 14 21:21:17.105386 containerd[1514]: time="2025-07-14T21:21:17.105293550Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 21:21:17.106724 containerd[1514]: time="2025-07-14T21:21:17.106659666Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:21:17.108593 containerd[1514]: time="2025-07-14T21:21:17.108475361Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:21:17.109932 containerd[1514]: time="2025-07-14T21:21:17.109278837Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 21:21:17.111793 containerd[1514]: time="2025-07-14T21:21:17.111715062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:21:17.114679 
containerd[1514]: time="2025-07-14T21:21:17.114615894Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.024818999s" Jul 14 21:21:17.115907 containerd[1514]: time="2025-07-14T21:21:17.115835203Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.030868062s" Jul 14 21:21:17.120513 containerd[1514]: time="2025-07-14T21:21:17.120446812Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.026010895s" Jul 14 21:21:17.234030 kubelet[2291]: E0714 21:21:17.233945 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="3.2s" Jul 14 21:21:17.284086 containerd[1514]: time="2025-07-14T21:21:17.283620851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:21:17.284086 containerd[1514]: time="2025-07-14T21:21:17.283730618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:21:17.284086 containerd[1514]: time="2025-07-14T21:21:17.283751507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:21:17.284086 containerd[1514]: time="2025-07-14T21:21:17.283637843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:21:17.284086 containerd[1514]: time="2025-07-14T21:21:17.283719517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:21:17.284086 containerd[1514]: time="2025-07-14T21:21:17.283738313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:21:17.284086 containerd[1514]: time="2025-07-14T21:21:17.283845465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:21:17.294146 containerd[1514]: time="2025-07-14T21:21:17.293802455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:21:17.298600 containerd[1514]: time="2025-07-14T21:21:17.297354915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:21:17.298600 containerd[1514]: time="2025-07-14T21:21:17.297522692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:21:17.298600 containerd[1514]: time="2025-07-14T21:21:17.297566425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:21:17.298600 containerd[1514]: time="2025-07-14T21:21:17.297682734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:21:17.327240 systemd[1]: Started cri-containerd-6006a4bad567ec95b7704eaff85dd3edc8de7c9684a007579974afb20baf0951.scope - libcontainer container 6006a4bad567ec95b7704eaff85dd3edc8de7c9684a007579974afb20baf0951. Jul 14 21:21:17.332446 systemd[1]: Started cri-containerd-0223ff6b02552550b58404e4ffd5571206fc7cad4449617175d12d667de3f93f.scope - libcontainer container 0223ff6b02552550b58404e4ffd5571206fc7cad4449617175d12d667de3f93f. Jul 14 21:21:17.340765 systemd[1]: Started cri-containerd-adb3f43da312177fe3409074bb1a103c5c8d4ff9f69974b6c8426e102e435d65.scope - libcontainer container adb3f43da312177fe3409074bb1a103c5c8d4ff9f69974b6c8426e102e435d65. Jul 14 21:21:17.430036 containerd[1514]: time="2025-07-14T21:21:17.429965308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5fafb31b40b9757ef62bfe177d9295da,Namespace:kube-system,Attempt:0,} returns sandbox id \"6006a4bad567ec95b7704eaff85dd3edc8de7c9684a007579974afb20baf0951\"" Jul 14 21:21:17.434749 kubelet[2291]: E0714 21:21:17.434715 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:17.438185 containerd[1514]: time="2025-07-14T21:21:17.438045058Z" level=info msg="CreateContainer within sandbox \"6006a4bad567ec95b7704eaff85dd3edc8de7c9684a007579974afb20baf0951\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 14 21:21:17.447744 containerd[1514]: time="2025-07-14T21:21:17.447684909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"0223ff6b02552550b58404e4ffd5571206fc7cad4449617175d12d667de3f93f\"" Jul 14 21:21:17.452738 kubelet[2291]: E0714 21:21:17.452695 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:17.456144 containerd[1514]: time="2025-07-14T21:21:17.456058434Z" level=info msg="CreateContainer within sandbox \"0223ff6b02552550b58404e4ffd5571206fc7cad4449617175d12d667de3f93f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 14 21:21:17.457921 containerd[1514]: time="2025-07-14T21:21:17.456440544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"adb3f43da312177fe3409074bb1a103c5c8d4ff9f69974b6c8426e102e435d65\"" Jul 14 21:21:17.457977 kubelet[2291]: E0714 21:21:17.457645 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:17.460357 containerd[1514]: time="2025-07-14T21:21:17.460320702Z" level=info msg="CreateContainer within sandbox \"adb3f43da312177fe3409074bb1a103c5c8d4ff9f69974b6c8426e102e435d65\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 14 21:21:17.478652 containerd[1514]: time="2025-07-14T21:21:17.478441351Z" level=info msg="CreateContainer within sandbox \"6006a4bad567ec95b7704eaff85dd3edc8de7c9684a007579974afb20baf0951\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a79b380fe35ea233b228e8cfce4fcd54ac27163bca40e60670051f6142452d25\"" Jul 14 21:21:17.479431 containerd[1514]: time="2025-07-14T21:21:17.479376445Z" level=info msg="StartContainer for \"a79b380fe35ea233b228e8cfce4fcd54ac27163bca40e60670051f6142452d25\"" Jul 14 21:21:17.495513 containerd[1514]: 
time="2025-07-14T21:21:17.495326669Z" level=info msg="CreateContainer within sandbox \"0223ff6b02552550b58404e4ffd5571206fc7cad4449617175d12d667de3f93f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c72c52e1d69bfb9a9602309b6194dcc092d093aab4bc7ef11f894677e6f9d3ac\"" Jul 14 21:21:17.496649 containerd[1514]: time="2025-07-14T21:21:17.496575985Z" level=info msg="StartContainer for \"c72c52e1d69bfb9a9602309b6194dcc092d093aab4bc7ef11f894677e6f9d3ac\"" Jul 14 21:21:17.504010 containerd[1514]: time="2025-07-14T21:21:17.503851167Z" level=info msg="CreateContainer within sandbox \"adb3f43da312177fe3409074bb1a103c5c8d4ff9f69974b6c8426e102e435d65\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a178613381fdfcf0d18bab503d0234be484e2667b9eca2aa2781d82f410f2280\"" Jul 14 21:21:17.505126 containerd[1514]: time="2025-07-14T21:21:17.505061701Z" level=info msg="StartContainer for \"a178613381fdfcf0d18bab503d0234be484e2667b9eca2aa2781d82f410f2280\"" Jul 14 21:21:17.568344 systemd[1]: Started cri-containerd-a79b380fe35ea233b228e8cfce4fcd54ac27163bca40e60670051f6142452d25.scope - libcontainer container a79b380fe35ea233b228e8cfce4fcd54ac27163bca40e60670051f6142452d25. Jul 14 21:21:17.594169 systemd[1]: Started cri-containerd-a178613381fdfcf0d18bab503d0234be484e2667b9eca2aa2781d82f410f2280.scope - libcontainer container a178613381fdfcf0d18bab503d0234be484e2667b9eca2aa2781d82f410f2280. Jul 14 21:21:17.600069 systemd[1]: Started cri-containerd-c72c52e1d69bfb9a9602309b6194dcc092d093aab4bc7ef11f894677e6f9d3ac.scope - libcontainer container c72c52e1d69bfb9a9602309b6194dcc092d093aab4bc7ef11f894677e6f9d3ac. 
Jul 14 21:21:17.613833 kubelet[2291]: W0714 21:21:17.613746 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Jul 14 21:21:17.613966 kubelet[2291]: E0714 21:21:17.613829 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:21:17.664472 containerd[1514]: time="2025-07-14T21:21:17.664400346Z" level=info msg="StartContainer for \"a178613381fdfcf0d18bab503d0234be484e2667b9eca2aa2781d82f410f2280\" returns successfully" Jul 14 21:21:17.674276 containerd[1514]: time="2025-07-14T21:21:17.674160495Z" level=info msg="StartContainer for \"a79b380fe35ea233b228e8cfce4fcd54ac27163bca40e60670051f6142452d25\" returns successfully" Jul 14 21:21:17.689082 containerd[1514]: time="2025-07-14T21:21:17.689006547Z" level=info msg="StartContainer for \"c72c52e1d69bfb9a9602309b6194dcc092d093aab4bc7ef11f894677e6f9d3ac\" returns successfully" Jul 14 21:21:18.246162 kubelet[2291]: I0714 21:21:18.246100 2291 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:21:18.263886 kubelet[2291]: E0714 21:21:18.263736 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:18.267664 kubelet[2291]: E0714 21:21:18.267585 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:18.269003 kubelet[2291]: E0714 21:21:18.268980 2291 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:19.272846 kubelet[2291]: E0714 21:21:19.272795 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:19.272846 kubelet[2291]: E0714 21:21:19.272809 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:19.537548 kubelet[2291]: I0714 21:21:19.537345 2291 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 21:21:19.537548 kubelet[2291]: E0714 21:21:19.537396 2291 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 14 21:21:20.213714 kubelet[2291]: I0714 21:21:20.213568 2291 apiserver.go:52] "Watching apiserver" Jul 14 21:21:20.229347 kubelet[2291]: I0714 21:21:20.229258 2291 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 21:21:21.406170 systemd[1]: Reload requested from client PID 2588 ('systemctl') (unit session-9.scope)... Jul 14 21:21:21.406191 systemd[1]: Reloading... Jul 14 21:21:21.508932 zram_generator::config[2638]: No configuration found. Jul 14 21:21:21.631307 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:21:21.755982 systemd[1]: Reloading finished in 349 ms. Jul 14 21:21:21.790730 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:21:21.804589 systemd[1]: kubelet.service: Deactivated successfully. 
Jul 14 21:21:21.804980 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:21:21.805053 systemd[1]: kubelet.service: Consumed 937ms CPU time, 133.7M memory peak. Jul 14 21:21:21.815216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:21:22.023587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:21:22.036354 (kubelet)[2677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 21:21:22.082549 kubelet[2677]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 21:21:22.082549 kubelet[2677]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 14 21:21:22.082549 kubelet[2677]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 14 21:21:22.083070 kubelet[2677]: I0714 21:21:22.082603 2677 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 14 21:21:22.088817 kubelet[2677]: I0714 21:21:22.088764 2677 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 14 21:21:22.088817 kubelet[2677]: I0714 21:21:22.088794 2677 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 14 21:21:22.089027 kubelet[2677]: I0714 21:21:22.089009 2677 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 14 21:21:22.090527 kubelet[2677]: I0714 21:21:22.090498 2677 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 14 21:21:22.092905 kubelet[2677]: I0714 21:21:22.092558 2677 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 14 21:21:22.097668 kubelet[2677]: E0714 21:21:22.097617 2677 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 14 21:21:22.097668 kubelet[2677]: I0714 21:21:22.097660 2677 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 14 21:21:22.102870 kubelet[2677]: I0714 21:21:22.102812 2677 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 14 21:21:22.103012 kubelet[2677]: I0714 21:21:22.102947 2677 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 14 21:21:22.103126 kubelet[2677]: I0714 21:21:22.103072 2677 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 14 21:21:22.103279 kubelet[2677]: I0714 21:21:22.103102 2677 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 14 21:21:22.103279 kubelet[2677]: I0714 21:21:22.103277 2677 topology_manager.go:138] "Creating topology manager with none policy"
Jul 14 21:21:22.103397 kubelet[2677]: I0714 21:21:22.103287 2677 container_manager_linux.go:300] "Creating device plugin manager"
Jul 14 21:21:22.103397 kubelet[2677]: I0714 21:21:22.103314 2677 state_mem.go:36] "Initialized new in-memory state store"
Jul 14 21:21:22.103444 kubelet[2677]: I0714 21:21:22.103421 2677 kubelet.go:408] "Attempting to sync node with API server"
Jul 14 21:21:22.103444 kubelet[2677]: I0714 21:21:22.103433 2677 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 14 21:21:22.103490 kubelet[2677]: I0714 21:21:22.103465 2677 kubelet.go:314] "Adding apiserver pod source"
Jul 14 21:21:22.103490 kubelet[2677]: I0714 21:21:22.103477 2677 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 14 21:21:22.104581 kubelet[2677]: I0714 21:21:22.104508 2677 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jul 14 21:21:22.105043 kubelet[2677]: I0714 21:21:22.105012 2677 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 14 21:21:22.107924 kubelet[2677]: I0714 21:21:22.105532 2677 server.go:1274] "Started kubelet"
Jul 14 21:21:22.107924 kubelet[2677]: I0714 21:21:22.105753 2677 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 14 21:21:22.107924 kubelet[2677]: I0714 21:21:22.105812 2677 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 14 21:21:22.107924 kubelet[2677]: I0714 21:21:22.106144 2677 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 14 21:21:22.107924 kubelet[2677]: I0714 21:21:22.106679 2677 server.go:449] "Adding debug handlers to kubelet server"
Jul 14 21:21:22.107924 kubelet[2677]: I0714 21:21:22.107618 2677 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 14 21:21:22.111748 kubelet[2677]: I0714 21:21:22.110146 2677 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 14 21:21:22.116257 kubelet[2677]: E0714 21:21:22.116078 2677 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 14 21:21:22.117493 kubelet[2677]: I0714 21:21:22.117464 2677 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 14 21:21:22.117648 kubelet[2677]: I0714 21:21:22.117631 2677 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 14 21:21:22.118977 kubelet[2677]: I0714 21:21:22.117974 2677 reconciler.go:26] "Reconciler: start to sync state"
Jul 14 21:21:22.119443 kubelet[2677]: I0714 21:21:22.119244 2677 factory.go:221] Registration of the systemd container factory successfully
Jul 14 21:21:22.119443 kubelet[2677]: I0714 21:21:22.119353 2677 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 14 21:21:22.121326 kubelet[2677]: I0714 21:21:22.121303 2677 factory.go:221] Registration of the containerd container factory successfully
Jul 14 21:21:22.129616 kubelet[2677]: I0714 21:21:22.129548 2677 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 14 21:21:22.131205 kubelet[2677]: I0714 21:21:22.131172 2677 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 14 21:21:22.131270 kubelet[2677]: I0714 21:21:22.131228 2677 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 14 21:21:22.131270 kubelet[2677]: I0714 21:21:22.131250 2677 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 14 21:21:22.131323 kubelet[2677]: E0714 21:21:22.131301 2677 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 14 21:21:22.155400 kubelet[2677]: I0714 21:21:22.155362 2677 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 14 21:21:22.155400 kubelet[2677]: I0714 21:21:22.155383 2677 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 14 21:21:22.155400 kubelet[2677]: I0714 21:21:22.155401 2677 state_mem.go:36] "Initialized new in-memory state store"
Jul 14 21:21:22.155605 kubelet[2677]: I0714 21:21:22.155536 2677 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 14 21:21:22.155605 kubelet[2677]: I0714 21:21:22.155546 2677 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 14 21:21:22.155605 kubelet[2677]: I0714 21:21:22.155564 2677 policy_none.go:49] "None policy: Start"
Jul 14 21:21:22.156162 kubelet[2677]: I0714 21:21:22.156129 2677 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 14 21:21:22.156211 kubelet[2677]: I0714 21:21:22.156171 2677 state_mem.go:35] "Initializing new in-memory state store"
Jul 14 21:21:22.156369 kubelet[2677]: I0714 21:21:22.156344 2677 state_mem.go:75] "Updated machine memory state"
Jul 14 21:21:22.161916 kubelet[2677]: I0714 21:21:22.161414 2677 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 14 21:21:22.161916 kubelet[2677]: I0714 21:21:22.161626 2677 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 14 21:21:22.161916 kubelet[2677]: I0714 21:21:22.161640 2677 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 14 21:21:22.164497 kubelet[2677]: I0714 21:21:22.164115 2677 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 14 21:21:22.269836 kubelet[2677]: I0714 21:21:22.269793 2677 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 14 21:21:22.276554 kubelet[2677]: I0714 21:21:22.276423 2677 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Jul 14 21:21:22.276554 kubelet[2677]: I0714 21:21:22.276517 2677 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 14 21:21:22.318863 kubelet[2677]: I0714 21:21:22.318792 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:21:22.318863 kubelet[2677]: I0714 21:21:22.318835 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5fafb31b40b9757ef62bfe177d9295da-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5fafb31b40b9757ef62bfe177d9295da\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 21:21:22.318863 kubelet[2677]: I0714 21:21:22.318851 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5fafb31b40b9757ef62bfe177d9295da-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5fafb31b40b9757ef62bfe177d9295da\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 21:21:22.318863 kubelet[2677]: I0714 21:21:22.318869 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:21:22.319137 kubelet[2677]: I0714 21:21:22.318884 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:21:22.319137 kubelet[2677]: I0714 21:21:22.318933 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5fafb31b40b9757ef62bfe177d9295da-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5fafb31b40b9757ef62bfe177d9295da\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 21:21:22.319137 kubelet[2677]: I0714 21:21:22.319065 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:21:22.319137 kubelet[2677]: I0714 21:21:22.319126 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:21:22.319252 kubelet[2677]: I0714 21:21:22.319149 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost"
Jul 14 21:21:22.407541 sudo[2715]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 14 21:21:22.407939 sudo[2715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jul 14 21:21:22.554496 kubelet[2677]: E0714 21:21:22.554341 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:22.555012 kubelet[2677]: E0714 21:21:22.554971 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:22.555279 kubelet[2677]: E0714 21:21:22.555259 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:22.906779 sudo[2715]: pam_unix(sudo:session): session closed for user root
Jul 14 21:21:23.103840 kubelet[2677]: I0714 21:21:23.103807 2677 apiserver.go:52] "Watching apiserver"
Jul 14 21:21:23.118403 kubelet[2677]: I0714 21:21:23.118355 2677 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 14 21:21:23.145392 kubelet[2677]: E0714 21:21:23.145344 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:23.146344 kubelet[2677]: E0714 21:21:23.145553 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:23.146344 kubelet[2677]: E0714 21:21:23.145644 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:23.538575 kubelet[2677]: I0714 21:21:23.537300 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.537274108 podStartE2EDuration="1.537274108s" podCreationTimestamp="2025-07-14 21:21:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:21:23.365232498 +0000 UTC m=+1.324289019" watchObservedRunningTime="2025-07-14 21:21:23.537274108 +0000 UTC m=+1.496330629"
Jul 14 21:21:23.580764 kubelet[2677]: I0714 21:21:23.580638 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.58060985 podStartE2EDuration="1.58060985s" podCreationTimestamp="2025-07-14 21:21:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:21:23.540535769 +0000 UTC m=+1.499592320" watchObservedRunningTime="2025-07-14 21:21:23.58060985 +0000 UTC m=+1.539666381"
Jul 14 21:21:23.581045 kubelet[2677]: I0714 21:21:23.580792 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.580784229 podStartE2EDuration="1.580784229s" podCreationTimestamp="2025-07-14 21:21:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:21:23.58078493 +0000 UTC m=+1.539841471" watchObservedRunningTime="2025-07-14 21:21:23.580784229 +0000 UTC m=+1.539840770"
Jul 14 21:21:24.147220 kubelet[2677]: E0714 21:21:24.147179 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:24.147843 kubelet[2677]: E0714 21:21:24.147426 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:24.612563 sudo[1718]: pam_unix(sudo:session): session closed for user root
Jul 14 21:21:24.614171 sshd[1717]: Connection closed by 10.0.0.1 port 45716
Jul 14 21:21:24.649701 sshd-session[1714]: pam_unix(sshd:session): session closed for user core
Jul 14 21:21:24.679685 systemd[1]: sshd@8-10.0.0.72:22-10.0.0.1:45716.service: Deactivated successfully.
Jul 14 21:21:24.682137 systemd[1]: session-9.scope: Deactivated successfully.
Jul 14 21:21:24.682406 systemd[1]: session-9.scope: Consumed 4.737s CPU time, 254.8M memory peak.
Jul 14 21:21:24.684821 systemd-logind[1501]: Session 9 logged out. Waiting for processes to exit.
Jul 14 21:21:24.686073 systemd-logind[1501]: Removed session 9.
Jul 14 21:21:27.136716 kubelet[2677]: I0714 21:21:27.136670 2677 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 14 21:21:27.137226 containerd[1514]: time="2025-07-14T21:21:27.137094568Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 14 21:21:27.137489 kubelet[2677]: I0714 21:21:27.137383 2677 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 14 21:21:27.810414 systemd[1]: Created slice kubepods-besteffort-pod76f6d885_db39_4c5a_bfa2_e4cf7cf3013b.slice - libcontainer container kubepods-besteffort-pod76f6d885_db39_4c5a_bfa2_e4cf7cf3013b.slice.
Jul 14 21:21:27.825482 systemd[1]: Created slice kubepods-burstable-pod6eb46e6c_a908_4286_9cd2_b7f9d9c52ed5.slice - libcontainer container kubepods-burstable-pod6eb46e6c_a908_4286_9cd2_b7f9d9c52ed5.slice.
Jul 14 21:21:27.844100 kubelet[2677]: E0714 21:21:27.844065 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:27.846142 kubelet[2677]: I0714 21:21:27.846118 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-bpf-maps\") pod \"cilium-hg56m\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") " pod="kube-system/cilium-hg56m"
Jul 14 21:21:27.846218 kubelet[2677]: I0714 21:21:27.846147 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8v8f\" (UniqueName: \"kubernetes.io/projected/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-kube-api-access-z8v8f\") pod \"cilium-hg56m\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") " pod="kube-system/cilium-hg56m"
Jul 14 21:21:27.846218 kubelet[2677]: I0714 21:21:27.846165 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-cilium-run\") pod \"cilium-hg56m\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") " pod="kube-system/cilium-hg56m"
Jul 14 21:21:27.846218 kubelet[2677]: I0714 21:21:27.846181 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-host-proc-sys-kernel\") pod \"cilium-hg56m\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") " pod="kube-system/cilium-hg56m"
Jul 14 21:21:27.846218 kubelet[2677]: I0714 21:21:27.846195 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-hubble-tls\") pod \"cilium-hg56m\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") " pod="kube-system/cilium-hg56m"
Jul 14 21:21:27.846218 kubelet[2677]: I0714 21:21:27.846209 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76f6d885-db39-4c5a-bfa2-e4cf7cf3013b-xtables-lock\") pod \"kube-proxy-tm6ml\" (UID: \"76f6d885-db39-4c5a-bfa2-e4cf7cf3013b\") " pod="kube-system/kube-proxy-tm6ml"
Jul 14 21:21:27.846345 kubelet[2677]: I0714 21:21:27.846225 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-lib-modules\") pod \"cilium-hg56m\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") " pod="kube-system/cilium-hg56m"
Jul 14 21:21:27.846345 kubelet[2677]: I0714 21:21:27.846240 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-hostproc\") pod \"cilium-hg56m\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") " pod="kube-system/cilium-hg56m"
Jul 14 21:21:27.846345 kubelet[2677]: I0714 21:21:27.846254 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-xtables-lock\") pod \"cilium-hg56m\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") " pod="kube-system/cilium-hg56m"
Jul 14 21:21:27.846345 kubelet[2677]: I0714 21:21:27.846268 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-cilium-config-path\") pod \"cilium-hg56m\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") " pod="kube-system/cilium-hg56m"
Jul 14 21:21:27.846345 kubelet[2677]: I0714 21:21:27.846283 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76f6d885-db39-4c5a-bfa2-e4cf7cf3013b-lib-modules\") pod \"kube-proxy-tm6ml\" (UID: \"76f6d885-db39-4c5a-bfa2-e4cf7cf3013b\") " pod="kube-system/kube-proxy-tm6ml"
Jul 14 21:21:27.846471 kubelet[2677]: I0714 21:21:27.846336 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf24s\" (UniqueName: \"kubernetes.io/projected/76f6d885-db39-4c5a-bfa2-e4cf7cf3013b-kube-api-access-jf24s\") pod \"kube-proxy-tm6ml\" (UID: \"76f6d885-db39-4c5a-bfa2-e4cf7cf3013b\") " pod="kube-system/kube-proxy-tm6ml"
Jul 14 21:21:27.846471 kubelet[2677]: I0714 21:21:27.846385 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/76f6d885-db39-4c5a-bfa2-e4cf7cf3013b-kube-proxy\") pod \"kube-proxy-tm6ml\" (UID: \"76f6d885-db39-4c5a-bfa2-e4cf7cf3013b\") " pod="kube-system/kube-proxy-tm6ml"
Jul 14 21:21:27.846471 kubelet[2677]: I0714 21:21:27.846405 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-clustermesh-secrets\") pod \"cilium-hg56m\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") " pod="kube-system/cilium-hg56m"
Jul 14 21:21:27.846471 kubelet[2677]: I0714 21:21:27.846421 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-host-proc-sys-net\") pod \"cilium-hg56m\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") " pod="kube-system/cilium-hg56m"
Jul 14 21:21:27.846471 kubelet[2677]: I0714 21:21:27.846437 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-cilium-cgroup\") pod \"cilium-hg56m\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") " pod="kube-system/cilium-hg56m"
Jul 14 21:21:27.846589 kubelet[2677]: I0714 21:21:27.846452 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-cni-path\") pod \"cilium-hg56m\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") " pod="kube-system/cilium-hg56m"
Jul 14 21:21:27.846589 kubelet[2677]: I0714 21:21:27.846477 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-etc-cni-netd\") pod \"cilium-hg56m\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") " pod="kube-system/cilium-hg56m"
Jul 14 21:21:28.009737 systemd[1]: Created slice kubepods-besteffort-pod746e776b_60c4_4008_ad9e_ba1cfe05381d.slice - libcontainer container kubepods-besteffort-pod746e776b_60c4_4008_ad9e_ba1cfe05381d.slice.
Jul 14 21:21:28.048063 kubelet[2677]: I0714 21:21:28.048002 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7nql\" (UniqueName: \"kubernetes.io/projected/746e776b-60c4-4008-ad9e-ba1cfe05381d-kube-api-access-g7nql\") pod \"cilium-operator-5d85765b45-fpctd\" (UID: \"746e776b-60c4-4008-ad9e-ba1cfe05381d\") " pod="kube-system/cilium-operator-5d85765b45-fpctd"
Jul 14 21:21:28.048063 kubelet[2677]: I0714 21:21:28.048046 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/746e776b-60c4-4008-ad9e-ba1cfe05381d-cilium-config-path\") pod \"cilium-operator-5d85765b45-fpctd\" (UID: \"746e776b-60c4-4008-ad9e-ba1cfe05381d\") " pod="kube-system/cilium-operator-5d85765b45-fpctd"
Jul 14 21:21:28.123330 kubelet[2677]: E0714 21:21:28.123173 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:28.124389 containerd[1514]: time="2025-07-14T21:21:28.124201336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tm6ml,Uid:76f6d885-db39-4c5a-bfa2-e4cf7cf3013b,Namespace:kube-system,Attempt:0,}"
Jul 14 21:21:28.127869 kubelet[2677]: E0714 21:21:28.127841 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:28.128295 containerd[1514]: time="2025-07-14T21:21:28.128260851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hg56m,Uid:6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5,Namespace:kube-system,Attempt:0,}"
Jul 14 21:21:28.152946 kubelet[2677]: E0714 21:21:28.152882 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:28.224553 containerd[1514]: time="2025-07-14T21:21:28.223819121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:21:28.224553 containerd[1514]: time="2025-07-14T21:21:28.224485254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:21:28.224553 containerd[1514]: time="2025-07-14T21:21:28.224496655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:21:28.225073 containerd[1514]: time="2025-07-14T21:21:28.224754801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:21:28.227928 containerd[1514]: time="2025-07-14T21:21:28.226721510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:21:28.227928 containerd[1514]: time="2025-07-14T21:21:28.226790209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:21:28.227928 containerd[1514]: time="2025-07-14T21:21:28.226809856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:21:28.227928 containerd[1514]: time="2025-07-14T21:21:28.226911057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:21:28.251078 systemd[1]: Started cri-containerd-a34ab8af695ab43519672badf29d3f4db8d9d11ab5526c243ebe261c5005bfa4.scope - libcontainer container a34ab8af695ab43519672badf29d3f4db8d9d11ab5526c243ebe261c5005bfa4.
Jul 14 21:21:28.254733 systemd[1]: Started cri-containerd-719207eb1020861d1a10481c50fc4d01538a676f0d31a7733c3a229394c1154b.scope - libcontainer container 719207eb1020861d1a10481c50fc4d01538a676f0d31a7733c3a229394c1154b.
Jul 14 21:21:28.279835 containerd[1514]: time="2025-07-14T21:21:28.279762692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hg56m,Uid:6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5,Namespace:kube-system,Attempt:0,} returns sandbox id \"a34ab8af695ab43519672badf29d3f4db8d9d11ab5526c243ebe261c5005bfa4\""
Jul 14 21:21:28.280596 kubelet[2677]: E0714 21:21:28.280566 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:28.282135 containerd[1514]: time="2025-07-14T21:21:28.281800695Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 14 21:21:28.285795 containerd[1514]: time="2025-07-14T21:21:28.285757437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tm6ml,Uid:76f6d885-db39-4c5a-bfa2-e4cf7cf3013b,Namespace:kube-system,Attempt:0,} returns sandbox id \"719207eb1020861d1a10481c50fc4d01538a676f0d31a7733c3a229394c1154b\""
Jul 14 21:21:28.287450 kubelet[2677]: E0714 21:21:28.287420 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:28.289215 containerd[1514]: time="2025-07-14T21:21:28.289180295Z" level=info msg="CreateContainer within sandbox \"719207eb1020861d1a10481c50fc4d01538a676f0d31a7733c3a229394c1154b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 14 21:21:28.314721 kubelet[2677]: E0714 21:21:28.314695 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:28.315083 containerd[1514]: time="2025-07-14T21:21:28.315025276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-fpctd,Uid:746e776b-60c4-4008-ad9e-ba1cfe05381d,Namespace:kube-system,Attempt:0,}"
Jul 14 21:21:28.340718 kubelet[2677]: E0714 21:21:28.340684 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:28.398839 containerd[1514]: time="2025-07-14T21:21:28.398562396Z" level=info msg="CreateContainer within sandbox \"719207eb1020861d1a10481c50fc4d01538a676f0d31a7733c3a229394c1154b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9678e83ba75a0c2b16151e445b4cda2b70a34d5baa1b9b76f782f3f6e62c11a5\""
Jul 14 21:21:28.400773 containerd[1514]: time="2025-07-14T21:21:28.399265698Z" level=info msg="StartContainer for \"9678e83ba75a0c2b16151e445b4cda2b70a34d5baa1b9b76f782f3f6e62c11a5\""
Jul 14 21:21:28.408420 containerd[1514]: time="2025-07-14T21:21:28.408296464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:21:28.408593 containerd[1514]: time="2025-07-14T21:21:28.408546755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:21:28.408730 containerd[1514]: time="2025-07-14T21:21:28.408696827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:21:28.408961 containerd[1514]: time="2025-07-14T21:21:28.408909467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:21:28.437083 systemd[1]: Started cri-containerd-9678e83ba75a0c2b16151e445b4cda2b70a34d5baa1b9b76f782f3f6e62c11a5.scope - libcontainer container 9678e83ba75a0c2b16151e445b4cda2b70a34d5baa1b9b76f782f3f6e62c11a5.
Jul 14 21:21:28.438799 systemd[1]: Started cri-containerd-ddd1de317c5a12d153f2d810fd484226d2d7d22692ed33e6447a4b93bcbee2a0.scope - libcontainer container ddd1de317c5a12d153f2d810fd484226d2d7d22692ed33e6447a4b93bcbee2a0.
Jul 14 21:21:28.485388 containerd[1514]: time="2025-07-14T21:21:28.484887279Z" level=info msg="StartContainer for \"9678e83ba75a0c2b16151e445b4cda2b70a34d5baa1b9b76f782f3f6e62c11a5\" returns successfully"
Jul 14 21:21:28.498465 containerd[1514]: time="2025-07-14T21:21:28.498410555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-fpctd,Uid:746e776b-60c4-4008-ad9e-ba1cfe05381d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ddd1de317c5a12d153f2d810fd484226d2d7d22692ed33e6447a4b93bcbee2a0\""
Jul 14 21:21:28.499792 kubelet[2677]: E0714 21:21:28.499257 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:29.156475 kubelet[2677]: E0714 21:21:29.156419 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:29.158691 kubelet[2677]: E0714 21:21:29.158616 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:29.167613 kubelet[2677]: I0714 21:21:29.167535 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tm6ml" podStartSLOduration=2.167509981 podStartE2EDuration="2.167509981s" podCreationTimestamp="2025-07-14 21:21:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:21:29.167490204 +0000 UTC m=+7.126546725" watchObservedRunningTime="2025-07-14 21:21:29.167509981 +0000 UTC m=+7.126566503"
Jul 14 21:21:30.160376 kubelet[2677]: E0714 21:21:30.160334 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:34.008698 kubelet[2677]: E0714 21:21:34.008659 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:41.373110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4117384965.mount: Deactivated successfully.
Jul 14 21:21:44.829979 containerd[1514]: time="2025-07-14T21:21:44.829836742Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:21:44.833083 containerd[1514]: time="2025-07-14T21:21:44.833018926Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jul 14 21:21:44.838693 containerd[1514]: time="2025-07-14T21:21:44.838636013Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:21:44.840268 containerd[1514]: time="2025-07-14T21:21:44.840218374Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 16.558347546s" Jul 14 21:21:44.840268 containerd[1514]: time="2025-07-14T21:21:44.840255293Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 14 21:21:44.841637 containerd[1514]: time="2025-07-14T21:21:44.841608794Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 14 21:21:44.843871 containerd[1514]: time="2025-07-14T21:21:44.843816929Z" level=info msg="CreateContainer within sandbox \"a34ab8af695ab43519672badf29d3f4db8d9d11ab5526c243ebe261c5005bfa4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 14 21:21:44.903609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3046935705.mount: Deactivated successfully. Jul 14 21:21:44.925222 containerd[1514]: time="2025-07-14T21:21:44.925124999Z" level=info msg="CreateContainer within sandbox \"a34ab8af695ab43519672badf29d3f4db8d9d11ab5526c243ebe261c5005bfa4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"95b1ffb0cc038c4350c1b06ea9b8395de54ed7c0cad0b4167ee8561e961a0ead\"" Jul 14 21:21:44.927019 containerd[1514]: time="2025-07-14T21:21:44.926672084Z" level=info msg="StartContainer for \"95b1ffb0cc038c4350c1b06ea9b8395de54ed7c0cad0b4167ee8561e961a0ead\"" Jul 14 21:21:44.967126 systemd[1]: Started cri-containerd-95b1ffb0cc038c4350c1b06ea9b8395de54ed7c0cad0b4167ee8561e961a0ead.scope - libcontainer container 95b1ffb0cc038c4350c1b06ea9b8395de54ed7c0cad0b4167ee8561e961a0ead. 
Jul 14 21:21:44.999734 containerd[1514]: time="2025-07-14T21:21:44.999677033Z" level=info msg="StartContainer for \"95b1ffb0cc038c4350c1b06ea9b8395de54ed7c0cad0b4167ee8561e961a0ead\" returns successfully" Jul 14 21:21:45.010000 systemd[1]: cri-containerd-95b1ffb0cc038c4350c1b06ea9b8395de54ed7c0cad0b4167ee8561e961a0ead.scope: Deactivated successfully. Jul 14 21:21:45.756618 kubelet[2677]: E0714 21:21:45.756544 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:45.899637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95b1ffb0cc038c4350c1b06ea9b8395de54ed7c0cad0b4167ee8561e961a0ead-rootfs.mount: Deactivated successfully. Jul 14 21:21:46.189218 containerd[1514]: time="2025-07-14T21:21:46.189122429Z" level=info msg="shim disconnected" id=95b1ffb0cc038c4350c1b06ea9b8395de54ed7c0cad0b4167ee8561e961a0ead namespace=k8s.io Jul 14 21:21:46.189218 containerd[1514]: time="2025-07-14T21:21:46.189207640Z" level=warning msg="cleaning up after shim disconnected" id=95b1ffb0cc038c4350c1b06ea9b8395de54ed7c0cad0b4167ee8561e961a0ead namespace=k8s.io Jul 14 21:21:46.189218 containerd[1514]: time="2025-07-14T21:21:46.189223479Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 21:21:46.760076 kubelet[2677]: E0714 21:21:46.760031 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:46.762504 containerd[1514]: time="2025-07-14T21:21:46.762431473Z" level=info msg="CreateContainer within sandbox \"a34ab8af695ab43519672badf29d3f4db8d9d11ab5526c243ebe261c5005bfa4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 14 21:21:47.717226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2620915382.mount: Deactivated successfully. 
Jul 14 21:21:47.725011 containerd[1514]: time="2025-07-14T21:21:47.724958721Z" level=info msg="CreateContainer within sandbox \"a34ab8af695ab43519672badf29d3f4db8d9d11ab5526c243ebe261c5005bfa4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bb630bf170c601c7c9bb0d1f8a23c7cb198eca4d7006e6d00da2df443be16d4a\"" Jul 14 21:21:47.725715 containerd[1514]: time="2025-07-14T21:21:47.725659867Z" level=info msg="StartContainer for \"bb630bf170c601c7c9bb0d1f8a23c7cb198eca4d7006e6d00da2df443be16d4a\"" Jul 14 21:21:47.766160 systemd[1]: Started cri-containerd-bb630bf170c601c7c9bb0d1f8a23c7cb198eca4d7006e6d00da2df443be16d4a.scope - libcontainer container bb630bf170c601c7c9bb0d1f8a23c7cb198eca4d7006e6d00da2df443be16d4a. Jul 14 21:21:47.796462 containerd[1514]: time="2025-07-14T21:21:47.796401164Z" level=info msg="StartContainer for \"bb630bf170c601c7c9bb0d1f8a23c7cb198eca4d7006e6d00da2df443be16d4a\" returns successfully" Jul 14 21:21:47.815167 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 21:21:47.815853 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 14 21:21:47.816264 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 14 21:21:47.825315 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 14 21:21:47.829099 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 14 21:21:47.829823 systemd[1]: cri-containerd-bb630bf170c601c7c9bb0d1f8a23c7cb198eca4d7006e6d00da2df443be16d4a.scope: Deactivated successfully. 
Jul 14 21:21:47.858288 containerd[1514]: time="2025-07-14T21:21:47.858196908Z" level=info msg="shim disconnected" id=bb630bf170c601c7c9bb0d1f8a23c7cb198eca4d7006e6d00da2df443be16d4a namespace=k8s.io Jul 14 21:21:47.858766 containerd[1514]: time="2025-07-14T21:21:47.858565160Z" level=warning msg="cleaning up after shim disconnected" id=bb630bf170c601c7c9bb0d1f8a23c7cb198eca4d7006e6d00da2df443be16d4a namespace=k8s.io Jul 14 21:21:47.858766 containerd[1514]: time="2025-07-14T21:21:47.858586280Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 21:21:47.863479 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 21:21:47.877700 containerd[1514]: time="2025-07-14T21:21:47.877620641Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:21:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 14 21:21:47.957802 systemd[1]: Started sshd@9-10.0.0.72:22-10.0.0.1:40288.service - OpenSSH per-connection server daemon (10.0.0.1:40288). Jul 14 21:21:48.024297 sshd[3211]: Accepted publickey for core from 10.0.0.1 port 40288 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc Jul 14 21:21:48.026408 sshd-session[3211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:21:48.032265 systemd-logind[1501]: New session 10 of user core. Jul 14 21:21:48.038175 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 14 21:21:48.182367 sshd[3213]: Connection closed by 10.0.0.1 port 40288 Jul 14 21:21:48.183002 sshd-session[3211]: pam_unix(sshd:session): session closed for user core Jul 14 21:21:48.188277 systemd[1]: sshd@9-10.0.0.72:22-10.0.0.1:40288.service: Deactivated successfully. Jul 14 21:21:48.190567 systemd[1]: session-10.scope: Deactivated successfully. Jul 14 21:21:48.191754 systemd-logind[1501]: Session 10 logged out. Waiting for processes to exit. 
Jul 14 21:21:48.193474 systemd-logind[1501]: Removed session 10. Jul 14 21:21:48.714180 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb630bf170c601c7c9bb0d1f8a23c7cb198eca4d7006e6d00da2df443be16d4a-rootfs.mount: Deactivated successfully. Jul 14 21:21:48.769066 kubelet[2677]: E0714 21:21:48.769018 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:48.774215 containerd[1514]: time="2025-07-14T21:21:48.773994988Z" level=info msg="CreateContainer within sandbox \"a34ab8af695ab43519672badf29d3f4db8d9d11ab5526c243ebe261c5005bfa4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 14 21:21:48.801187 containerd[1514]: time="2025-07-14T21:21:48.801104327Z" level=info msg="CreateContainer within sandbox \"a34ab8af695ab43519672badf29d3f4db8d9d11ab5526c243ebe261c5005bfa4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"58496c6932182f9aad17a2f097d3d93b1592c1085ac11279340f50c778b99972\"" Jul 14 21:21:48.802071 containerd[1514]: time="2025-07-14T21:21:48.802021628Z" level=info msg="StartContainer for \"58496c6932182f9aad17a2f097d3d93b1592c1085ac11279340f50c778b99972\"" Jul 14 21:21:48.856265 systemd[1]: Started cri-containerd-58496c6932182f9aad17a2f097d3d93b1592c1085ac11279340f50c778b99972.scope - libcontainer container 58496c6932182f9aad17a2f097d3d93b1592c1085ac11279340f50c778b99972. Jul 14 21:21:48.900268 containerd[1514]: time="2025-07-14T21:21:48.900125121Z" level=info msg="StartContainer for \"58496c6932182f9aad17a2f097d3d93b1592c1085ac11279340f50c778b99972\" returns successfully" Jul 14 21:21:48.902234 systemd[1]: cri-containerd-58496c6932182f9aad17a2f097d3d93b1592c1085ac11279340f50c778b99972.scope: Deactivated successfully. 
Jul 14 21:21:48.936817 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58496c6932182f9aad17a2f097d3d93b1592c1085ac11279340f50c778b99972-rootfs.mount: Deactivated successfully. Jul 14 21:21:49.185964 containerd[1514]: time="2025-07-14T21:21:49.185842572Z" level=info msg="shim disconnected" id=58496c6932182f9aad17a2f097d3d93b1592c1085ac11279340f50c778b99972 namespace=k8s.io Jul 14 21:21:49.185964 containerd[1514]: time="2025-07-14T21:21:49.185947178Z" level=warning msg="cleaning up after shim disconnected" id=58496c6932182f9aad17a2f097d3d93b1592c1085ac11279340f50c778b99972 namespace=k8s.io Jul 14 21:21:49.185964 containerd[1514]: time="2025-07-14T21:21:49.185959721Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 21:21:49.299121 containerd[1514]: time="2025-07-14T21:21:49.299042242Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:21:49.299938 containerd[1514]: time="2025-07-14T21:21:49.299855588Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 14 21:21:49.301315 containerd[1514]: time="2025-07-14T21:21:49.301237392Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:21:49.303011 containerd[1514]: time="2025-07-14T21:21:49.302970104Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", 
size \"18897442\" in 4.46132927s" Jul 14 21:21:49.303011 containerd[1514]: time="2025-07-14T21:21:49.303007144Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 14 21:21:49.307397 containerd[1514]: time="2025-07-14T21:21:49.306964613Z" level=info msg="CreateContainer within sandbox \"ddd1de317c5a12d153f2d810fd484226d2d7d22692ed33e6447a4b93bcbee2a0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 14 21:21:49.323401 containerd[1514]: time="2025-07-14T21:21:49.323291799Z" level=info msg="CreateContainer within sandbox \"ddd1de317c5a12d153f2d810fd484226d2d7d22692ed33e6447a4b93bcbee2a0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"74c51676ae35757c5999d62237b12f563dd281fae56484c26072610db3e0bf99\"" Jul 14 21:21:49.324162 containerd[1514]: time="2025-07-14T21:21:49.324048619Z" level=info msg="StartContainer for \"74c51676ae35757c5999d62237b12f563dd281fae56484c26072610db3e0bf99\"" Jul 14 21:21:49.356216 systemd[1]: Started cri-containerd-74c51676ae35757c5999d62237b12f563dd281fae56484c26072610db3e0bf99.scope - libcontainer container 74c51676ae35757c5999d62237b12f563dd281fae56484c26072610db3e0bf99. 
Jul 14 21:21:49.391359 containerd[1514]: time="2025-07-14T21:21:49.391293741Z" level=info msg="StartContainer for \"74c51676ae35757c5999d62237b12f563dd281fae56484c26072610db3e0bf99\" returns successfully" Jul 14 21:21:49.776828 kubelet[2677]: E0714 21:21:49.776779 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:49.780198 kubelet[2677]: E0714 21:21:49.780153 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:49.781250 containerd[1514]: time="2025-07-14T21:21:49.781207484Z" level=info msg="CreateContainer within sandbox \"a34ab8af695ab43519672badf29d3f4db8d9d11ab5526c243ebe261c5005bfa4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 14 21:21:49.943579 containerd[1514]: time="2025-07-14T21:21:49.943497548Z" level=info msg="CreateContainer within sandbox \"a34ab8af695ab43519672badf29d3f4db8d9d11ab5526c243ebe261c5005bfa4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"39af0d9c37c73c0bdb493c81fe5ff17fd3e69ed8b8f3b6641743e363459544a5\"" Jul 14 21:21:49.945234 containerd[1514]: time="2025-07-14T21:21:49.945193591Z" level=info msg="StartContainer for \"39af0d9c37c73c0bdb493c81fe5ff17fd3e69ed8b8f3b6641743e363459544a5\"" Jul 14 21:21:50.009161 systemd[1]: Started cri-containerd-39af0d9c37c73c0bdb493c81fe5ff17fd3e69ed8b8f3b6641743e363459544a5.scope - libcontainer container 39af0d9c37c73c0bdb493c81fe5ff17fd3e69ed8b8f3b6641743e363459544a5. Jul 14 21:21:50.062191 systemd[1]: cri-containerd-39af0d9c37c73c0bdb493c81fe5ff17fd3e69ed8b8f3b6641743e363459544a5.scope: Deactivated successfully. 
Jul 14 21:21:50.065964 containerd[1514]: time="2025-07-14T21:21:50.063724864Z" level=info msg="StartContainer for \"39af0d9c37c73c0bdb493c81fe5ff17fd3e69ed8b8f3b6641743e363459544a5\" returns successfully" Jul 14 21:21:50.094715 containerd[1514]: time="2025-07-14T21:21:50.094632245Z" level=info msg="shim disconnected" id=39af0d9c37c73c0bdb493c81fe5ff17fd3e69ed8b8f3b6641743e363459544a5 namespace=k8s.io Jul 14 21:21:50.094715 containerd[1514]: time="2025-07-14T21:21:50.094694061Z" level=warning msg="cleaning up after shim disconnected" id=39af0d9c37c73c0bdb493c81fe5ff17fd3e69ed8b8f3b6641743e363459544a5 namespace=k8s.io Jul 14 21:21:50.094715 containerd[1514]: time="2025-07-14T21:21:50.094706144Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 21:21:50.133392 containerd[1514]: time="2025-07-14T21:21:50.133325561Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:21:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 14 21:21:50.714048 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39af0d9c37c73c0bdb493c81fe5ff17fd3e69ed8b8f3b6641743e363459544a5-rootfs.mount: Deactivated successfully. 
Jul 14 21:21:50.784917 kubelet[2677]: E0714 21:21:50.784603 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:50.784917 kubelet[2677]: E0714 21:21:50.784603 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:50.786374 containerd[1514]: time="2025-07-14T21:21:50.786318739Z" level=info msg="CreateContainer within sandbox \"a34ab8af695ab43519672badf29d3f4db8d9d11ab5526c243ebe261c5005bfa4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 14 21:21:50.803050 kubelet[2677]: I0714 21:21:50.802969 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-fpctd" podStartSLOduration=2.998878939 podStartE2EDuration="23.802941489s" podCreationTimestamp="2025-07-14 21:21:27 +0000 UTC" firstStartedPulling="2025-07-14 21:21:28.500376612 +0000 UTC m=+6.459433133" lastFinishedPulling="2025-07-14 21:21:49.304439162 +0000 UTC m=+27.263495683" observedRunningTime="2025-07-14 21:21:49.963690681 +0000 UTC m=+27.922747202" watchObservedRunningTime="2025-07-14 21:21:50.802941489 +0000 UTC m=+28.761998010" Jul 14 21:21:50.807886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1669279964.mount: Deactivated successfully. 
Jul 14 21:21:50.808932 containerd[1514]: time="2025-07-14T21:21:50.808848376Z" level=info msg="CreateContainer within sandbox \"a34ab8af695ab43519672badf29d3f4db8d9d11ab5526c243ebe261c5005bfa4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a\"" Jul 14 21:21:50.809709 containerd[1514]: time="2025-07-14T21:21:50.809489610Z" level=info msg="StartContainer for \"a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a\"" Jul 14 21:21:50.849065 systemd[1]: Started cri-containerd-a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a.scope - libcontainer container a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a. Jul 14 21:21:50.891956 containerd[1514]: time="2025-07-14T21:21:50.891875223Z" level=info msg="StartContainer for \"a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a\" returns successfully" Jul 14 21:21:51.126824 kubelet[2677]: I0714 21:21:51.126785 2677 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 14 21:21:51.159318 systemd[1]: Created slice kubepods-burstable-pode3cb8b78_4c65_4dd2_b312_173cf90b937e.slice - libcontainer container kubepods-burstable-pode3cb8b78_4c65_4dd2_b312_173cf90b937e.slice. Jul 14 21:21:51.168877 systemd[1]: Created slice kubepods-burstable-pod8a16962a_ba4e_4417_b227_f8deb1da4cee.slice - libcontainer container kubepods-burstable-pod8a16962a_ba4e_4417_b227_f8deb1da4cee.slice. 
Jul 14 21:21:51.290047 kubelet[2677]: I0714 21:21:51.289972 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq5s5\" (UniqueName: \"kubernetes.io/projected/e3cb8b78-4c65-4dd2-b312-173cf90b937e-kube-api-access-dq5s5\") pod \"coredns-7c65d6cfc9-qnb82\" (UID: \"e3cb8b78-4c65-4dd2-b312-173cf90b937e\") " pod="kube-system/coredns-7c65d6cfc9-qnb82" Jul 14 21:21:51.290047 kubelet[2677]: I0714 21:21:51.290025 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj7bv\" (UniqueName: \"kubernetes.io/projected/8a16962a-ba4e-4417-b227-f8deb1da4cee-kube-api-access-hj7bv\") pod \"coredns-7c65d6cfc9-6vbsq\" (UID: \"8a16962a-ba4e-4417-b227-f8deb1da4cee\") " pod="kube-system/coredns-7c65d6cfc9-6vbsq" Jul 14 21:21:51.290047 kubelet[2677]: I0714 21:21:51.290048 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3cb8b78-4c65-4dd2-b312-173cf90b937e-config-volume\") pod \"coredns-7c65d6cfc9-qnb82\" (UID: \"e3cb8b78-4c65-4dd2-b312-173cf90b937e\") " pod="kube-system/coredns-7c65d6cfc9-qnb82" Jul 14 21:21:51.290047 kubelet[2677]: I0714 21:21:51.290065 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a16962a-ba4e-4417-b227-f8deb1da4cee-config-volume\") pod \"coredns-7c65d6cfc9-6vbsq\" (UID: \"8a16962a-ba4e-4417-b227-f8deb1da4cee\") " pod="kube-system/coredns-7c65d6cfc9-6vbsq" Jul 14 21:21:51.466179 kubelet[2677]: E0714 21:21:51.466001 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:51.468650 containerd[1514]: time="2025-07-14T21:21:51.468591166Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-qnb82,Uid:e3cb8b78-4c65-4dd2-b312-173cf90b937e,Namespace:kube-system,Attempt:0,}" Jul 14 21:21:51.472826 kubelet[2677]: E0714 21:21:51.472773 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:51.473522 containerd[1514]: time="2025-07-14T21:21:51.473484180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6vbsq,Uid:8a16962a-ba4e-4417-b227-f8deb1da4cee,Namespace:kube-system,Attempt:0,}" Jul 14 21:21:51.718317 systemd[1]: run-containerd-runc-k8s.io-a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a-runc.l9TZKK.mount: Deactivated successfully. Jul 14 21:21:51.788976 kubelet[2677]: E0714 21:21:51.788922 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:52.791258 kubelet[2677]: E0714 21:21:52.791198 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:53.198194 systemd[1]: Started sshd@10-10.0.0.72:22-10.0.0.1:49998.service - OpenSSH per-connection server daemon (10.0.0.1:49998). 
Jul 14 21:21:53.244045 systemd-networkd[1443]: cilium_host: Link UP Jul 14 21:21:53.244298 systemd-networkd[1443]: cilium_net: Link UP Jul 14 21:21:53.244620 systemd-networkd[1443]: cilium_net: Gained carrier Jul 14 21:21:53.244886 systemd-networkd[1443]: cilium_host: Gained carrier Jul 14 21:21:53.253096 sshd[3533]: Accepted publickey for core from 10.0.0.1 port 49998 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc Jul 14 21:21:53.255212 sshd-session[3533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:21:53.260808 systemd-logind[1501]: New session 11 of user core. Jul 14 21:21:53.269177 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 14 21:21:53.376799 systemd-networkd[1443]: cilium_vxlan: Link UP Jul 14 21:21:53.376814 systemd-networkd[1443]: cilium_vxlan: Gained carrier Jul 14 21:21:53.419646 sshd[3556]: Connection closed by 10.0.0.1 port 49998 Jul 14 21:21:53.420080 sshd-session[3533]: pam_unix(sshd:session): session closed for user core Jul 14 21:21:53.426225 systemd-logind[1501]: Session 11 logged out. Waiting for processes to exit. Jul 14 21:21:53.427095 systemd[1]: sshd@10-10.0.0.72:22-10.0.0.1:49998.service: Deactivated successfully. Jul 14 21:21:53.430616 systemd[1]: session-11.scope: Deactivated successfully. Jul 14 21:21:53.432146 systemd-logind[1501]: Removed session 11. 
Jul 14 21:21:53.442134 systemd-networkd[1443]: cilium_net: Gained IPv6LL Jul 14 21:21:53.442584 systemd-networkd[1443]: cilium_host: Gained IPv6LL Jul 14 21:21:53.630957 kernel: NET: Registered PF_ALG protocol family Jul 14 21:21:53.792464 kubelet[2677]: E0714 21:21:53.792416 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:54.403862 systemd-networkd[1443]: lxc_health: Link UP Jul 14 21:21:54.411580 systemd-networkd[1443]: lxc_health: Gained carrier Jul 14 21:21:54.560944 kernel: eth0: renamed from tmpda6a9 Jul 14 21:21:54.575539 systemd-networkd[1443]: lxc8dac3557a95c: Link UP Jul 14 21:21:54.577424 systemd-networkd[1443]: lxc8dac3557a95c: Gained carrier Jul 14 21:21:54.600938 kernel: eth0: renamed from tmpa29f3 Jul 14 21:21:54.605213 systemd-networkd[1443]: lxcf4a5d11c4b38: Link UP Jul 14 21:21:54.607008 systemd-networkd[1443]: lxcf4a5d11c4b38: Gained carrier Jul 14 21:21:55.314222 systemd-networkd[1443]: cilium_vxlan: Gained IPv6LL Jul 14 21:21:55.827714 systemd-networkd[1443]: lxcf4a5d11c4b38: Gained IPv6LL Jul 14 21:21:55.954094 systemd-networkd[1443]: lxc8dac3557a95c: Gained IPv6LL Jul 14 21:21:56.129974 kubelet[2677]: E0714 21:21:56.129630 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:56.150742 kubelet[2677]: I0714 21:21:56.149986 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hg56m" podStartSLOduration=12.590039577 podStartE2EDuration="29.149961049s" podCreationTimestamp="2025-07-14 21:21:27 +0000 UTC" firstStartedPulling="2025-07-14 21:21:28.281415862 +0000 UTC m=+6.240472383" lastFinishedPulling="2025-07-14 21:21:44.841337334 +0000 UTC m=+22.800393855" observedRunningTime="2025-07-14 21:21:51.862439321 +0000 UTC 
m=+29.821495852" watchObservedRunningTime="2025-07-14 21:21:56.149961049 +0000 UTC m=+34.109017580"
Jul 14 21:21:56.274191 systemd-networkd[1443]: lxc_health: Gained IPv6LL
Jul 14 21:21:56.798362 kubelet[2677]: E0714 21:21:56.798310 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:58.392212 containerd[1514]: time="2025-07-14T21:21:58.392036567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:21:58.392212 containerd[1514]: time="2025-07-14T21:21:58.392104063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:21:58.392212 containerd[1514]: time="2025-07-14T21:21:58.392117569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:21:58.392651 containerd[1514]: time="2025-07-14T21:21:58.392214591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:21:58.419131 systemd[1]: Started cri-containerd-da6a9d78799dc3dc59512bf63dde3e416bd49ef337344aa92d1c61a280b626e1.scope - libcontainer container da6a9d78799dc3dc59512bf63dde3e416bd49ef337344aa92d1c61a280b626e1.
Jul 14 21:21:58.434172 systemd[1]: Started sshd@11-10.0.0.72:22-10.0.0.1:50000.service - OpenSSH per-connection server daemon (10.0.0.1:50000).
Jul 14 21:21:58.439183 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 14 21:21:58.466958 containerd[1514]: time="2025-07-14T21:21:58.466912267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qnb82,Uid:e3cb8b78-4c65-4dd2-b312-173cf90b937e,Namespace:kube-system,Attempt:0,} returns sandbox id \"da6a9d78799dc3dc59512bf63dde3e416bd49ef337344aa92d1c61a280b626e1\""
Jul 14 21:21:58.467705 kubelet[2677]: E0714 21:21:58.467683 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:58.469435 containerd[1514]: time="2025-07-14T21:21:58.469398833Z" level=info msg="CreateContainer within sandbox \"da6a9d78799dc3dc59512bf63dde3e416bd49ef337344aa92d1c61a280b626e1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 14 21:21:58.502687 sshd[3968]: Accepted publickey for core from 10.0.0.1 port 50000 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc
Jul 14 21:21:58.504719 sshd-session[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:21:58.509633 systemd-logind[1501]: New session 12 of user core.
Jul 14 21:21:58.517036 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 14 21:21:58.573169 containerd[1514]: time="2025-07-14T21:21:58.572034103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:21:58.573169 containerd[1514]: time="2025-07-14T21:21:58.572936226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:21:58.573169 containerd[1514]: time="2025-07-14T21:21:58.572954771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:21:58.573169 containerd[1514]: time="2025-07-14T21:21:58.573070959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:21:58.598167 systemd[1]: Started cri-containerd-a29f3603073e95b791dbe5d7e04238d8ac6bc987e43fffef536a698e983cf297.scope - libcontainer container a29f3603073e95b791dbe5d7e04238d8ac6bc987e43fffef536a698e983cf297.
Jul 14 21:21:58.611657 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 14 21:21:58.643636 containerd[1514]: time="2025-07-14T21:21:58.643484537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6vbsq,Uid:8a16962a-ba4e-4417-b227-f8deb1da4cee,Namespace:kube-system,Attempt:0,} returns sandbox id \"a29f3603073e95b791dbe5d7e04238d8ac6bc987e43fffef536a698e983cf297\""
Jul 14 21:21:58.644791 kubelet[2677]: E0714 21:21:58.644738 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:58.646443 containerd[1514]: time="2025-07-14T21:21:58.646353220Z" level=info msg="CreateContainer within sandbox \"a29f3603073e95b791dbe5d7e04238d8ac6bc987e43fffef536a698e983cf297\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 14 21:21:58.657294 sshd[3977]: Connection closed by 10.0.0.1 port 50000
Jul 14 21:21:58.657786 sshd-session[3968]: pam_unix(sshd:session): session closed for user core
Jul 14 21:21:58.662535 systemd[1]: sshd@11-10.0.0.72:22-10.0.0.1:50000.service: Deactivated successfully.
Jul 14 21:21:58.665015 systemd[1]: session-12.scope: Deactivated successfully.
Jul 14 21:21:58.666145 systemd-logind[1501]: Session 12 logged out. Waiting for processes to exit.
Jul 14 21:21:58.667478 systemd-logind[1501]: Removed session 12.
Jul 14 21:21:59.396204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2373268642.mount: Deactivated successfully.
Jul 14 21:21:59.553486 containerd[1514]: time="2025-07-14T21:21:59.553412720Z" level=info msg="CreateContainer within sandbox \"da6a9d78799dc3dc59512bf63dde3e416bd49ef337344aa92d1c61a280b626e1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ec1eb60095df693cdfef975e2624f5331ea1f26d56f5f316f681eda824e62d25\""
Jul 14 21:21:59.554100 containerd[1514]: time="2025-07-14T21:21:59.554061156Z" level=info msg="StartContainer for \"ec1eb60095df693cdfef975e2624f5331ea1f26d56f5f316f681eda824e62d25\""
Jul 14 21:21:59.571094 containerd[1514]: time="2025-07-14T21:21:59.571023214Z" level=info msg="CreateContainer within sandbox \"a29f3603073e95b791dbe5d7e04238d8ac6bc987e43fffef536a698e983cf297\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8a3b60af4cc96de0f4a7f917f7ce21acc76041912668290d887925603857cb7d\""
Jul 14 21:21:59.571926 containerd[1514]: time="2025-07-14T21:21:59.571874011Z" level=info msg="StartContainer for \"8a3b60af4cc96de0f4a7f917f7ce21acc76041912668290d887925603857cb7d\""
Jul 14 21:21:59.585663 systemd[1]: run-containerd-runc-k8s.io-ec1eb60095df693cdfef975e2624f5331ea1f26d56f5f316f681eda824e62d25-runc.XWOOow.mount: Deactivated successfully.
Jul 14 21:21:59.596413 systemd[1]: Started cri-containerd-ec1eb60095df693cdfef975e2624f5331ea1f26d56f5f316f681eda824e62d25.scope - libcontainer container ec1eb60095df693cdfef975e2624f5331ea1f26d56f5f316f681eda824e62d25.
Jul 14 21:21:59.618379 systemd[1]: Started cri-containerd-8a3b60af4cc96de0f4a7f917f7ce21acc76041912668290d887925603857cb7d.scope - libcontainer container 8a3b60af4cc96de0f4a7f917f7ce21acc76041912668290d887925603857cb7d.
Jul 14 21:21:59.880598 containerd[1514]: time="2025-07-14T21:21:59.880508264Z" level=info msg="StartContainer for \"ec1eb60095df693cdfef975e2624f5331ea1f26d56f5f316f681eda824e62d25\" returns successfully"
Jul 14 21:21:59.880598 containerd[1514]: time="2025-07-14T21:21:59.880570631Z" level=info msg="StartContainer for \"8a3b60af4cc96de0f4a7f917f7ce21acc76041912668290d887925603857cb7d\" returns successfully"
Jul 14 21:21:59.884578 kubelet[2677]: E0714 21:21:59.884288 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:22:00.886795 kubelet[2677]: E0714 21:22:00.886451 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:22:00.887684 kubelet[2677]: E0714 21:22:00.887657 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:22:00.901171 kubelet[2677]: I0714 21:22:00.901045 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-qnb82" podStartSLOduration=33.901017796 podStartE2EDuration="33.901017796s" podCreationTimestamp="2025-07-14 21:21:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:22:00.125147792 +0000 UTC m=+38.084204323" watchObservedRunningTime="2025-07-14 21:22:00.901017796 +0000 UTC m=+38.860074317"
Jul 14 21:22:01.482535 kubelet[2677]: I0714 21:22:01.482122 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-6vbsq" podStartSLOduration=34.482089769 podStartE2EDuration="34.482089769s" podCreationTimestamp="2025-07-14 21:21:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:22:00.901470696 +0000 UTC m=+38.860527217" watchObservedRunningTime="2025-07-14 21:22:01.482089769 +0000 UTC m=+39.441146290"
Jul 14 21:22:01.889101 kubelet[2677]: E0714 21:22:01.889042 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:22:01.889634 kubelet[2677]: E0714 21:22:01.889196 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:22:02.891142 kubelet[2677]: E0714 21:22:02.891079 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:22:03.676156 systemd[1]: Started sshd@12-10.0.0.72:22-10.0.0.1:54662.service - OpenSSH per-connection server daemon (10.0.0.1:54662).
Jul 14 21:22:03.723113 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 54662 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc
Jul 14 21:22:03.725222 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:22:03.730976 systemd-logind[1501]: New session 13 of user core.
Jul 14 21:22:03.749252 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 14 21:22:04.352651 sshd[4124]: Connection closed by 10.0.0.1 port 54662
Jul 14 21:22:04.353125 sshd-session[4122]: pam_unix(sshd:session): session closed for user core
Jul 14 21:22:04.357826 systemd[1]: sshd@12-10.0.0.72:22-10.0.0.1:54662.service: Deactivated successfully.
Jul 14 21:22:04.360425 systemd[1]: session-13.scope: Deactivated successfully.
Jul 14 21:22:04.361387 systemd-logind[1501]: Session 13 logged out. Waiting for processes to exit.
Jul 14 21:22:04.362575 systemd-logind[1501]: Removed session 13.
Jul 14 21:22:09.387286 systemd[1]: Started sshd@13-10.0.0.72:22-10.0.0.1:44916.service - OpenSSH per-connection server daemon (10.0.0.1:44916).
Jul 14 21:22:09.423427 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 44916 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc
Jul 14 21:22:09.425516 sshd-session[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:22:09.430139 systemd-logind[1501]: New session 14 of user core.
Jul 14 21:22:09.440279 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 14 21:22:09.558078 sshd[4141]: Connection closed by 10.0.0.1 port 44916
Jul 14 21:22:09.558566 sshd-session[4139]: pam_unix(sshd:session): session closed for user core
Jul 14 21:22:09.570592 systemd[1]: sshd@13-10.0.0.72:22-10.0.0.1:44916.service: Deactivated successfully.
Jul 14 21:22:09.572983 systemd[1]: session-14.scope: Deactivated successfully.
Jul 14 21:22:09.574613 systemd-logind[1501]: Session 14 logged out. Waiting for processes to exit.
Jul 14 21:22:09.585493 systemd[1]: Started sshd@14-10.0.0.72:22-10.0.0.1:44918.service - OpenSSH per-connection server daemon (10.0.0.1:44918).
Jul 14 21:22:09.586714 systemd-logind[1501]: Removed session 14.
Jul 14 21:22:09.629932 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 44918 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc
Jul 14 21:22:09.631842 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:22:09.637259 systemd-logind[1501]: New session 15 of user core.
Jul 14 21:22:09.647057 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 14 21:22:09.805669 sshd[4157]: Connection closed by 10.0.0.1 port 44918
Jul 14 21:22:09.805620 sshd-session[4154]: pam_unix(sshd:session): session closed for user core
Jul 14 21:22:09.817770 systemd[1]: sshd@14-10.0.0.72:22-10.0.0.1:44918.service: Deactivated successfully.
Jul 14 21:22:09.821510 systemd[1]: session-15.scope: Deactivated successfully.
Jul 14 21:22:09.825332 systemd-logind[1501]: Session 15 logged out. Waiting for processes to exit.
Jul 14 21:22:09.833469 systemd[1]: Started sshd@15-10.0.0.72:22-10.0.0.1:44922.service - OpenSSH per-connection server daemon (10.0.0.1:44922).
Jul 14 21:22:09.835052 systemd-logind[1501]: Removed session 15.
Jul 14 21:22:09.878280 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 44922 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc
Jul 14 21:22:09.880349 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:22:09.885871 systemd-logind[1501]: New session 16 of user core.
Jul 14 21:22:09.895218 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 14 21:22:10.024426 sshd[4171]: Connection closed by 10.0.0.1 port 44922
Jul 14 21:22:10.024849 sshd-session[4168]: pam_unix(sshd:session): session closed for user core
Jul 14 21:22:10.029335 systemd[1]: sshd@15-10.0.0.72:22-10.0.0.1:44922.service: Deactivated successfully.
Jul 14 21:22:10.031560 systemd[1]: session-16.scope: Deactivated successfully.
Jul 14 21:22:10.032403 systemd-logind[1501]: Session 16 logged out. Waiting for processes to exit.
Jul 14 21:22:10.033484 systemd-logind[1501]: Removed session 16.
Jul 14 21:22:15.038690 systemd[1]: Started sshd@16-10.0.0.72:22-10.0.0.1:44928.service - OpenSSH per-connection server daemon (10.0.0.1:44928).
Jul 14 21:22:15.079485 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 44928 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc
Jul 14 21:22:15.081703 sshd-session[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:22:15.086621 systemd-logind[1501]: New session 17 of user core.
Jul 14 21:22:15.098240 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 14 21:22:15.216849 sshd[4188]: Connection closed by 10.0.0.1 port 44928
Jul 14 21:22:15.217346 sshd-session[4186]: pam_unix(sshd:session): session closed for user core
Jul 14 21:22:15.222220 systemd[1]: sshd@16-10.0.0.72:22-10.0.0.1:44928.service: Deactivated successfully.
Jul 14 21:22:15.224531 systemd[1]: session-17.scope: Deactivated successfully.
Jul 14 21:22:15.225333 systemd-logind[1501]: Session 17 logged out. Waiting for processes to exit.
Jul 14 21:22:15.226619 systemd-logind[1501]: Removed session 17.
Jul 14 21:22:20.246239 systemd[1]: Started sshd@17-10.0.0.72:22-10.0.0.1:60240.service - OpenSSH per-connection server daemon (10.0.0.1:60240).
Jul 14 21:22:20.282169 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 60240 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc
Jul 14 21:22:20.284042 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:22:20.289204 systemd-logind[1501]: New session 18 of user core.
Jul 14 21:22:20.300267 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 14 21:22:20.422281 sshd[4203]: Connection closed by 10.0.0.1 port 60240
Jul 14 21:22:20.422816 sshd-session[4201]: pam_unix(sshd:session): session closed for user core
Jul 14 21:22:20.433622 systemd[1]: sshd@17-10.0.0.72:22-10.0.0.1:60240.service: Deactivated successfully.
Jul 14 21:22:20.436131 systemd[1]: session-18.scope: Deactivated successfully.
Jul 14 21:22:20.438147 systemd-logind[1501]: Session 18 logged out. Waiting for processes to exit.
Jul 14 21:22:20.449330 systemd[1]: Started sshd@18-10.0.0.72:22-10.0.0.1:60242.service - OpenSSH per-connection server daemon (10.0.0.1:60242).
Jul 14 21:22:20.450699 systemd-logind[1501]: Removed session 18.
Jul 14 21:22:20.490970 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 60242 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc
Jul 14 21:22:20.492975 sshd-session[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:22:20.498679 systemd-logind[1501]: New session 19 of user core.
Jul 14 21:22:20.511209 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 14 21:22:20.786971 sshd[4219]: Connection closed by 10.0.0.1 port 60242
Jul 14 21:22:20.787674 sshd-session[4216]: pam_unix(sshd:session): session closed for user core
Jul 14 21:22:20.799960 systemd[1]: sshd@18-10.0.0.72:22-10.0.0.1:60242.service: Deactivated successfully.
Jul 14 21:22:20.802697 systemd[1]: session-19.scope: Deactivated successfully.
Jul 14 21:22:20.804758 systemd-logind[1501]: Session 19 logged out. Waiting for processes to exit.
Jul 14 21:22:20.815577 systemd[1]: Started sshd@19-10.0.0.72:22-10.0.0.1:60246.service - OpenSSH per-connection server daemon (10.0.0.1:60246).
Jul 14 21:22:20.817065 systemd-logind[1501]: Removed session 19.
Jul 14 21:22:20.857983 sshd[4230]: Accepted publickey for core from 10.0.0.1 port 60246 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc
Jul 14 21:22:20.859951 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:22:20.865820 systemd-logind[1501]: New session 20 of user core.
Jul 14 21:22:20.872057 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 14 21:22:22.790734 sshd[4233]: Connection closed by 10.0.0.1 port 60246
Jul 14 21:22:22.791472 sshd-session[4230]: pam_unix(sshd:session): session closed for user core
Jul 14 21:22:22.802751 systemd[1]: sshd@19-10.0.0.72:22-10.0.0.1:60246.service: Deactivated successfully.
Jul 14 21:22:22.806122 systemd[1]: session-20.scope: Deactivated successfully.
Jul 14 21:22:22.809800 systemd-logind[1501]: Session 20 logged out. Waiting for processes to exit.
Jul 14 21:22:22.817444 systemd[1]: Started sshd@20-10.0.0.72:22-10.0.0.1:60250.service - OpenSSH per-connection server daemon (10.0.0.1:60250).
Jul 14 21:22:22.819506 systemd-logind[1501]: Removed session 20.
Jul 14 21:22:22.859542 sshd[4256]: Accepted publickey for core from 10.0.0.1 port 60250 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc
Jul 14 21:22:22.861475 sshd-session[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:22:22.866732 systemd-logind[1501]: New session 21 of user core.
Jul 14 21:22:22.874109 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 14 21:22:23.138068 sshd[4259]: Connection closed by 10.0.0.1 port 60250
Jul 14 21:22:23.138943 sshd-session[4256]: pam_unix(sshd:session): session closed for user core
Jul 14 21:22:23.154967 systemd[1]: sshd@20-10.0.0.72:22-10.0.0.1:60250.service: Deactivated successfully.
Jul 14 21:22:23.157478 systemd[1]: session-21.scope: Deactivated successfully.
Jul 14 21:22:23.158425 systemd-logind[1501]: Session 21 logged out. Waiting for processes to exit.
Jul 14 21:22:23.166448 systemd[1]: Started sshd@21-10.0.0.72:22-10.0.0.1:60252.service - OpenSSH per-connection server daemon (10.0.0.1:60252).
Jul 14 21:22:23.167298 systemd-logind[1501]: Removed session 21.
Jul 14 21:22:23.205643 sshd[4269]: Accepted publickey for core from 10.0.0.1 port 60252 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc
Jul 14 21:22:23.207807 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:22:23.214449 systemd-logind[1501]: New session 22 of user core.
Jul 14 21:22:23.222135 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 14 21:22:23.346012 sshd[4272]: Connection closed by 10.0.0.1 port 60252
Jul 14 21:22:23.346455 sshd-session[4269]: pam_unix(sshd:session): session closed for user core
Jul 14 21:22:23.352511 systemd[1]: sshd@21-10.0.0.72:22-10.0.0.1:60252.service: Deactivated successfully.
Jul 14 21:22:23.355922 systemd[1]: session-22.scope: Deactivated successfully.
Jul 14 21:22:23.356974 systemd-logind[1501]: Session 22 logged out. Waiting for processes to exit.
Jul 14 21:22:23.358120 systemd-logind[1501]: Removed session 22.
Jul 14 21:22:28.359833 systemd[1]: Started sshd@22-10.0.0.72:22-10.0.0.1:60262.service - OpenSSH per-connection server daemon (10.0.0.1:60262).
Jul 14 21:22:28.405552 sshd[4285]: Accepted publickey for core from 10.0.0.1 port 60262 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc
Jul 14 21:22:28.407965 sshd-session[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:22:28.413256 systemd-logind[1501]: New session 23 of user core.
Jul 14 21:22:28.424242 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 14 21:22:28.553080 sshd[4288]: Connection closed by 10.0.0.1 port 60262
Jul 14 21:22:28.553524 sshd-session[4285]: pam_unix(sshd:session): session closed for user core
Jul 14 21:22:28.558099 systemd[1]: sshd@22-10.0.0.72:22-10.0.0.1:60262.service: Deactivated successfully.
Jul 14 21:22:28.560726 systemd[1]: session-23.scope: Deactivated successfully.
Jul 14 21:22:28.561614 systemd-logind[1501]: Session 23 logged out. Waiting for processes to exit.
Jul 14 21:22:28.562730 systemd-logind[1501]: Removed session 23.
Jul 14 21:22:33.571061 systemd[1]: Started sshd@23-10.0.0.72:22-10.0.0.1:49112.service - OpenSSH per-connection server daemon (10.0.0.1:49112).
Jul 14 21:22:33.610797 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 49112 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc
Jul 14 21:22:33.612732 sshd-session[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:22:33.617822 systemd-logind[1501]: New session 24 of user core.
Jul 14 21:22:33.628226 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 14 21:22:33.754423 sshd[4308]: Connection closed by 10.0.0.1 port 49112
Jul 14 21:22:33.754918 sshd-session[4306]: pam_unix(sshd:session): session closed for user core
Jul 14 21:22:33.759802 systemd[1]: sshd@23-10.0.0.72:22-10.0.0.1:49112.service: Deactivated successfully.
Jul 14 21:22:33.762845 systemd[1]: session-24.scope: Deactivated successfully.
Jul 14 21:22:33.764558 systemd-logind[1501]: Session 24 logged out. Waiting for processes to exit.
Jul 14 21:22:33.766084 systemd-logind[1501]: Removed session 24.
Jul 14 21:22:35.132290 kubelet[2677]: E0714 21:22:35.132236 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:22:38.769064 systemd[1]: Started sshd@24-10.0.0.72:22-10.0.0.1:49124.service - OpenSSH per-connection server daemon (10.0.0.1:49124).
Jul 14 21:22:38.813141 sshd[4321]: Accepted publickey for core from 10.0.0.1 port 49124 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc
Jul 14 21:22:38.815617 sshd-session[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:22:38.820964 systemd-logind[1501]: New session 25 of user core.
Jul 14 21:22:38.828112 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 14 21:22:38.951448 sshd[4323]: Connection closed by 10.0.0.1 port 49124
Jul 14 21:22:38.951876 sshd-session[4321]: pam_unix(sshd:session): session closed for user core
Jul 14 21:22:38.956181 systemd[1]: sshd@24-10.0.0.72:22-10.0.0.1:49124.service: Deactivated successfully.
Jul 14 21:22:38.958642 systemd[1]: session-25.scope: Deactivated successfully.
Jul 14 21:22:38.959677 systemd-logind[1501]: Session 25 logged out. Waiting for processes to exit.
Jul 14 21:22:38.960754 systemd-logind[1501]: Removed session 25.
Jul 14 21:22:43.966958 systemd[1]: Started sshd@25-10.0.0.72:22-10.0.0.1:38494.service - OpenSSH per-connection server daemon (10.0.0.1:38494).
Jul 14 21:22:44.009044 sshd[4337]: Accepted publickey for core from 10.0.0.1 port 38494 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc
Jul 14 21:22:44.010745 sshd-session[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:22:44.015576 systemd-logind[1501]: New session 26 of user core.
Jul 14 21:22:44.025189 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 14 21:22:44.143167 sshd[4339]: Connection closed by 10.0.0.1 port 38494
Jul 14 21:22:44.143582 sshd-session[4337]: pam_unix(sshd:session): session closed for user core
Jul 14 21:22:44.164150 systemd[1]: sshd@25-10.0.0.72:22-10.0.0.1:38494.service: Deactivated successfully.
Jul 14 21:22:44.166438 systemd[1]: session-26.scope: Deactivated successfully.
Jul 14 21:22:44.169106 systemd-logind[1501]: Session 26 logged out. Waiting for processes to exit.
Jul 14 21:22:44.185321 systemd[1]: Started sshd@26-10.0.0.72:22-10.0.0.1:38502.service - OpenSSH per-connection server daemon (10.0.0.1:38502).
Jul 14 21:22:44.186464 systemd-logind[1501]: Removed session 26.
Jul 14 21:22:44.226989 sshd[4351]: Accepted publickey for core from 10.0.0.1 port 38502 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc
Jul 14 21:22:44.229110 sshd-session[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:22:44.234672 systemd-logind[1501]: New session 27 of user core.
Jul 14 21:22:44.249186 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 14 21:22:45.602652 containerd[1514]: time="2025-07-14T21:22:45.602598335Z" level=info msg="StopContainer for \"74c51676ae35757c5999d62237b12f563dd281fae56484c26072610db3e0bf99\" with timeout 30 (s)"
Jul 14 21:22:45.607764 containerd[1514]: time="2025-07-14T21:22:45.607371058Z" level=info msg="Stop container \"74c51676ae35757c5999d62237b12f563dd281fae56484c26072610db3e0bf99\" with signal terminated"
Jul 14 21:22:45.621887 systemd[1]: cri-containerd-74c51676ae35757c5999d62237b12f563dd281fae56484c26072610db3e0bf99.scope: Deactivated successfully.
Jul 14 21:22:45.633917 containerd[1514]: time="2025-07-14T21:22:45.633798071Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 14 21:22:45.636680 containerd[1514]: time="2025-07-14T21:22:45.636647481Z" level=info msg="StopContainer for \"a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a\" with timeout 2 (s)"
Jul 14 21:22:45.637090 containerd[1514]: time="2025-07-14T21:22:45.637049545Z" level=info msg="Stop container \"a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a\" with signal terminated"
Jul 14 21:22:45.645329 systemd-networkd[1443]: lxc_health: Link DOWN
Jul 14 21:22:45.645525 systemd-networkd[1443]: lxc_health: Lost carrier
Jul 14 21:22:45.653869 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74c51676ae35757c5999d62237b12f563dd281fae56484c26072610db3e0bf99-rootfs.mount: Deactivated successfully.
Jul 14 21:22:45.666062 containerd[1514]: time="2025-07-14T21:22:45.665962848Z" level=info msg="shim disconnected" id=74c51676ae35757c5999d62237b12f563dd281fae56484c26072610db3e0bf99 namespace=k8s.io
Jul 14 21:22:45.666062 containerd[1514]: time="2025-07-14T21:22:45.666035867Z" level=warning msg="cleaning up after shim disconnected" id=74c51676ae35757c5999d62237b12f563dd281fae56484c26072610db3e0bf99 namespace=k8s.io
Jul 14 21:22:45.666062 containerd[1514]: time="2025-07-14T21:22:45.666047720Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 21:22:45.666444 systemd[1]: cri-containerd-a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a.scope: Deactivated successfully.
Jul 14 21:22:45.666999 systemd[1]: cri-containerd-a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a.scope: Consumed 7.687s CPU time, 127.4M memory peak, 236K read from disk, 13.3M written to disk.
Jul 14 21:22:45.687618 containerd[1514]: time="2025-07-14T21:22:45.687441294Z" level=info msg="StopContainer for \"74c51676ae35757c5999d62237b12f563dd281fae56484c26072610db3e0bf99\" returns successfully"
Jul 14 21:22:45.694619 containerd[1514]: time="2025-07-14T21:22:45.694562006Z" level=info msg="StopPodSandbox for \"ddd1de317c5a12d153f2d810fd484226d2d7d22692ed33e6447a4b93bcbee2a0\""
Jul 14 21:22:45.695411 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a-rootfs.mount: Deactivated successfully.
Jul 14 21:22:45.702015 containerd[1514]: time="2025-07-14T21:22:45.701930688Z" level=info msg="shim disconnected" id=a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a namespace=k8s.io
Jul 14 21:22:45.702015 containerd[1514]: time="2025-07-14T21:22:45.702002996Z" level=warning msg="cleaning up after shim disconnected" id=a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a namespace=k8s.io
Jul 14 21:22:45.702015 containerd[1514]: time="2025-07-14T21:22:45.702015830Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 21:22:45.714666 containerd[1514]: time="2025-07-14T21:22:45.694636106Z" level=info msg="Container to stop \"74c51676ae35757c5999d62237b12f563dd281fae56484c26072610db3e0bf99\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 21:22:45.717401 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ddd1de317c5a12d153f2d810fd484226d2d7d22692ed33e6447a4b93bcbee2a0-shm.mount: Deactivated successfully.
Jul 14 21:22:45.722481 containerd[1514]: time="2025-07-14T21:22:45.722426688Z" level=info msg="StopContainer for \"a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a\" returns successfully"
Jul 14 21:22:45.723959 containerd[1514]: time="2025-07-14T21:22:45.723259811Z" level=info msg="StopPodSandbox for \"a34ab8af695ab43519672badf29d3f4db8d9d11ab5526c243ebe261c5005bfa4\""
Jul 14 21:22:45.723959 containerd[1514]: time="2025-07-14T21:22:45.723310797Z" level=info msg="Container to stop \"58496c6932182f9aad17a2f097d3d93b1592c1085ac11279340f50c778b99972\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 21:22:45.723959 containerd[1514]: time="2025-07-14T21:22:45.723352877Z" level=info msg="Container to stop \"39af0d9c37c73c0bdb493c81fe5ff17fd3e69ed8b8f3b6641743e363459544a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 21:22:45.723959 containerd[1514]: time="2025-07-14T21:22:45.723364229Z" level=info msg="Container to stop \"a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 21:22:45.723959 containerd[1514]: time="2025-07-14T21:22:45.723378906Z" level=info msg="Container to stop \"95b1ffb0cc038c4350c1b06ea9b8395de54ed7c0cad0b4167ee8561e961a0ead\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 21:22:45.723959 containerd[1514]: time="2025-07-14T21:22:45.723390579Z" level=info msg="Container to stop \"bb630bf170c601c7c9bb0d1f8a23c7cb198eca4d7006e6d00da2df443be16d4a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 21:22:45.723543 systemd[1]: cri-containerd-ddd1de317c5a12d153f2d810fd484226d2d7d22692ed33e6447a4b93bcbee2a0.scope: Deactivated successfully.
Jul 14 21:22:45.727219 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a34ab8af695ab43519672badf29d3f4db8d9d11ab5526c243ebe261c5005bfa4-shm.mount: Deactivated successfully.
Jul 14 21:22:45.738456 systemd[1]: cri-containerd-a34ab8af695ab43519672badf29d3f4db8d9d11ab5526c243ebe261c5005bfa4.scope: Deactivated successfully.
Jul 14 21:22:45.757562 containerd[1514]: time="2025-07-14T21:22:45.757465144Z" level=info msg="shim disconnected" id=ddd1de317c5a12d153f2d810fd484226d2d7d22692ed33e6447a4b93bcbee2a0 namespace=k8s.io
Jul 14 21:22:45.758347 containerd[1514]: time="2025-07-14T21:22:45.758318755Z" level=warning msg="cleaning up after shim disconnected" id=ddd1de317c5a12d153f2d810fd484226d2d7d22692ed33e6447a4b93bcbee2a0 namespace=k8s.io
Jul 14 21:22:45.758447 containerd[1514]: time="2025-07-14T21:22:45.758426719Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 21:22:45.766830 containerd[1514]: time="2025-07-14T21:22:45.766743041Z" level=info msg="shim disconnected" id=a34ab8af695ab43519672badf29d3f4db8d9d11ab5526c243ebe261c5005bfa4 namespace=k8s.io
Jul 14 21:22:45.766830 containerd[1514]: time="2025-07-14T21:22:45.766813415Z" level=warning msg="cleaning up after shim disconnected" id=a34ab8af695ab43519672badf29d3f4db8d9d11ab5526c243ebe261c5005bfa4 namespace=k8s.io
Jul 14 21:22:45.766830 containerd[1514]: time="2025-07-14T21:22:45.766822592Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 21:22:45.776997 containerd[1514]: time="2025-07-14T21:22:45.776950314Z" level=info msg="TearDown network for sandbox \"ddd1de317c5a12d153f2d810fd484226d2d7d22692ed33e6447a4b93bcbee2a0\" successfully"
Jul 14 21:22:45.777162 containerd[1514]: time="2025-07-14T21:22:45.777146206Z" level=info msg="StopPodSandbox for \"ddd1de317c5a12d153f2d810fd484226d2d7d22692ed33e6447a4b93bcbee2a0\" returns successfully"
Jul 14 21:22:45.786737 containerd[1514]: time="2025-07-14T21:22:45.786668909Z" level=info msg="TearDown network for sandbox \"a34ab8af695ab43519672badf29d3f4db8d9d11ab5526c243ebe261c5005bfa4\" successfully"
Jul 14 21:22:45.786737 containerd[1514]: time="2025-07-14T21:22:45.786707201Z" level=info msg="StopPodSandbox for \"a34ab8af695ab43519672badf29d3f4db8d9d11ab5526c243ebe261c5005bfa4\" returns successfully"
Jul 14 21:22:45.949427 kubelet[2677]: I0714 21:22:45.949245 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-host-proc-sys-kernel\") pod \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") "
Jul 14 21:22:45.949427 kubelet[2677]: I0714 21:22:45.949300 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-etc-cni-netd\") pod \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") "
Jul 14 21:22:45.949427 kubelet[2677]: I0714 21:22:45.949321 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-xtables-lock\") pod \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") "
Jul 14 21:22:45.949427 kubelet[2677]: I0714 21:22:45.949334 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-host-proc-sys-net\") pod \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") "
Jul 14 21:22:45.949427 kubelet[2677]: I0714 21:22:45.949358 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8v8f\" (UniqueName: \"kubernetes.io/projected/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-kube-api-access-z8v8f\") pod \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") "
Jul 14 21:22:45.949427 kubelet[2677]: I0714 21:22:45.949375 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-hubble-tls\") pod \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") "
Jul 14 21:22:45.951074 kubelet[2677]: I0714 21:22:45.949394 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-lib-modules\") pod \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") "
Jul 14 21:22:45.951074 kubelet[2677]: I0714 21:22:45.949411 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7nql\" (UniqueName: \"kubernetes.io/projected/746e776b-60c4-4008-ad9e-ba1cfe05381d-kube-api-access-g7nql\") pod \"746e776b-60c4-4008-ad9e-ba1cfe05381d\" (UID: \"746e776b-60c4-4008-ad9e-ba1cfe05381d\") "
Jul 14 21:22:45.951074 kubelet[2677]: I0714 21:22:45.949404 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5" (UID: "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 14 21:22:45.951074 kubelet[2677]: I0714 21:22:45.949426 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5" (UID: "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 14 21:22:45.951074 kubelet[2677]: I0714 21:22:45.949480 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5" (UID: "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 14 21:22:45.951262 kubelet[2677]: I0714 21:22:45.949511 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5" (UID: "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 14 21:22:45.951262 kubelet[2677]: I0714 21:22:45.949425 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-cilium-cgroup\") pod \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") "
Jul 14 21:22:45.951262 kubelet[2677]: I0714 21:22:45.949572 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-hostproc\") pod \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") "
Jul 14 21:22:45.951262 kubelet[2677]: I0714 21:22:45.949598 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-clustermesh-secrets\") pod \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") "
Jul 14 21:22:45.951262
kubelet[2677]: I0714 21:22:45.949617 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/746e776b-60c4-4008-ad9e-ba1cfe05381d-cilium-config-path\") pod \"746e776b-60c4-4008-ad9e-ba1cfe05381d\" (UID: \"746e776b-60c4-4008-ad9e-ba1cfe05381d\") " Jul 14 21:22:45.951425 kubelet[2677]: I0714 21:22:45.949798 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5" (UID: "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 21:22:45.951425 kubelet[2677]: I0714 21:22:45.949632 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-cilium-run\") pod \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") " Jul 14 21:22:45.951425 kubelet[2677]: I0714 21:22:45.950278 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-cni-path\") pod \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") " Jul 14 21:22:45.951425 kubelet[2677]: I0714 21:22:45.950340 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-bpf-maps\") pod \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") " Jul 14 21:22:45.951425 kubelet[2677]: I0714 21:22:45.950370 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-cilium-config-path\") pod \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\" (UID: \"6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5\") " Jul 14 21:22:45.951425 kubelet[2677]: I0714 21:22:45.950452 2677 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 14 21:22:45.951633 kubelet[2677]: I0714 21:22:45.950509 2677 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 14 21:22:45.951633 kubelet[2677]: I0714 21:22:45.950523 2677 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 14 21:22:45.951633 kubelet[2677]: I0714 21:22:45.950537 2677 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 14 21:22:45.951633 kubelet[2677]: I0714 21:22:45.950578 2677 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 14 21:22:45.952153 kubelet[2677]: I0714 21:22:45.952114 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5" (UID: "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 21:22:45.954531 kubelet[2677]: I0714 21:22:45.954285 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-hostproc" (OuterVolumeSpecName: "hostproc") pod "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5" (UID: "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 21:22:45.954631 kubelet[2677]: I0714 21:22:45.954519 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5" (UID: "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 21:22:45.954631 kubelet[2677]: I0714 21:22:45.954579 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-cni-path" (OuterVolumeSpecName: "cni-path") pod "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5" (UID: "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 21:22:45.954631 kubelet[2677]: I0714 21:22:45.954604 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5" (UID: "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 21:22:45.955642 kubelet[2677]: I0714 21:22:45.955553 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/746e776b-60c4-4008-ad9e-ba1cfe05381d-kube-api-access-g7nql" (OuterVolumeSpecName: "kube-api-access-g7nql") pod "746e776b-60c4-4008-ad9e-ba1cfe05381d" (UID: "746e776b-60c4-4008-ad9e-ba1cfe05381d"). InnerVolumeSpecName "kube-api-access-g7nql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 21:22:45.955642 kubelet[2677]: I0714 21:22:45.955570 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-kube-api-access-z8v8f" (OuterVolumeSpecName: "kube-api-access-z8v8f") pod "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5" (UID: "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5"). InnerVolumeSpecName "kube-api-access-z8v8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 21:22:45.957760 kubelet[2677]: I0714 21:22:45.957675 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5" (UID: "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 21:22:45.957951 kubelet[2677]: I0714 21:22:45.957922 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5" (UID: "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 14 21:22:45.958093 kubelet[2677]: I0714 21:22:45.958064 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/746e776b-60c4-4008-ad9e-ba1cfe05381d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "746e776b-60c4-4008-ad9e-ba1cfe05381d" (UID: "746e776b-60c4-4008-ad9e-ba1cfe05381d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 14 21:22:45.959447 kubelet[2677]: I0714 21:22:45.959416 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5" (UID: "6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 14 21:22:45.987037 kubelet[2677]: I0714 21:22:45.987004 2677 scope.go:117] "RemoveContainer" containerID="a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a" Jul 14 21:22:45.993306 containerd[1514]: time="2025-07-14T21:22:45.993267151Z" level=info msg="RemoveContainer for \"a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a\"" Jul 14 21:22:45.994540 systemd[1]: Removed slice kubepods-burstable-pod6eb46e6c_a908_4286_9cd2_b7f9d9c52ed5.slice - libcontainer container kubepods-burstable-pod6eb46e6c_a908_4286_9cd2_b7f9d9c52ed5.slice. Jul 14 21:22:45.994654 systemd[1]: kubepods-burstable-pod6eb46e6c_a908_4286_9cd2_b7f9d9c52ed5.slice: Consumed 7.811s CPU time, 127.7M memory peak, 264K read from disk, 13.3M written to disk. Jul 14 21:22:45.998621 systemd[1]: Removed slice kubepods-besteffort-pod746e776b_60c4_4008_ad9e_ba1cfe05381d.slice - libcontainer container kubepods-besteffort-pod746e776b_60c4_4008_ad9e_ba1cfe05381d.slice. 
Jul 14 21:22:46.001548 containerd[1514]: time="2025-07-14T21:22:46.001517517Z" level=info msg="RemoveContainer for \"a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a\" returns successfully" Jul 14 21:22:46.001832 kubelet[2677]: I0714 21:22:46.001807 2677 scope.go:117] "RemoveContainer" containerID="39af0d9c37c73c0bdb493c81fe5ff17fd3e69ed8b8f3b6641743e363459544a5" Jul 14 21:22:46.004275 containerd[1514]: time="2025-07-14T21:22:46.004234125Z" level=info msg="RemoveContainer for \"39af0d9c37c73c0bdb493c81fe5ff17fd3e69ed8b8f3b6641743e363459544a5\"" Jul 14 21:22:46.008025 containerd[1514]: time="2025-07-14T21:22:46.007994102Z" level=info msg="RemoveContainer for \"39af0d9c37c73c0bdb493c81fe5ff17fd3e69ed8b8f3b6641743e363459544a5\" returns successfully" Jul 14 21:22:46.008164 kubelet[2677]: I0714 21:22:46.008143 2677 scope.go:117] "RemoveContainer" containerID="58496c6932182f9aad17a2f097d3d93b1592c1085ac11279340f50c778b99972" Jul 14 21:22:46.009504 containerd[1514]: time="2025-07-14T21:22:46.009453262Z" level=info msg="RemoveContainer for \"58496c6932182f9aad17a2f097d3d93b1592c1085ac11279340f50c778b99972\"" Jul 14 21:22:46.013355 containerd[1514]: time="2025-07-14T21:22:46.013260130Z" level=info msg="RemoveContainer for \"58496c6932182f9aad17a2f097d3d93b1592c1085ac11279340f50c778b99972\" returns successfully" Jul 14 21:22:46.013528 kubelet[2677]: I0714 21:22:46.013462 2677 scope.go:117] "RemoveContainer" containerID="bb630bf170c601c7c9bb0d1f8a23c7cb198eca4d7006e6d00da2df443be16d4a" Jul 14 21:22:46.014564 containerd[1514]: time="2025-07-14T21:22:46.014521524Z" level=info msg="RemoveContainer for \"bb630bf170c601c7c9bb0d1f8a23c7cb198eca4d7006e6d00da2df443be16d4a\"" Jul 14 21:22:46.020052 containerd[1514]: time="2025-07-14T21:22:46.019842425Z" level=info msg="RemoveContainer for \"bb630bf170c601c7c9bb0d1f8a23c7cb198eca4d7006e6d00da2df443be16d4a\" returns successfully" Jul 14 21:22:46.020211 kubelet[2677]: I0714 21:22:46.020081 2677 scope.go:117] 
"RemoveContainer" containerID="95b1ffb0cc038c4350c1b06ea9b8395de54ed7c0cad0b4167ee8561e961a0ead" Jul 14 21:22:46.021442 containerd[1514]: time="2025-07-14T21:22:46.021415542Z" level=info msg="RemoveContainer for \"95b1ffb0cc038c4350c1b06ea9b8395de54ed7c0cad0b4167ee8561e961a0ead\"" Jul 14 21:22:46.024675 containerd[1514]: time="2025-07-14T21:22:46.024645393Z" level=info msg="RemoveContainer for \"95b1ffb0cc038c4350c1b06ea9b8395de54ed7c0cad0b4167ee8561e961a0ead\" returns successfully" Jul 14 21:22:46.024920 kubelet[2677]: I0714 21:22:46.024868 2677 scope.go:117] "RemoveContainer" containerID="a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a" Jul 14 21:22:46.025268 containerd[1514]: time="2025-07-14T21:22:46.025213421Z" level=error msg="ContainerStatus for \"a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a\": not found" Jul 14 21:22:46.033358 kubelet[2677]: E0714 21:22:46.033314 2677 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a\": not found" containerID="a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a" Jul 14 21:22:46.033446 kubelet[2677]: I0714 21:22:46.033354 2677 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a"} err="failed to get container status \"a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a\": rpc error: code = NotFound desc = an error occurred when try to find container \"a94c9d4a2ee67bd71683000c88eb757220a1981203d21e870828d3cc29d5182a\": not found" Jul 14 21:22:46.033479 kubelet[2677]: I0714 21:22:46.033447 2677 scope.go:117] "RemoveContainer" 
containerID="39af0d9c37c73c0bdb493c81fe5ff17fd3e69ed8b8f3b6641743e363459544a5" Jul 14 21:22:46.033700 containerd[1514]: time="2025-07-14T21:22:46.033649036Z" level=error msg="ContainerStatus for \"39af0d9c37c73c0bdb493c81fe5ff17fd3e69ed8b8f3b6641743e363459544a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"39af0d9c37c73c0bdb493c81fe5ff17fd3e69ed8b8f3b6641743e363459544a5\": not found" Jul 14 21:22:46.033793 kubelet[2677]: E0714 21:22:46.033767 2677 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"39af0d9c37c73c0bdb493c81fe5ff17fd3e69ed8b8f3b6641743e363459544a5\": not found" containerID="39af0d9c37c73c0bdb493c81fe5ff17fd3e69ed8b8f3b6641743e363459544a5" Jul 14 21:22:46.033793 kubelet[2677]: I0714 21:22:46.033788 2677 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"39af0d9c37c73c0bdb493c81fe5ff17fd3e69ed8b8f3b6641743e363459544a5"} err="failed to get container status \"39af0d9c37c73c0bdb493c81fe5ff17fd3e69ed8b8f3b6641743e363459544a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"39af0d9c37c73c0bdb493c81fe5ff17fd3e69ed8b8f3b6641743e363459544a5\": not found" Jul 14 21:22:46.033879 kubelet[2677]: I0714 21:22:46.033801 2677 scope.go:117] "RemoveContainer" containerID="58496c6932182f9aad17a2f097d3d93b1592c1085ac11279340f50c778b99972" Jul 14 21:22:46.033984 containerd[1514]: time="2025-07-14T21:22:46.033957090Z" level=error msg="ContainerStatus for \"58496c6932182f9aad17a2f097d3d93b1592c1085ac11279340f50c778b99972\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"58496c6932182f9aad17a2f097d3d93b1592c1085ac11279340f50c778b99972\": not found" Jul 14 21:22:46.034186 kubelet[2677]: E0714 21:22:46.034163 2677 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"58496c6932182f9aad17a2f097d3d93b1592c1085ac11279340f50c778b99972\": not found" containerID="58496c6932182f9aad17a2f097d3d93b1592c1085ac11279340f50c778b99972" Jul 14 21:22:46.034186 kubelet[2677]: I0714 21:22:46.034183 2677 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"58496c6932182f9aad17a2f097d3d93b1592c1085ac11279340f50c778b99972"} err="failed to get container status \"58496c6932182f9aad17a2f097d3d93b1592c1085ac11279340f50c778b99972\": rpc error: code = NotFound desc = an error occurred when try to find container \"58496c6932182f9aad17a2f097d3d93b1592c1085ac11279340f50c778b99972\": not found" Jul 14 21:22:46.034278 kubelet[2677]: I0714 21:22:46.034195 2677 scope.go:117] "RemoveContainer" containerID="bb630bf170c601c7c9bb0d1f8a23c7cb198eca4d7006e6d00da2df443be16d4a" Jul 14 21:22:46.034377 containerd[1514]: time="2025-07-14T21:22:46.034332784Z" level=error msg="ContainerStatus for \"bb630bf170c601c7c9bb0d1f8a23c7cb198eca4d7006e6d00da2df443be16d4a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb630bf170c601c7c9bb0d1f8a23c7cb198eca4d7006e6d00da2df443be16d4a\": not found" Jul 14 21:22:46.034548 kubelet[2677]: E0714 21:22:46.034509 2677 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb630bf170c601c7c9bb0d1f8a23c7cb198eca4d7006e6d00da2df443be16d4a\": not found" containerID="bb630bf170c601c7c9bb0d1f8a23c7cb198eca4d7006e6d00da2df443be16d4a" Jul 14 21:22:46.034596 kubelet[2677]: I0714 21:22:46.034568 2677 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb630bf170c601c7c9bb0d1f8a23c7cb198eca4d7006e6d00da2df443be16d4a"} err="failed to get container status \"bb630bf170c601c7c9bb0d1f8a23c7cb198eca4d7006e6d00da2df443be16d4a\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"bb630bf170c601c7c9bb0d1f8a23c7cb198eca4d7006e6d00da2df443be16d4a\": not found" Jul 14 21:22:46.034621 kubelet[2677]: I0714 21:22:46.034610 2677 scope.go:117] "RemoveContainer" containerID="95b1ffb0cc038c4350c1b06ea9b8395de54ed7c0cad0b4167ee8561e961a0ead" Jul 14 21:22:46.034913 containerd[1514]: time="2025-07-14T21:22:46.034859634Z" level=error msg="ContainerStatus for \"95b1ffb0cc038c4350c1b06ea9b8395de54ed7c0cad0b4167ee8561e961a0ead\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"95b1ffb0cc038c4350c1b06ea9b8395de54ed7c0cad0b4167ee8561e961a0ead\": not found" Jul 14 21:22:46.035119 kubelet[2677]: E0714 21:22:46.035051 2677 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"95b1ffb0cc038c4350c1b06ea9b8395de54ed7c0cad0b4167ee8561e961a0ead\": not found" containerID="95b1ffb0cc038c4350c1b06ea9b8395de54ed7c0cad0b4167ee8561e961a0ead" Jul 14 21:22:46.035119 kubelet[2677]: I0714 21:22:46.035084 2677 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"95b1ffb0cc038c4350c1b06ea9b8395de54ed7c0cad0b4167ee8561e961a0ead"} err="failed to get container status \"95b1ffb0cc038c4350c1b06ea9b8395de54ed7c0cad0b4167ee8561e961a0ead\": rpc error: code = NotFound desc = an error occurred when try to find container \"95b1ffb0cc038c4350c1b06ea9b8395de54ed7c0cad0b4167ee8561e961a0ead\": not found" Jul 14 21:22:46.035119 kubelet[2677]: I0714 21:22:46.035109 2677 scope.go:117] "RemoveContainer" containerID="74c51676ae35757c5999d62237b12f563dd281fae56484c26072610db3e0bf99" Jul 14 21:22:46.036214 containerd[1514]: time="2025-07-14T21:22:46.036179580Z" level=info msg="RemoveContainer for \"74c51676ae35757c5999d62237b12f563dd281fae56484c26072610db3e0bf99\"" Jul 14 21:22:46.040277 containerd[1514]: time="2025-07-14T21:22:46.040239587Z" level=info msg="RemoveContainer for 
\"74c51676ae35757c5999d62237b12f563dd281fae56484c26072610db3e0bf99\" returns successfully" Jul 14 21:22:46.040463 kubelet[2677]: I0714 21:22:46.040435 2677 scope.go:117] "RemoveContainer" containerID="74c51676ae35757c5999d62237b12f563dd281fae56484c26072610db3e0bf99" Jul 14 21:22:46.040744 containerd[1514]: time="2025-07-14T21:22:46.040668492Z" level=error msg="ContainerStatus for \"74c51676ae35757c5999d62237b12f563dd281fae56484c26072610db3e0bf99\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"74c51676ae35757c5999d62237b12f563dd281fae56484c26072610db3e0bf99\": not found" Jul 14 21:22:46.040882 kubelet[2677]: E0714 21:22:46.040851 2677 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"74c51676ae35757c5999d62237b12f563dd281fae56484c26072610db3e0bf99\": not found" containerID="74c51676ae35757c5999d62237b12f563dd281fae56484c26072610db3e0bf99" Jul 14 21:22:46.040882 kubelet[2677]: I0714 21:22:46.040878 2677 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"74c51676ae35757c5999d62237b12f563dd281fae56484c26072610db3e0bf99"} err="failed to get container status \"74c51676ae35757c5999d62237b12f563dd281fae56484c26072610db3e0bf99\": rpc error: code = NotFound desc = an error occurred when try to find container \"74c51676ae35757c5999d62237b12f563dd281fae56484c26072610db3e0bf99\": not found" Jul 14 21:22:46.051212 kubelet[2677]: I0714 21:22:46.051174 2677 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/746e776b-60c4-4008-ad9e-ba1cfe05381d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 14 21:22:46.051212 kubelet[2677]: I0714 21:22:46.051198 2677 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-cilium-run\") on 
node \"localhost\" DevicePath \"\"" Jul 14 21:22:46.051212 kubelet[2677]: I0714 21:22:46.051207 2677 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 14 21:22:46.051212 kubelet[2677]: I0714 21:22:46.051216 2677 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 14 21:22:46.051212 kubelet[2677]: I0714 21:22:46.051226 2677 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 14 21:22:46.051478 kubelet[2677]: I0714 21:22:46.051235 2677 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 14 21:22:46.051478 kubelet[2677]: I0714 21:22:46.051245 2677 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 14 21:22:46.051478 kubelet[2677]: I0714 21:22:46.051253 2677 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8v8f\" (UniqueName: \"kubernetes.io/projected/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-kube-api-access-z8v8f\") on node \"localhost\" DevicePath \"\"" Jul 14 21:22:46.051478 kubelet[2677]: I0714 21:22:46.051262 2677 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 14 21:22:46.051478 kubelet[2677]: I0714 21:22:46.051270 
2677 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7nql\" (UniqueName: \"kubernetes.io/projected/746e776b-60c4-4008-ad9e-ba1cfe05381d-kube-api-access-g7nql\") on node \"localhost\" DevicePath \"\"" Jul 14 21:22:46.051478 kubelet[2677]: I0714 21:22:46.051279 2677 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 14 21:22:46.136023 kubelet[2677]: I0714 21:22:46.135953 2677 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5" path="/var/lib/kubelet/pods/6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5/volumes" Jul 14 21:22:46.137132 kubelet[2677]: I0714 21:22:46.137091 2677 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="746e776b-60c4-4008-ad9e-ba1cfe05381d" path="/var/lib/kubelet/pods/746e776b-60c4-4008-ad9e-ba1cfe05381d/volumes" Jul 14 21:22:46.610068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddd1de317c5a12d153f2d810fd484226d2d7d22692ed33e6447a4b93bcbee2a0-rootfs.mount: Deactivated successfully. Jul 14 21:22:46.610217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a34ab8af695ab43519672badf29d3f4db8d9d11ab5526c243ebe261c5005bfa4-rootfs.mount: Deactivated successfully. Jul 14 21:22:46.610329 systemd[1]: var-lib-kubelet-pods-746e776b\x2d60c4\x2d4008\x2dad9e\x2dba1cfe05381d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg7nql.mount: Deactivated successfully. Jul 14 21:22:46.610440 systemd[1]: var-lib-kubelet-pods-6eb46e6c\x2da908\x2d4286\x2d9cd2\x2db7f9d9c52ed5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz8v8f.mount: Deactivated successfully. Jul 14 21:22:46.610569 systemd[1]: var-lib-kubelet-pods-6eb46e6c\x2da908\x2d4286\x2d9cd2\x2db7f9d9c52ed5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 14 21:22:46.610685 systemd[1]: var-lib-kubelet-pods-6eb46e6c\x2da908\x2d4286\x2d9cd2\x2db7f9d9c52ed5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 14 21:22:47.180395 kubelet[2677]: E0714 21:22:47.180340 2677 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 14 21:22:47.562958 sshd[4354]: Connection closed by 10.0.0.1 port 38502 Jul 14 21:22:47.563447 sshd-session[4351]: pam_unix(sshd:session): session closed for user core Jul 14 21:22:47.576113 systemd[1]: sshd@26-10.0.0.72:22-10.0.0.1:38502.service: Deactivated successfully. Jul 14 21:22:47.578181 systemd[1]: session-27.scope: Deactivated successfully. Jul 14 21:22:47.579976 systemd-logind[1501]: Session 27 logged out. Waiting for processes to exit. Jul 14 21:22:47.585298 systemd[1]: Started sshd@27-10.0.0.72:22-10.0.0.1:38508.service - OpenSSH per-connection server daemon (10.0.0.1:38508). Jul 14 21:22:47.587881 systemd-logind[1501]: Removed session 27. Jul 14 21:22:47.624468 sshd[4514]: Accepted publickey for core from 10.0.0.1 port 38508 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc Jul 14 21:22:47.626080 sshd-session[4514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:22:47.630928 systemd-logind[1501]: New session 28 of user core. Jul 14 21:22:47.637094 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 14 21:22:48.824107 sshd[4518]: Connection closed by 10.0.0.1 port 38508 Jul 14 21:22:48.824629 sshd-session[4514]: pam_unix(sshd:session): session closed for user core Jul 14 21:22:48.839635 systemd[1]: sshd@27-10.0.0.72:22-10.0.0.1:38508.service: Deactivated successfully. Jul 14 21:22:48.844483 systemd[1]: session-28.scope: Deactivated successfully. 
Jul 14 21:22:48.848777 kubelet[2677]: E0714 21:22:48.848715 2677 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="746e776b-60c4-4008-ad9e-ba1cfe05381d" containerName="cilium-operator" Jul 14 21:22:48.848777 kubelet[2677]: E0714 21:22:48.848763 2677 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5" containerName="cilium-agent" Jul 14 21:22:48.848777 kubelet[2677]: E0714 21:22:48.848774 2677 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5" containerName="mount-cgroup" Jul 14 21:22:48.848777 kubelet[2677]: E0714 21:22:48.848782 2677 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5" containerName="apply-sysctl-overwrites" Jul 14 21:22:48.848777 kubelet[2677]: E0714 21:22:48.848790 2677 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5" containerName="mount-bpf-fs" Jul 14 21:22:48.852256 kubelet[2677]: E0714 21:22:48.848801 2677 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5" containerName="clean-cilium-state" Jul 14 21:22:48.852256 kubelet[2677]: I0714 21:22:48.848840 2677 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eb46e6c-a908-4286-9cd2-b7f9d9c52ed5" containerName="cilium-agent" Jul 14 21:22:48.852256 kubelet[2677]: I0714 21:22:48.848850 2677 memory_manager.go:354] "RemoveStaleState removing state" podUID="746e776b-60c4-4008-ad9e-ba1cfe05381d" containerName="cilium-operator" Jul 14 21:22:48.849001 systemd-logind[1501]: Session 28 logged out. Waiting for processes to exit. Jul 14 21:22:48.857454 systemd[1]: Started sshd@28-10.0.0.72:22-10.0.0.1:38516.service - OpenSSH per-connection server daemon (10.0.0.1:38516). Jul 14 21:22:48.860796 systemd-logind[1501]: Removed session 28. 
Jul 14 21:22:48.874273 kubelet[2677]: I0714 21:22:48.874027 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/09f95de5-3e12-431b-a6af-0b833105b5ff-cni-path\") pod \"cilium-pzpgp\" (UID: \"09f95de5-3e12-431b-a6af-0b833105b5ff\") " pod="kube-system/cilium-pzpgp"
Jul 14 21:22:48.874273 kubelet[2677]: I0714 21:22:48.874252 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09f95de5-3e12-431b-a6af-0b833105b5ff-cilium-config-path\") pod \"cilium-pzpgp\" (UID: \"09f95de5-3e12-431b-a6af-0b833105b5ff\") " pod="kube-system/cilium-pzpgp"
Jul 14 21:22:48.874489 kubelet[2677]: I0714 21:22:48.874280 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/09f95de5-3e12-431b-a6af-0b833105b5ff-hostproc\") pod \"cilium-pzpgp\" (UID: \"09f95de5-3e12-431b-a6af-0b833105b5ff\") " pod="kube-system/cilium-pzpgp"
Jul 14 21:22:48.874489 kubelet[2677]: I0714 21:22:48.874403 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/09f95de5-3e12-431b-a6af-0b833105b5ff-cilium-cgroup\") pod \"cilium-pzpgp\" (UID: \"09f95de5-3e12-431b-a6af-0b833105b5ff\") " pod="kube-system/cilium-pzpgp"
Jul 14 21:22:48.874630 kubelet[2677]: I0714 21:22:48.874426 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09f95de5-3e12-431b-a6af-0b833105b5ff-etc-cni-netd\") pod \"cilium-pzpgp\" (UID: \"09f95de5-3e12-431b-a6af-0b833105b5ff\") " pod="kube-system/cilium-pzpgp"
Jul 14 21:22:48.874630 kubelet[2677]: I0714 21:22:48.874515 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/09f95de5-3e12-431b-a6af-0b833105b5ff-cilium-run\") pod \"cilium-pzpgp\" (UID: \"09f95de5-3e12-431b-a6af-0b833105b5ff\") " pod="kube-system/cilium-pzpgp"
Jul 14 21:22:48.874630 kubelet[2677]: I0714 21:22:48.874608 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/09f95de5-3e12-431b-a6af-0b833105b5ff-cilium-ipsec-secrets\") pod \"cilium-pzpgp\" (UID: \"09f95de5-3e12-431b-a6af-0b833105b5ff\") " pod="kube-system/cilium-pzpgp"
Jul 14 21:22:48.874630 kubelet[2677]: I0714 21:22:48.874626 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/09f95de5-3e12-431b-a6af-0b833105b5ff-host-proc-sys-kernel\") pod \"cilium-pzpgp\" (UID: \"09f95de5-3e12-431b-a6af-0b833105b5ff\") " pod="kube-system/cilium-pzpgp"
Jul 14 21:22:48.874755 kubelet[2677]: I0714 21:22:48.874714 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzh5w\" (UniqueName: \"kubernetes.io/projected/09f95de5-3e12-431b-a6af-0b833105b5ff-kube-api-access-nzh5w\") pod \"cilium-pzpgp\" (UID: \"09f95de5-3e12-431b-a6af-0b833105b5ff\") " pod="kube-system/cilium-pzpgp"
Jul 14 21:22:48.874755 kubelet[2677]: I0714 21:22:48.874729 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09f95de5-3e12-431b-a6af-0b833105b5ff-lib-modules\") pod \"cilium-pzpgp\" (UID: \"09f95de5-3e12-431b-a6af-0b833105b5ff\") " pod="kube-system/cilium-pzpgp"
Jul 14 21:22:48.874834 kubelet[2677]: I0714 21:22:48.874809 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/09f95de5-3e12-431b-a6af-0b833105b5ff-hubble-tls\") pod \"cilium-pzpgp\" (UID: \"09f95de5-3e12-431b-a6af-0b833105b5ff\") " pod="kube-system/cilium-pzpgp"
Jul 14 21:22:48.874834 kubelet[2677]: I0714 21:22:48.874830 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/09f95de5-3e12-431b-a6af-0b833105b5ff-host-proc-sys-net\") pod \"cilium-pzpgp\" (UID: \"09f95de5-3e12-431b-a6af-0b833105b5ff\") " pod="kube-system/cilium-pzpgp"
Jul 14 21:22:48.885075 kubelet[2677]: I0714 21:22:48.874848 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/09f95de5-3e12-431b-a6af-0b833105b5ff-bpf-maps\") pod \"cilium-pzpgp\" (UID: \"09f95de5-3e12-431b-a6af-0b833105b5ff\") " pod="kube-system/cilium-pzpgp"
Jul 14 21:22:48.885345 kubelet[2677]: I0714 21:22:48.885324 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09f95de5-3e12-431b-a6af-0b833105b5ff-xtables-lock\") pod \"cilium-pzpgp\" (UID: \"09f95de5-3e12-431b-a6af-0b833105b5ff\") " pod="kube-system/cilium-pzpgp"
Jul 14 21:22:48.885427 kubelet[2677]: I0714 21:22:48.885414 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/09f95de5-3e12-431b-a6af-0b833105b5ff-clustermesh-secrets\") pod \"cilium-pzpgp\" (UID: \"09f95de5-3e12-431b-a6af-0b833105b5ff\") " pod="kube-system/cilium-pzpgp"
Jul 14 21:22:48.887669 systemd[1]: Created slice kubepods-burstable-pod09f95de5_3e12_431b_a6af_0b833105b5ff.slice - libcontainer container kubepods-burstable-pod09f95de5_3e12_431b_a6af_0b833105b5ff.slice.
Jul 14 21:22:48.903464 sshd[4529]: Accepted publickey for core from 10.0.0.1 port 38516 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc
Jul 14 21:22:48.905656 sshd-session[4529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:22:48.912816 systemd-logind[1501]: New session 29 of user core.
Jul 14 21:22:48.922175 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 14 21:22:48.977011 sshd[4534]: Connection closed by 10.0.0.1 port 38516
Jul 14 21:22:48.977429 sshd-session[4529]: pam_unix(sshd:session): session closed for user core
Jul 14 21:22:49.001783 systemd[1]: sshd@28-10.0.0.72:22-10.0.0.1:38516.service: Deactivated successfully.
Jul 14 21:22:49.004220 systemd[1]: session-29.scope: Deactivated successfully.
Jul 14 21:22:49.006710 systemd-logind[1501]: Session 29 logged out. Waiting for processes to exit.
Jul 14 21:22:49.021361 systemd[1]: Started sshd@29-10.0.0.72:22-10.0.0.1:36072.service - OpenSSH per-connection server daemon (10.0.0.1:36072).
Jul 14 21:22:49.022548 systemd-logind[1501]: Removed session 29.
Jul 14 21:22:49.057306 sshd[4543]: Accepted publickey for core from 10.0.0.1 port 36072 ssh2: RSA SHA256:Fq2MtQixdJ22KaQD5NXWxOJMcPvqv20OsohL/33HEdc
Jul 14 21:22:49.059240 sshd-session[4543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:22:49.064943 systemd-logind[1501]: New session 30 of user core.
Jul 14 21:22:49.079168 systemd[1]: Started session-30.scope - Session 30 of User core.
Jul 14 21:22:49.194741 kubelet[2677]: E0714 21:22:49.193824 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:22:49.195716 containerd[1514]: time="2025-07-14T21:22:49.195282793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pzpgp,Uid:09f95de5-3e12-431b-a6af-0b833105b5ff,Namespace:kube-system,Attempt:0,}"
Jul 14 21:22:49.437514 containerd[1514]: time="2025-07-14T21:22:49.437191926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:22:49.437514 containerd[1514]: time="2025-07-14T21:22:49.437275113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:22:49.437514 containerd[1514]: time="2025-07-14T21:22:49.437291385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:22:49.437712 containerd[1514]: time="2025-07-14T21:22:49.437415489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:22:49.467318 systemd[1]: Started cri-containerd-b8db8b7a63bae0a8e09cd38f241cb8dfd96f478bcff5d593dacbe02a5d912506.scope - libcontainer container b8db8b7a63bae0a8e09cd38f241cb8dfd96f478bcff5d593dacbe02a5d912506.
Jul 14 21:22:49.500876 containerd[1514]: time="2025-07-14T21:22:49.500806681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pzpgp,Uid:09f95de5-3e12-431b-a6af-0b833105b5ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8db8b7a63bae0a8e09cd38f241cb8dfd96f478bcff5d593dacbe02a5d912506\""
Jul 14 21:22:49.501806 kubelet[2677]: E0714 21:22:49.501773 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:22:49.504616 containerd[1514]: time="2025-07-14T21:22:49.504564639Z" level=info msg="CreateContainer within sandbox \"b8db8b7a63bae0a8e09cd38f241cb8dfd96f478bcff5d593dacbe02a5d912506\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 14 21:22:49.535930 containerd[1514]: time="2025-07-14T21:22:49.533444924Z" level=info msg="CreateContainer within sandbox \"b8db8b7a63bae0a8e09cd38f241cb8dfd96f478bcff5d593dacbe02a5d912506\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"90e8536e0531eb654fc4e4d9ba30d80c057fed2caf514581e69bc85edc901bc6\""
Jul 14 21:22:49.535930 containerd[1514]: time="2025-07-14T21:22:49.535176378Z" level=info msg="StartContainer for \"90e8536e0531eb654fc4e4d9ba30d80c057fed2caf514581e69bc85edc901bc6\""
Jul 14 21:22:49.581174 systemd[1]: Started cri-containerd-90e8536e0531eb654fc4e4d9ba30d80c057fed2caf514581e69bc85edc901bc6.scope - libcontainer container 90e8536e0531eb654fc4e4d9ba30d80c057fed2caf514581e69bc85edc901bc6.
Jul 14 21:22:49.620685 containerd[1514]: time="2025-07-14T21:22:49.620627055Z" level=info msg="StartContainer for \"90e8536e0531eb654fc4e4d9ba30d80c057fed2caf514581e69bc85edc901bc6\" returns successfully"
Jul 14 21:22:49.627013 systemd[1]: cri-containerd-90e8536e0531eb654fc4e4d9ba30d80c057fed2caf514581e69bc85edc901bc6.scope: Deactivated successfully.
Jul 14 21:22:49.661842 containerd[1514]: time="2025-07-14T21:22:49.661755404Z" level=info msg="shim disconnected" id=90e8536e0531eb654fc4e4d9ba30d80c057fed2caf514581e69bc85edc901bc6 namespace=k8s.io
Jul 14 21:22:49.661842 containerd[1514]: time="2025-07-14T21:22:49.661824435Z" level=warning msg="cleaning up after shim disconnected" id=90e8536e0531eb654fc4e4d9ba30d80c057fed2caf514581e69bc85edc901bc6 namespace=k8s.io
Jul 14 21:22:49.661842 containerd[1514]: time="2025-07-14T21:22:49.661833461Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 21:22:50.000292 kubelet[2677]: E0714 21:22:50.000257 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:22:50.002348 containerd[1514]: time="2025-07-14T21:22:50.002304111Z" level=info msg="CreateContainer within sandbox \"b8db8b7a63bae0a8e09cd38f241cb8dfd96f478bcff5d593dacbe02a5d912506\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 14 21:22:50.017447 containerd[1514]: time="2025-07-14T21:22:50.017385533Z" level=info msg="CreateContainer within sandbox \"b8db8b7a63bae0a8e09cd38f241cb8dfd96f478bcff5d593dacbe02a5d912506\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"46921a903d6c0a47af826b4afb922ab94dce96ae91017eeb199279509036b3bf\""
Jul 14 21:22:50.018136 containerd[1514]: time="2025-07-14T21:22:50.018078026Z" level=info msg="StartContainer for \"46921a903d6c0a47af826b4afb922ab94dce96ae91017eeb199279509036b3bf\""
Jul 14 21:22:50.054056 systemd[1]: Started cri-containerd-46921a903d6c0a47af826b4afb922ab94dce96ae91017eeb199279509036b3bf.scope - libcontainer container 46921a903d6c0a47af826b4afb922ab94dce96ae91017eeb199279509036b3bf.
Jul 14 21:22:50.085998 containerd[1514]: time="2025-07-14T21:22:50.085938434Z" level=info msg="StartContainer for \"46921a903d6c0a47af826b4afb922ab94dce96ae91017eeb199279509036b3bf\" returns successfully"
Jul 14 21:22:50.093988 systemd[1]: cri-containerd-46921a903d6c0a47af826b4afb922ab94dce96ae91017eeb199279509036b3bf.scope: Deactivated successfully.
Jul 14 21:22:50.133022 containerd[1514]: time="2025-07-14T21:22:50.132909896Z" level=info msg="shim disconnected" id=46921a903d6c0a47af826b4afb922ab94dce96ae91017eeb199279509036b3bf namespace=k8s.io
Jul 14 21:22:50.133022 containerd[1514]: time="2025-07-14T21:22:50.133017049Z" level=warning msg="cleaning up after shim disconnected" id=46921a903d6c0a47af826b4afb922ab94dce96ae91017eeb199279509036b3bf namespace=k8s.io
Jul 14 21:22:50.133022 containerd[1514]: time="2025-07-14T21:22:50.133027289Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 21:22:50.994515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46921a903d6c0a47af826b4afb922ab94dce96ae91017eeb199279509036b3bf-rootfs.mount: Deactivated successfully.
Jul 14 21:22:51.005380 kubelet[2677]: E0714 21:22:51.005345 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:22:51.008249 containerd[1514]: time="2025-07-14T21:22:51.008142430Z" level=info msg="CreateContainer within sandbox \"b8db8b7a63bae0a8e09cd38f241cb8dfd96f478bcff5d593dacbe02a5d912506\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 14 21:22:51.038391 containerd[1514]: time="2025-07-14T21:22:51.038323015Z" level=info msg="CreateContainer within sandbox \"b8db8b7a63bae0a8e09cd38f241cb8dfd96f478bcff5d593dacbe02a5d912506\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ba9f43a6ff0708cad852aa3481b1ed2b13e10838ec11647138168deca9db2963\""
Jul 14 21:22:51.039200 containerd[1514]: time="2025-07-14T21:22:51.039161936Z" level=info msg="StartContainer for \"ba9f43a6ff0708cad852aa3481b1ed2b13e10838ec11647138168deca9db2963\""
Jul 14 21:22:51.079133 systemd[1]: Started cri-containerd-ba9f43a6ff0708cad852aa3481b1ed2b13e10838ec11647138168deca9db2963.scope - libcontainer container ba9f43a6ff0708cad852aa3481b1ed2b13e10838ec11647138168deca9db2963.
Jul 14 21:22:51.117690 containerd[1514]: time="2025-07-14T21:22:51.117639876Z" level=info msg="StartContainer for \"ba9f43a6ff0708cad852aa3481b1ed2b13e10838ec11647138168deca9db2963\" returns successfully"
Jul 14 21:22:51.119980 systemd[1]: cri-containerd-ba9f43a6ff0708cad852aa3481b1ed2b13e10838ec11647138168deca9db2963.scope: Deactivated successfully.
Jul 14 21:22:51.132707 kubelet[2677]: E0714 21:22:51.132664 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:22:51.149645 containerd[1514]: time="2025-07-14T21:22:51.149559298Z" level=info msg="shim disconnected" id=ba9f43a6ff0708cad852aa3481b1ed2b13e10838ec11647138168deca9db2963 namespace=k8s.io
Jul 14 21:22:51.149645 containerd[1514]: time="2025-07-14T21:22:51.149629611Z" level=warning msg="cleaning up after shim disconnected" id=ba9f43a6ff0708cad852aa3481b1ed2b13e10838ec11647138168deca9db2963 namespace=k8s.io
Jul 14 21:22:51.149645 containerd[1514]: time="2025-07-14T21:22:51.149641213Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 21:22:51.994383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba9f43a6ff0708cad852aa3481b1ed2b13e10838ec11647138168deca9db2963-rootfs.mount: Deactivated successfully.
Jul 14 21:22:52.008538 kubelet[2677]: E0714 21:22:52.008512 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:22:52.010082 containerd[1514]: time="2025-07-14T21:22:52.010040250Z" level=info msg="CreateContainer within sandbox \"b8db8b7a63bae0a8e09cd38f241cb8dfd96f478bcff5d593dacbe02a5d912506\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 14 21:22:52.181170 kubelet[2677]: E0714 21:22:52.181115 2677 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 14 21:22:52.255027 containerd[1514]: time="2025-07-14T21:22:52.254840907Z" level=info msg="CreateContainer within sandbox \"b8db8b7a63bae0a8e09cd38f241cb8dfd96f478bcff5d593dacbe02a5d912506\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0ba94c6e9ab9c74bdd1681dc5f727e94eb447c1702586c28dc3a1a2046fcbb5b\""
Jul 14 21:22:52.255823 containerd[1514]: time="2025-07-14T21:22:52.255709563Z" level=info msg="StartContainer for \"0ba94c6e9ab9c74bdd1681dc5f727e94eb447c1702586c28dc3a1a2046fcbb5b\""
Jul 14 21:22:52.298233 systemd[1]: Started cri-containerd-0ba94c6e9ab9c74bdd1681dc5f727e94eb447c1702586c28dc3a1a2046fcbb5b.scope - libcontainer container 0ba94c6e9ab9c74bdd1681dc5f727e94eb447c1702586c28dc3a1a2046fcbb5b.
Jul 14 21:22:52.326872 systemd[1]: cri-containerd-0ba94c6e9ab9c74bdd1681dc5f727e94eb447c1702586c28dc3a1a2046fcbb5b.scope: Deactivated successfully.
Jul 14 21:22:52.330736 containerd[1514]: time="2025-07-14T21:22:52.330693867Z" level=info msg="StartContainer for \"0ba94c6e9ab9c74bdd1681dc5f727e94eb447c1702586c28dc3a1a2046fcbb5b\" returns successfully"
Jul 14 21:22:52.355783 containerd[1514]: time="2025-07-14T21:22:52.355671026Z" level=info msg="shim disconnected" id=0ba94c6e9ab9c74bdd1681dc5f727e94eb447c1702586c28dc3a1a2046fcbb5b namespace=k8s.io
Jul 14 21:22:52.355783 containerd[1514]: time="2025-07-14T21:22:52.355736479Z" level=warning msg="cleaning up after shim disconnected" id=0ba94c6e9ab9c74bdd1681dc5f727e94eb447c1702586c28dc3a1a2046fcbb5b namespace=k8s.io
Jul 14 21:22:52.355783 containerd[1514]: time="2025-07-14T21:22:52.355747330Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 21:22:52.995345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ba94c6e9ab9c74bdd1681dc5f727e94eb447c1702586c28dc3a1a2046fcbb5b-rootfs.mount: Deactivated successfully.
Jul 14 21:22:53.013918 kubelet[2677]: E0714 21:22:53.013850 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:22:53.016187 containerd[1514]: time="2025-07-14T21:22:53.016104652Z" level=info msg="CreateContainer within sandbox \"b8db8b7a63bae0a8e09cd38f241cb8dfd96f478bcff5d593dacbe02a5d912506\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 14 21:22:53.039034 containerd[1514]: time="2025-07-14T21:22:53.038953716Z" level=info msg="CreateContainer within sandbox \"b8db8b7a63bae0a8e09cd38f241cb8dfd96f478bcff5d593dacbe02a5d912506\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9bf1dd52bc6eb5f7030a0d52ef07006afb701b61aeea231cdddcce2c3701fafb\""
Jul 14 21:22:53.039633 containerd[1514]: time="2025-07-14T21:22:53.039592788Z" level=info msg="StartContainer for \"9bf1dd52bc6eb5f7030a0d52ef07006afb701b61aeea231cdddcce2c3701fafb\""
Jul 14 21:22:53.075076 systemd[1]: Started cri-containerd-9bf1dd52bc6eb5f7030a0d52ef07006afb701b61aeea231cdddcce2c3701fafb.scope - libcontainer container 9bf1dd52bc6eb5f7030a0d52ef07006afb701b61aeea231cdddcce2c3701fafb.
Jul 14 21:22:53.111501 containerd[1514]: time="2025-07-14T21:22:53.111413235Z" level=info msg="StartContainer for \"9bf1dd52bc6eb5f7030a0d52ef07006afb701b61aeea231cdddcce2c3701fafb\" returns successfully"
Jul 14 21:22:53.576927 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 14 21:22:54.018069 kubelet[2677]: E0714 21:22:54.017941 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:22:54.325050 kubelet[2677]: I0714 21:22:54.324858 2677 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-14T21:22:54Z","lastTransitionTime":"2025-07-14T21:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 14 21:22:55.195707 kubelet[2677]: E0714 21:22:55.195646 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:22:56.976750 systemd-networkd[1443]: lxc_health: Link UP
Jul 14 21:22:56.991152 systemd-networkd[1443]: lxc_health: Gained carrier
Jul 14 21:22:57.195769 kubelet[2677]: E0714 21:22:57.195529 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:22:57.271924 kubelet[2677]: I0714 21:22:57.270320 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pzpgp" podStartSLOduration=9.269920035 podStartE2EDuration="9.269920035s" podCreationTimestamp="2025-07-14 21:22:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:22:54.261266943 +0000 UTC m=+92.220323484" watchObservedRunningTime="2025-07-14 21:22:57.269920035 +0000 UTC m=+95.228976556"
Jul 14 21:22:57.869172 kubelet[2677]: E0714 21:22:57.869085 2677 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:33830->127.0.0.1:42831: write tcp 127.0.0.1:33830->127.0.0.1:42831: write: broken pipe
Jul 14 21:22:58.026725 kubelet[2677]: E0714 21:22:58.026622 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:22:58.132778 kubelet[2677]: E0714 21:22:58.132576 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:22:58.230011 systemd-networkd[1443]: lxc_health: Gained IPv6LL
Jul 14 21:22:59.029313 kubelet[2677]: E0714 21:22:59.029250 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:22:59.132609 kubelet[2677]: E0714 21:22:59.132522 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:23:02.091165 sshd[4546]: Connection closed by 10.0.0.1 port 36072
Jul 14 21:23:02.091837 sshd-session[4543]: pam_unix(sshd:session): session closed for user core
Jul 14 21:23:02.096643 systemd[1]: sshd@29-10.0.0.72:22-10.0.0.1:36072.service: Deactivated successfully.
Jul 14 21:23:02.099027 systemd[1]: session-30.scope: Deactivated successfully.
Jul 14 21:23:02.099719 systemd-logind[1501]: Session 30 logged out. Waiting for processes to exit.
Jul 14 21:23:02.100750 systemd-logind[1501]: Removed session 30.