Sep 13 10:14:07.822485 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sat Sep 13 08:30:13 -00 2025
Sep 13 10:14:07.822513 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=29913b080383fb09f846b4e8f22e4ebe48c8b17d0cc2b8191530bb5bda42eda0
Sep 13 10:14:07.822542 kernel: BIOS-provided physical RAM map:
Sep 13 10:14:07.822549 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 10:14:07.822556 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 13 10:14:07.822562 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 13 10:14:07.822573 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 13 10:14:07.822580 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 13 10:14:07.822590 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 13 10:14:07.822599 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 13 10:14:07.822606 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Sep 13 10:14:07.822612 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 13 10:14:07.822619 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 13 10:14:07.822626 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 13 10:14:07.822634 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 13 10:14:07.822643 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 13 10:14:07.822657 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 13 10:14:07.822665 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 13 10:14:07.822680 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 13 10:14:07.822691 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 13 10:14:07.822704 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 13 10:14:07.822711 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 13 10:14:07.822718 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 13 10:14:07.822733 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 10:14:07.822748 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 13 10:14:07.822766 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 13 10:14:07.822774 kernel: NX (Execute Disable) protection: active
Sep 13 10:14:07.822787 kernel: APIC: Static calls initialized
Sep 13 10:14:07.822794 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Sep 13 10:14:07.822801 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Sep 13 10:14:07.822808 kernel: extended physical RAM map:
Sep 13 10:14:07.822820 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 10:14:07.822830 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 13 10:14:07.822837 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 13 10:14:07.822844 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 13 10:14:07.822851 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 13 10:14:07.822861 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 13 10:14:07.822868 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 13 10:14:07.822875 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Sep 13 10:14:07.822882 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Sep 13 10:14:07.822893 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Sep 13 10:14:07.822900 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Sep 13 10:14:07.822909 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Sep 13 10:14:07.822916 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 13 10:14:07.822923 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 13 10:14:07.822931 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 13 10:14:07.822938 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 13 10:14:07.822945 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 13 10:14:07.822953 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 13 10:14:07.822965 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 13 10:14:07.822973 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 13 10:14:07.822988 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 13 10:14:07.823012 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 13 10:14:07.823021 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 13 10:14:07.823028 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 13 10:14:07.823035 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 10:14:07.823053 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 13 10:14:07.823066 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 13 10:14:07.823082 kernel: efi: EFI v2.7 by EDK II
Sep 13 10:14:07.823095 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Sep 13 10:14:07.823105 kernel: random: crng init done
Sep 13 10:14:07.823115 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Sep 13 10:14:07.823122 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Sep 13 10:14:07.823135 kernel: secureboot: Secure boot disabled
Sep 13 10:14:07.823143 kernel: SMBIOS 2.8 present.
Sep 13 10:14:07.823150 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 13 10:14:07.823157 kernel: DMI: Memory slots populated: 1/1
Sep 13 10:14:07.823164 kernel: Hypervisor detected: KVM
Sep 13 10:14:07.823172 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 10:14:07.823179 kernel: kvm-clock: using sched offset of 5351401314 cycles
Sep 13 10:14:07.823195 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 10:14:07.823211 kernel: tsc: Detected 2794.748 MHz processor
Sep 13 10:14:07.823227 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 10:14:07.823322 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 10:14:07.823346 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Sep 13 10:14:07.823354 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 13 10:14:07.823368 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 10:14:07.823375 kernel: Using GB pages for direct mapping
Sep 13 10:14:07.823386 kernel: ACPI: Early table checksum verification disabled
Sep 13 10:14:07.823403 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 13 10:14:07.823416 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 13 10:14:07.823424 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:14:07.823432 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:14:07.823442 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 13 10:14:07.823453 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:14:07.823467 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:14:07.823475 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:14:07.823493 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:14:07.823506 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 13 10:14:07.823527 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 13 10:14:07.823535 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 13 10:14:07.823547 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 13 10:14:07.823563 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 13 10:14:07.823574 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 13 10:14:07.823594 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 13 10:14:07.823602 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 13 10:14:07.823610 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 13 10:14:07.823617 kernel: No NUMA configuration found
Sep 13 10:14:07.823625 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Sep 13 10:14:07.823635 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Sep 13 10:14:07.823642 kernel: Zone ranges:
Sep 13 10:14:07.823653 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 10:14:07.823660 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Sep 13 10:14:07.823668 kernel: Normal empty
Sep 13 10:14:07.823675 kernel: Device empty
Sep 13 10:14:07.823682 kernel: Movable zone start for each node
Sep 13 10:14:07.823690 kernel: Early memory node ranges
Sep 13 10:14:07.823698 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 13 10:14:07.823705 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 13 10:14:07.823715 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 13 10:14:07.823725 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Sep 13 10:14:07.823732 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Sep 13 10:14:07.823740 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Sep 13 10:14:07.823747 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Sep 13 10:14:07.823755 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Sep 13 10:14:07.823762 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Sep 13 10:14:07.823770 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 10:14:07.823780 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 13 10:14:07.823796 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 13 10:14:07.823804 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 10:14:07.823811 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Sep 13 10:14:07.823819 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Sep 13 10:14:07.823827 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 13 10:14:07.823837 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 13 10:14:07.823845 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Sep 13 10:14:07.823853 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 10:14:07.823861 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 10:14:07.823869 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 10:14:07.823878 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 10:14:07.823886 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 10:14:07.823894 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 10:14:07.823902 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 10:14:07.823910 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 10:14:07.823918 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 10:14:07.823925 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 10:14:07.823933 kernel: TSC deadline timer available
Sep 13 10:14:07.823943 kernel: CPU topo: Max. logical packages: 1
Sep 13 10:14:07.823951 kernel: CPU topo: Max. logical dies: 1
Sep 13 10:14:07.823959 kernel: CPU topo: Max. dies per package: 1
Sep 13 10:14:07.823966 kernel: CPU topo: Max. threads per core: 1
Sep 13 10:14:07.823974 kernel: CPU topo: Num. cores per package: 4
Sep 13 10:14:07.823982 kernel: CPU topo: Num. threads per package: 4
Sep 13 10:14:07.823990 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 13 10:14:07.823998 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 10:14:07.824005 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 13 10:14:07.824029 kernel: kvm-guest: setup PV sched yield
Sep 13 10:14:07.824044 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 13 10:14:07.824054 kernel: Booting paravirtualized kernel on KVM
Sep 13 10:14:07.824062 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 10:14:07.824070 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 13 10:14:07.824078 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 13 10:14:07.824094 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 13 10:14:07.824102 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 13 10:14:07.824110 kernel: kvm-guest: PV spinlocks enabled
Sep 13 10:14:07.824118 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 10:14:07.824130 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=29913b080383fb09f846b4e8f22e4ebe48c8b17d0cc2b8191530bb5bda42eda0
Sep 13 10:14:07.824141 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 10:14:07.824152 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 10:14:07.824166 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 10:14:07.824174 kernel: Fallback order for Node 0: 0
Sep 13 10:14:07.824190 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Sep 13 10:14:07.824199 kernel: Policy zone: DMA32
Sep 13 10:14:07.824207 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 10:14:07.824218 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 13 10:14:07.824225 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 13 10:14:07.824233 kernel: ftrace: allocated 157 pages with 5 groups
Sep 13 10:14:07.824255 kernel: Dynamic Preempt: voluntary
Sep 13 10:14:07.824263 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 10:14:07.824271 kernel: rcu: RCU event tracing is enabled.
Sep 13 10:14:07.824284 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 13 10:14:07.824301 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 10:14:07.824310 kernel: Rude variant of Tasks RCU enabled.
Sep 13 10:14:07.824322 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 10:14:07.824340 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 10:14:07.824351 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 13 10:14:07.824359 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 10:14:07.824367 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 10:14:07.824375 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 10:14:07.824383 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 13 10:14:07.824391 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 10:14:07.824399 kernel: Console: colour dummy device 80x25
Sep 13 10:14:07.824410 kernel: printk: legacy console [ttyS0] enabled
Sep 13 10:14:07.824418 kernel: ACPI: Core revision 20240827
Sep 13 10:14:07.824426 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 10:14:07.824434 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 10:14:07.824442 kernel: x2apic enabled
Sep 13 10:14:07.824450 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 10:14:07.824458 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 13 10:14:07.824466 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 13 10:14:07.824474 kernel: kvm-guest: setup PV IPIs
Sep 13 10:14:07.824484 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 10:14:07.824492 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 13 10:14:07.824500 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 13 10:14:07.824508 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 10:14:07.824515 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 13 10:14:07.824531 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 13 10:14:07.824539 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 10:14:07.824547 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 10:14:07.824555 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 10:14:07.824565 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 13 10:14:07.824573 kernel: active return thunk: retbleed_return_thunk
Sep 13 10:14:07.824581 kernel: RETBleed: Mitigation: untrained return thunk
Sep 13 10:14:07.824592 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 10:14:07.824605 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 10:14:07.824614 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 13 10:14:07.824630 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 13 10:14:07.824646 kernel: active return thunk: srso_return_thunk
Sep 13 10:14:07.824669 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 13 10:14:07.824685 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 10:14:07.824693 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 10:14:07.824701 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 10:14:07.824709 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 10:14:07.824716 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 13 10:14:07.824724 kernel: Freeing SMP alternatives memory: 32K
Sep 13 10:14:07.824737 kernel: pid_max: default: 32768 minimum: 301
Sep 13 10:14:07.824744 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 13 10:14:07.824755 kernel: landlock: Up and running.
Sep 13 10:14:07.824762 kernel: SELinux: Initializing.
Sep 13 10:14:07.824770 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 10:14:07.824778 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 10:14:07.824786 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 13 10:14:07.824794 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 13 10:14:07.824802 kernel: ... version:                0
Sep 13 10:14:07.824809 kernel: ... bit width:              48
Sep 13 10:14:07.824822 kernel: ... generic registers:      6
Sep 13 10:14:07.824850 kernel: ... value mask:             0000ffffffffffff
Sep 13 10:14:07.824869 kernel: ... max period:             00007fffffffffff
Sep 13 10:14:07.824889 kernel: ... fixed-purpose events:   0
Sep 13 10:14:07.824902 kernel: ... event mask:             000000000000003f
Sep 13 10:14:07.824910 kernel: signal: max sigframe size: 1776
Sep 13 10:14:07.824918 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 10:14:07.824930 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 10:14:07.824951 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 13 10:14:07.824960 kernel: smp: Bringing up secondary CPUs ...
Sep 13 10:14:07.824980 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 10:14:07.824988 kernel: .... node #0, CPUs: #1 #2 #3
Sep 13 10:14:07.824998 kernel: smp: Brought up 1 node, 4 CPUs
Sep 13 10:14:07.825006 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 13 10:14:07.825014 kernel: Memory: 2422672K/2565800K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54088K init, 2876K bss, 137196K reserved, 0K cma-reserved)
Sep 13 10:14:07.825022 kernel: devtmpfs: initialized
Sep 13 10:14:07.825030 kernel: x86/mm: Memory block size: 128MB
Sep 13 10:14:07.825038 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 13 10:14:07.825046 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 13 10:14:07.825057 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Sep 13 10:14:07.825065 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 13 10:14:07.825072 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Sep 13 10:14:07.825081 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 13 10:14:07.825089 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 10:14:07.825097 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 13 10:14:07.825107 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 10:14:07.825115 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 10:14:07.825126 kernel: audit: initializing netlink subsys (disabled)
Sep 13 10:14:07.825151 kernel: audit: type=2000 audit(1757758444.993:1): state=initialized audit_enabled=0 res=1
Sep 13 10:14:07.825165 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 10:14:07.825178 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 10:14:07.825195 kernel: cpuidle: using governor menu
Sep 13 10:14:07.825217 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 10:14:07.825226 kernel: dca service started, version 1.12.1
Sep 13 10:14:07.825235 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 13 10:14:07.825255 kernel: PCI: Using configuration type 1 for base access
Sep 13 10:14:07.825263 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 10:14:07.825275 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 10:14:07.825283 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 10:14:07.825291 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 10:14:07.825299 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 10:14:07.825315 kernel: ACPI: Added _OSI(Module Device)
Sep 13 10:14:07.825324 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 10:14:07.825342 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 10:14:07.825359 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 10:14:07.825375 kernel: ACPI: Interpreter enabled
Sep 13 10:14:07.825404 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 13 10:14:07.825420 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 10:14:07.825437 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 10:14:07.825448 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 10:14:07.825456 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 13 10:14:07.825464 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 10:14:07.825834 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 10:14:07.825984 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 13 10:14:07.826174 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 13 10:14:07.826186 kernel: PCI host bridge to bus 0000:00
Sep 13 10:14:07.826441 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 10:14:07.826608 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 10:14:07.826723 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 10:14:07.826836 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 13 10:14:07.826965 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 13 10:14:07.827084 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 13 10:14:07.827207 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 10:14:07.827452 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 13 10:14:07.827663 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 13 10:14:07.827843 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 13 10:14:07.827967 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 13 10:14:07.828093 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 13 10:14:07.828213 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 10:14:07.828377 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 13 10:14:07.828512 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 13 10:14:07.828692 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 13 10:14:07.828846 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 13 10:14:07.828992 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 13 10:14:07.829233 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 13 10:14:07.829435 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 13 10:14:07.829592 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 13 10:14:07.829772 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 13 10:14:07.829913 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 13 10:14:07.830107 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 13 10:14:07.830260 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 13 10:14:07.830411 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 13 10:14:07.830569 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 13 10:14:07.830704 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 13 10:14:07.830994 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 13 10:14:07.831119 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 13 10:14:07.831294 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 13 10:14:07.831445 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 13 10:14:07.831578 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 13 10:14:07.831589 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 10:14:07.831597 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 10:14:07.831605 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 10:14:07.831613 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 10:14:07.831621 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 13 10:14:07.831629 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 13 10:14:07.831641 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 13 10:14:07.831649 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 13 10:14:07.831657 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 13 10:14:07.831665 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 13 10:14:07.831673 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 13 10:14:07.831680 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 13 10:14:07.831688 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 13 10:14:07.831696 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 13 10:14:07.831704 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 13 10:14:07.831715 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 13 10:14:07.831723 kernel: iommu: Default domain type: Translated
Sep 13 10:14:07.831737 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 10:14:07.831745 kernel: efivars: Registered efivars operations
Sep 13 10:14:07.831753 kernel: PCI: Using ACPI for IRQ routing
Sep 13 10:14:07.831761 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 10:14:07.831769 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 13 10:14:07.831777 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Sep 13 10:14:07.831785 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Sep 13 10:14:07.831795 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Sep 13 10:14:07.831803 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Sep 13 10:14:07.831811 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Sep 13 10:14:07.831819 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Sep 13 10:14:07.831827 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Sep 13 10:14:07.831969 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 13 10:14:07.832109 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 13 10:14:07.832286 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 10:14:07.832307 kernel: vgaarb: loaded
Sep 13 10:14:07.832315 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 10:14:07.832323 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 10:14:07.832331 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 10:14:07.832339 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 10:14:07.832347 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 10:14:07.832355 kernel: pnp: PnP ACPI init
Sep 13 10:14:07.832586 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 13 10:14:07.832610 kernel: pnp: PnP ACPI: found 6 devices
Sep 13 10:14:07.832619 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 10:14:07.832627 kernel: NET: Registered PF_INET protocol family
Sep 13 10:14:07.832635 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 10:14:07.832644 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 10:14:07.832652 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 10:14:07.832660 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 10:14:07.832668 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 13 10:14:07.832677 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 10:14:07.832687 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 10:14:07.832695 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 10:14:07.832703 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 10:14:07.832711 kernel: NET: Registered PF_XDP protocol family
Sep 13 10:14:07.832872 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 13 10:14:07.833046 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 13 10:14:07.833277 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 10:14:07.833446 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 10:14:07.833574 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 10:14:07.833686 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 13 10:14:07.833848 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 13 10:14:07.834033 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 13 10:14:07.834057 kernel: PCI: CLS 0 bytes, default 64
Sep 13 10:14:07.834080 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 13 10:14:07.834093 kernel: Initialise system trusted keyrings
Sep 13 10:14:07.834116 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 10:14:07.834139 kernel: Key type asymmetric registered
Sep 13 10:14:07.834157 kernel: Asymmetric key parser 'x509' registered
Sep 13 10:14:07.834168 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 13 10:14:07.834184 kernel: io scheduler mq-deadline registered
Sep 13 10:14:07.834202 kernel: io scheduler kyber registered
Sep 13 10:14:07.834213 kernel: io scheduler bfq registered
Sep 13 10:14:07.834235 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 10:14:07.834281 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 13 10:14:07.834293 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 13 10:14:07.834305 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 13 10:14:07.834317 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 10:14:07.834329 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 10:14:07.834340 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 10:14:07.834352 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 10:14:07.834364 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 10:14:07.834381 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 10:14:07.834583 kernel: rtc_cmos 00:04: RTC can
wake from S4 Sep 13 10:14:07.834746 kernel: rtc_cmos 00:04: registered as rtc0 Sep 13 10:14:07.834906 kernel: rtc_cmos 00:04: setting system clock to 2025-09-13T10:14:07 UTC (1757758447) Sep 13 10:14:07.835054 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 13 10:14:07.835066 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 13 10:14:07.835075 kernel: efifb: probing for efifb Sep 13 10:14:07.835083 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 13 10:14:07.835096 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 13 10:14:07.835105 kernel: efifb: scrolling: redraw Sep 13 10:14:07.835113 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 13 10:14:07.835122 kernel: Console: switching to colour frame buffer device 160x50 Sep 13 10:14:07.835130 kernel: fb0: EFI VGA frame buffer device Sep 13 10:14:07.835139 kernel: pstore: Using crash dump compression: deflate Sep 13 10:14:07.835147 kernel: pstore: Registered efi_pstore as persistent store backend Sep 13 10:14:07.835156 kernel: NET: Registered PF_INET6 protocol family Sep 13 10:14:07.835164 kernel: Segment Routing with IPv6 Sep 13 10:14:07.835174 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 10:14:07.835183 kernel: NET: Registered PF_PACKET protocol family Sep 13 10:14:07.835191 kernel: Key type dns_resolver registered Sep 13 10:14:07.835199 kernel: IPI shorthand broadcast: enabled Sep 13 10:14:07.835207 kernel: sched_clock: Marking stable (3870003471, 172356872)->(4098539054, -56178711) Sep 13 10:14:07.835216 kernel: registered taskstats version 1 Sep 13 10:14:07.835228 kernel: Loading compiled-in X.509 certificates Sep 13 10:14:07.835263 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: cbb54677ad1c578839cdade5ab8500bbdb72e350' Sep 13 10:14:07.835271 kernel: Demotion targets for Node 0: null Sep 13 10:14:07.835284 kernel: Key type .fscrypt registered Sep 13 10:14:07.835292 kernel: Key type 
fscrypt-provisioning registered Sep 13 10:14:07.835300 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 13 10:14:07.835309 kernel: ima: Allocated hash algorithm: sha1 Sep 13 10:14:07.835317 kernel: ima: No architecture policies found Sep 13 10:14:07.835333 kernel: clk: Disabling unused clocks Sep 13 10:14:07.835342 kernel: Warning: unable to open an initial console. Sep 13 10:14:07.835350 kernel: Freeing unused kernel image (initmem) memory: 54088K Sep 13 10:14:07.835358 kernel: Write protecting the kernel read-only data: 24576k Sep 13 10:14:07.835370 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K Sep 13 10:14:07.835378 kernel: Run /init as init process Sep 13 10:14:07.835387 kernel: with arguments: Sep 13 10:14:07.835395 kernel: /init Sep 13 10:14:07.835403 kernel: with environment: Sep 13 10:14:07.835411 kernel: HOME=/ Sep 13 10:14:07.835419 kernel: TERM=linux Sep 13 10:14:07.835428 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 10:14:07.835441 systemd[1]: Successfully made /usr/ read-only. Sep 13 10:14:07.835456 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 13 10:14:07.835465 systemd[1]: Detected virtualization kvm. Sep 13 10:14:07.835482 systemd[1]: Detected architecture x86-64. Sep 13 10:14:07.835499 systemd[1]: Running in initrd. Sep 13 10:14:07.835528 systemd[1]: No hostname configured, using default hostname. Sep 13 10:14:07.835540 systemd[1]: Hostname set to . Sep 13 10:14:07.835560 systemd[1]: Initializing machine ID from VM UUID. Sep 13 10:14:07.835587 systemd[1]: Queued start job for default target initrd.target. 
Sep 13 10:14:07.835609 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 10:14:07.835634 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 10:14:07.835669 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 13 10:14:07.835694 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 10:14:07.835726 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 13 10:14:07.835747 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 13 10:14:07.835767 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 13 10:14:07.835780 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 13 10:14:07.835793 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 10:14:07.835805 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 10:14:07.835818 systemd[1]: Reached target paths.target - Path Units. Sep 13 10:14:07.835830 systemd[1]: Reached target slices.target - Slice Units. Sep 13 10:14:07.835842 systemd[1]: Reached target swap.target - Swaps. Sep 13 10:14:07.835854 systemd[1]: Reached target timers.target - Timer Units. Sep 13 10:14:07.835870 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 10:14:07.835880 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 10:14:07.835889 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 13 10:14:07.835898 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Sep 13 10:14:07.835907 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 10:14:07.835915 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 10:14:07.835924 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 10:14:07.835933 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 10:14:07.835942 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 13 10:14:07.835954 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 10:14:07.835962 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 13 10:14:07.835972 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 13 10:14:07.835980 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 10:14:07.835989 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 10:14:07.836011 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 10:14:07.836020 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 10:14:07.836029 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 13 10:14:07.836042 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 10:14:07.836051 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 10:14:07.836096 systemd-journald[220]: Collecting audit messages is disabled. Sep 13 10:14:07.836121 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 10:14:07.836130 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 10:14:07.836139 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Sep 13 10:14:07.836149 systemd-journald[220]: Journal started Sep 13 10:14:07.836172 systemd-journald[220]: Runtime Journal (/run/log/journal/e740e0cbac3a4543852ed1779dd918e8) is 6M, max 48.4M, 42.4M free. Sep 13 10:14:07.827870 systemd-modules-load[221]: Inserted module 'overlay' Sep 13 10:14:07.837488 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 10:14:07.847543 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 10:14:07.854457 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 10:14:07.857739 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 10:14:07.864288 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 10:14:07.866085 systemd-modules-load[221]: Inserted module 'br_netfilter' Sep 13 10:14:07.867076 kernel: Bridge firewalling registered Sep 13 10:14:07.872513 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 10:14:07.872905 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 10:14:07.874944 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 10:14:07.882640 systemd-tmpfiles[241]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 13 10:14:07.892569 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 10:14:07.893422 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 10:14:07.896474 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 10:14:07.900771 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 10:14:07.903525 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Sep 13 10:14:07.935001 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=29913b080383fb09f846b4e8f22e4ebe48c8b17d0cc2b8191530bb5bda42eda0 Sep 13 10:14:07.957800 systemd-resolved[260]: Positive Trust Anchors: Sep 13 10:14:07.957823 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 10:14:07.957853 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 10:14:07.960622 systemd-resolved[260]: Defaulting to hostname 'linux'. Sep 13 10:14:07.962083 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 10:14:07.967380 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 10:14:08.064275 kernel: SCSI subsystem initialized Sep 13 10:14:08.073277 kernel: Loading iSCSI transport class v2.0-870. Sep 13 10:14:08.084300 kernel: iscsi: registered transport (tcp) Sep 13 10:14:08.106282 kernel: iscsi: registered transport (qla4xxx) Sep 13 10:14:08.106347 kernel: QLogic iSCSI HBA Driver Sep 13 10:14:08.130391 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 13 10:14:08.149255 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 10:14:08.153037 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 10:14:08.218969 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 13 10:14:08.221738 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 13 10:14:08.293269 kernel: raid6: avx2x4 gen() 30448 MB/s Sep 13 10:14:08.310262 kernel: raid6: avx2x2 gen() 31187 MB/s Sep 13 10:14:08.327294 kernel: raid6: avx2x1 gen() 25918 MB/s Sep 13 10:14:08.327310 kernel: raid6: using algorithm avx2x2 gen() 31187 MB/s Sep 13 10:14:08.345295 kernel: raid6: .... xor() 19946 MB/s, rmw enabled Sep 13 10:14:08.345324 kernel: raid6: using avx2x2 recovery algorithm Sep 13 10:14:08.365267 kernel: xor: automatically using best checksumming function avx Sep 13 10:14:08.529274 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 13 10:14:08.538181 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 13 10:14:08.541969 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 10:14:08.573870 systemd-udevd[473]: Using default interface naming scheme 'v255'. Sep 13 10:14:08.579921 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 10:14:08.584664 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 13 10:14:08.617901 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation Sep 13 10:14:08.648430 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 10:14:08.650917 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 10:14:08.743160 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 10:14:08.748347 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 13 10:14:08.789264 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 10:14:08.797281 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 13 10:14:08.803291 kernel: AES CTR mode by8 optimization enabled Sep 13 10:14:08.815265 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 13 10:14:08.817764 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 10:14:08.819229 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 10:14:08.822047 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 10:14:08.824695 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 13 10:14:08.828466 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 10:14:08.832611 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 10:14:08.832635 kernel: GPT:9289727 != 19775487 Sep 13 10:14:08.832645 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 10:14:08.832656 kernel: GPT:9289727 != 19775487 Sep 13 10:14:08.834054 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 10:14:08.834072 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 10:14:08.834434 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 13 10:14:08.843278 kernel: libata version 3.00 loaded. Sep 13 10:14:08.844209 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 10:14:08.844364 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 10:14:08.853416 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 10:14:08.866280 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Sep 13 10:14:08.890059 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 13 10:14:08.893573 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 13 10:14:08.898470 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 13 10:14:08.915173 disk-uuid[615]: Primary Header is updated. Sep 13 10:14:08.915173 disk-uuid[615]: Secondary Entries is updated. Sep 13 10:14:08.915173 disk-uuid[615]: Secondary Header is updated. Sep 13 10:14:08.942277 kernel: ahci 0000:00:1f.2: version 3.0 Sep 13 10:14:08.944617 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 13 10:14:08.944676 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 13 10:14:08.946547 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 13 10:14:08.946753 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 13 10:14:08.950434 kernel: scsi host0: ahci Sep 13 10:14:08.951261 kernel: scsi host1: ahci Sep 13 10:14:08.952266 kernel: scsi host2: ahci Sep 13 10:14:08.952823 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Sep 13 10:14:08.961202 kernel: scsi host3: ahci Sep 13 10:14:08.961392 kernel: scsi host4: ahci Sep 13 10:14:08.961550 kernel: scsi host5: ahci Sep 13 10:14:08.961695 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 13 10:14:08.961712 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 13 10:14:08.961723 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 13 10:14:08.961733 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 13 10:14:08.961744 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 13 10:14:08.961754 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 13 10:14:08.967017 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 13 10:14:08.967350 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 13 10:14:09.268280 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 13 10:14:09.268334 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 13 10:14:09.269269 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 13 10:14:09.269284 kernel: ata3.00: LPM support broken, forcing max_power Sep 13 10:14:09.270901 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 13 10:14:09.270929 kernel: ata3.00: applying bridge limits Sep 13 10:14:09.271624 kernel: ata3.00: LPM support broken, forcing max_power Sep 13 10:14:09.271643 kernel: ata3.00: configured for UDMA/100 Sep 13 10:14:09.277282 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 13 10:14:09.277365 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 13 10:14:09.278281 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 13 10:14:09.280273 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 13 10:14:09.338288 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 13 10:14:09.338876 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 10:14:09.364276 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 13 10:14:09.795775 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 13 10:14:09.797679 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 10:14:09.799105 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 10:14:09.800298 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 10:14:09.803686 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 13 10:14:09.839229 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 13 10:14:09.969266 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 10:14:09.969660 disk-uuid[616]: The operation has completed successfully. Sep 13 10:14:10.000449 systemd[1]: disk-uuid.service: Deactivated successfully. 
Sep 13 10:14:10.000581 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 13 10:14:10.037262 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 13 10:14:10.062164 sh[666]: Success Sep 13 10:14:10.083291 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 10:14:10.083349 kernel: device-mapper: uevent: version 1.0.3 Sep 13 10:14:10.084622 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 13 10:14:10.094270 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 13 10:14:10.122764 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 13 10:14:10.125994 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 13 10:14:10.143465 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 13 10:14:10.151083 kernel: BTRFS: device fsid fbf3e737-db97-4ff7-a1f5-c4d4b7390663 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (678) Sep 13 10:14:10.151111 kernel: BTRFS info (device dm-0): first mount of filesystem fbf3e737-db97-4ff7-a1f5-c4d4b7390663 Sep 13 10:14:10.151123 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 13 10:14:10.156528 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 13 10:14:10.156567 kernel: BTRFS info (device dm-0): enabling free space tree Sep 13 10:14:10.157767 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 13 10:14:10.158322 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 13 10:14:10.160517 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 13 10:14:10.161260 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Sep 13 10:14:10.162981 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 13 10:14:10.190288 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (711) Sep 13 10:14:10.192638 kernel: BTRFS info (device vda6): first mount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397 Sep 13 10:14:10.192668 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 10:14:10.195858 kernel: BTRFS info (device vda6): turning on async discard Sep 13 10:14:10.195887 kernel: BTRFS info (device vda6): enabling free space tree Sep 13 10:14:10.201270 kernel: BTRFS info (device vda6): last unmount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397 Sep 13 10:14:10.201499 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 13 10:14:10.202932 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 13 10:14:10.327255 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 10:14:10.341765 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 13 10:14:10.365574 ignition[752]: Ignition 2.22.0 Sep 13 10:14:10.365609 ignition[752]: Stage: fetch-offline Sep 13 10:14:10.365727 ignition[752]: no configs at "/usr/lib/ignition/base.d" Sep 13 10:14:10.365740 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 10:14:10.365915 ignition[752]: parsed url from cmdline: "" Sep 13 10:14:10.365920 ignition[752]: no config URL provided Sep 13 10:14:10.365925 ignition[752]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 10:14:10.365935 ignition[752]: no config at "/usr/lib/ignition/user.ign" Sep 13 10:14:10.365980 ignition[752]: op(1): [started] loading QEMU firmware config module Sep 13 10:14:10.365985 ignition[752]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 13 10:14:10.376792 ignition[752]: op(1): [finished] loading QEMU firmware config module Sep 13 10:14:10.392858 systemd-networkd[855]: lo: Link UP Sep 13 10:14:10.392868 systemd-networkd[855]: lo: Gained carrier Sep 13 10:14:10.394724 systemd-networkd[855]: Enumeration completed Sep 13 10:14:10.395119 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 10:14:10.395123 systemd-networkd[855]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 10:14:10.395318 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 10:14:10.395946 systemd[1]: Reached target network.target - Network. Sep 13 10:14:10.397859 systemd-networkd[855]: eth0: Link UP Sep 13 10:14:10.398115 systemd-networkd[855]: eth0: Gained carrier Sep 13 10:14:10.398125 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 13 10:14:10.419300 systemd-networkd[855]: eth0: DHCPv4 address 10.0.0.20/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 10:14:10.432366 ignition[752]: parsing config with SHA512: 1fafae0468c19294468b36435d7186845c84ccece189489895ff2593e689ca39fdb9d275dce82dbe78a10f24a7f2c1ff8b269c580eaa9c7a808d37b01f247bc4 Sep 13 10:14:10.442060 unknown[752]: fetched base config from "system" Sep 13 10:14:10.442074 unknown[752]: fetched user config from "qemu" Sep 13 10:14:10.442717 ignition[752]: fetch-offline: fetch-offline passed Sep 13 10:14:10.442912 ignition[752]: Ignition finished successfully Sep 13 10:14:10.446469 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 10:14:10.446818 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 13 10:14:10.447792 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 13 10:14:10.527818 ignition[863]: Ignition 2.22.0 Sep 13 10:14:10.527831 ignition[863]: Stage: kargs Sep 13 10:14:10.527956 ignition[863]: no configs at "/usr/lib/ignition/base.d" Sep 13 10:14:10.527967 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 10:14:10.528671 ignition[863]: kargs: kargs passed Sep 13 10:14:10.528715 ignition[863]: Ignition finished successfully Sep 13 10:14:10.536422 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 13 10:14:10.538500 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 13 10:14:10.740624 ignition[872]: Ignition 2.22.0 Sep 13 10:14:10.740636 ignition[872]: Stage: disks Sep 13 10:14:10.740796 ignition[872]: no configs at "/usr/lib/ignition/base.d" Sep 13 10:14:10.740808 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 10:14:10.741555 ignition[872]: disks: disks passed Sep 13 10:14:10.745806 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Sep 13 10:14:10.741610 ignition[872]: Ignition finished successfully Sep 13 10:14:10.746113 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 13 10:14:10.746535 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 13 10:14:10.746844 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 10:14:10.747163 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 10:14:10.747486 systemd[1]: Reached target basic.target - Basic System. Sep 13 10:14:10.748622 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 13 10:14:10.754601 systemd-resolved[260]: Detected conflict on linux IN A 10.0.0.20 Sep 13 10:14:10.754617 systemd-resolved[260]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. Sep 13 10:14:10.875016 systemd-fsck[883]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 13 10:14:10.883610 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 13 10:14:10.887800 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 13 10:14:11.013274 kernel: EXT4-fs (vda9): mounted filesystem 1fad58d4-1271-484a-aa8e-8f7f5dca764c r/w with ordered data mode. Quota mode: none. Sep 13 10:14:11.013746 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 13 10:14:11.015057 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 13 10:14:11.017923 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 10:14:11.018896 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 13 10:14:11.020565 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 13 10:14:11.020606 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Sep 13 10:14:11.020653 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 10:14:11.033477 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 13 10:14:11.035961 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 13 10:14:11.040035 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (891) Sep 13 10:14:11.040066 kernel: BTRFS info (device vda6): first mount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397 Sep 13 10:14:11.040263 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 10:14:11.043261 kernel: BTRFS info (device vda6): turning on async discard Sep 13 10:14:11.043285 kernel: BTRFS info (device vda6): enabling free space tree Sep 13 10:14:11.045386 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 13 10:14:11.087213 initrd-setup-root[915]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 10:14:11.092637 initrd-setup-root[922]: cut: /sysroot/etc/group: No such file or directory Sep 13 10:14:11.097930 initrd-setup-root[929]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 10:14:11.102361 initrd-setup-root[936]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 10:14:11.390973 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 13 10:14:11.392733 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 13 10:14:11.394272 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 13 10:14:11.440919 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 13 10:14:11.442488 kernel: BTRFS info (device vda6): last unmount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397 Sep 13 10:14:11.459563 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 13 10:14:11.477476 ignition[1005]: INFO : Ignition 2.22.0
Sep 13 10:14:11.477476 ignition[1005]: INFO : Stage: mount
Sep 13 10:14:11.479140 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 10:14:11.479140 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 10:14:11.479140 ignition[1005]: INFO : mount: mount passed
Sep 13 10:14:11.479140 ignition[1005]: INFO : Ignition finished successfully
Sep 13 10:14:11.485411 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 10:14:11.488212 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 10:14:11.523260 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 10:14:11.536787 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1017)
Sep 13 10:14:11.536825 kernel: BTRFS info (device vda6): first mount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397
Sep 13 10:14:11.536837 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 10:14:11.540755 kernel: BTRFS info (device vda6): turning on async discard
Sep 13 10:14:11.540774 kernel: BTRFS info (device vda6): enabling free space tree
Sep 13 10:14:11.542830 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 10:14:11.580939 ignition[1034]: INFO : Ignition 2.22.0
Sep 13 10:14:11.580939 ignition[1034]: INFO : Stage: files
Sep 13 10:14:11.582667 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 10:14:11.582667 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 10:14:11.582667 ignition[1034]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 10:14:11.585853 ignition[1034]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 10:14:11.585853 ignition[1034]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 10:14:11.589435 ignition[1034]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 10:14:11.591190 ignition[1034]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 10:14:11.592695 ignition[1034]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 10:14:11.591744 unknown[1034]: wrote ssh authorized keys file for user: core
Sep 13 10:14:11.596523 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 10:14:11.598705 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 13 10:14:11.661605 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 10:14:12.114295 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 10:14:12.114295 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 10:14:12.118130 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 13 10:14:12.239498 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 13 10:14:12.408150 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 10:14:12.408150 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 10:14:12.411751 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 10:14:12.411751 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 10:14:12.411751 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 10:14:12.411751 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 10:14:12.411751 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 10:14:12.411751 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 10:14:12.411751 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 10:14:12.424462 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 10:14:12.424462 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 10:14:12.424462 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 10:14:12.424462 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 10:14:12.424462 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 10:14:12.424462 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 13 10:14:12.413607 systemd-networkd[855]: eth0: Gained IPv6LL
Sep 13 10:14:12.895833 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 13 10:14:14.979566 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 10:14:14.979566 ignition[1034]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 13 10:14:14.983859 ignition[1034]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 10:14:14.986117 ignition[1034]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 10:14:14.986117 ignition[1034]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 13 10:14:14.986117 ignition[1034]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 13 10:14:14.990635 ignition[1034]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 10:14:14.990635 ignition[1034]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 10:14:14.990635 ignition[1034]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 13 10:14:14.990635 ignition[1034]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 13 10:14:15.072602 ignition[1034]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 10:14:15.077699 ignition[1034]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 10:14:15.079258 ignition[1034]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 13 10:14:15.079258 ignition[1034]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 10:14:15.079258 ignition[1034]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 10:14:15.079258 ignition[1034]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 10:14:15.079258 ignition[1034]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 10:14:15.079258 ignition[1034]: INFO : files: files passed
Sep 13 10:14:15.079258 ignition[1034]: INFO : Ignition finished successfully
Sep 13 10:14:15.082573 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 10:14:15.086062 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 10:14:15.088219 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 10:14:15.109847 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 10:14:15.109968 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 10:14:15.112836 initrd-setup-root-after-ignition[1063]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 13 10:14:15.115305 initrd-setup-root-after-ignition[1069]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 10:14:15.116986 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 10:14:15.116986 initrd-setup-root-after-ignition[1065]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 10:14:15.122687 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 10:14:15.122952 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 10:14:15.126583 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 10:14:15.201524 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 10:14:15.201685 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 10:14:15.203383 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 10:14:15.204934 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 10:14:15.206847 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 10:14:15.207872 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 10:14:15.227658 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 10:14:15.229869 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 10:14:15.264805 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 10:14:15.266100 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 10:14:15.268419 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 10:14:15.269599 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 10:14:15.269767 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 10:14:15.271768 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 10:14:15.275224 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 10:14:15.277843 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 10:14:15.278101 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 10:14:15.280098 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 10:14:15.280567 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 13 10:14:15.280885 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 10:14:15.281206 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 10:14:15.281773 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 10:14:15.290116 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 10:14:15.291981 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 10:14:15.292305 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 10:14:15.292483 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 10:14:15.298154 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 10:14:15.298326 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 10:14:15.298762 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 10:14:15.302270 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 10:14:15.303316 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 10:14:15.303445 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 10:14:15.304064 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 10:14:15.304200 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 10:14:15.309402 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 10:14:15.310408 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 10:14:15.315321 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 10:14:15.315480 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 10:14:15.318060 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 10:14:15.318538 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 10:14:15.318628 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 10:14:15.321377 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 10:14:15.321485 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 10:14:15.321831 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 10:14:15.321945 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 10:14:15.324680 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 10:14:15.324792 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 10:14:15.328529 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 10:14:15.334404 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 10:14:15.335315 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 10:14:15.335437 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 10:14:15.338485 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 10:14:15.340686 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 10:14:15.348320 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 10:14:15.348446 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 10:14:15.374974 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 10:14:15.397793 ignition[1090]: INFO : Ignition 2.22.0
Sep 13 10:14:15.397793 ignition[1090]: INFO : Stage: umount
Sep 13 10:14:15.399713 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 10:14:15.399713 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 10:14:15.399713 ignition[1090]: INFO : umount: umount passed
Sep 13 10:14:15.399713 ignition[1090]: INFO : Ignition finished successfully
Sep 13 10:14:15.403469 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 10:14:15.403628 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 10:14:15.405738 systemd[1]: Stopped target network.target - Network.
Sep 13 10:14:15.407302 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 10:14:15.407383 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 10:14:15.409049 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 10:14:15.409098 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 10:14:15.410919 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 10:14:15.410976 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 10:14:15.412816 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 10:14:15.412871 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 10:14:15.414894 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 10:14:15.417859 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 10:14:15.425379 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 10:14:15.426474 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 10:14:15.430754 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 13 10:14:15.430997 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 10:14:15.431144 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 10:14:15.434794 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 13 10:14:15.436260 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 13 10:14:15.438460 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 10:14:15.438546 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 10:14:15.441430 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 10:14:15.442358 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 10:14:15.442421 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 10:14:15.444516 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 10:14:15.444564 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 10:14:15.447964 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 10:14:15.448012 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 10:14:15.448909 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 10:14:15.448960 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 10:14:15.452739 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 10:14:15.457923 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 10:14:15.458010 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 13 10:14:15.465209 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 10:14:15.465473 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 10:14:15.466769 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 10:14:15.466828 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 10:14:15.468632 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 10:14:15.468676 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 10:14:15.468919 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 10:14:15.468979 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 10:14:15.473943 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 10:14:15.474011 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 10:14:15.476719 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 10:14:15.476786 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 10:14:15.479941 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 10:14:15.481635 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 13 10:14:15.481705 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 10:14:15.485062 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 10:14:15.485128 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 10:14:15.487444 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 13 10:14:15.487492 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 10:14:15.489657 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 10:14:15.489712 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 10:14:15.492129 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 10:14:15.492178 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 10:14:15.496561 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 13 10:14:15.496621 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 13 10:14:15.496670 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 13 10:14:15.496718 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 13 10:14:15.497107 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 10:14:15.497220 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 10:14:15.508572 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 10:14:15.508691 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 10:14:15.601152 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 10:14:15.601332 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 10:14:15.602660 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 10:14:15.604021 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 10:14:15.604102 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 10:14:15.607892 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 10:14:15.634334 systemd[1]: Switching root.
Sep 13 10:14:15.672726 systemd-journald[220]: Journal stopped
Sep 13 10:14:17.560934 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 13 10:14:17.561032 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 10:14:17.561050 kernel: SELinux: policy capability open_perms=1
Sep 13 10:14:17.561064 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 10:14:17.561078 kernel: SELinux: policy capability always_check_network=0
Sep 13 10:14:17.561092 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 10:14:17.561203 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 10:14:17.561218 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 10:14:17.561231 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 10:14:17.561270 kernel: SELinux: policy capability userspace_initial_context=0
Sep 13 10:14:17.561293 kernel: audit: type=1403 audit(1757758456.537:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 10:14:17.561314 systemd[1]: Successfully loaded SELinux policy in 65.766ms.
Sep 13 10:14:17.561339 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.674ms.
Sep 13 10:14:17.561355 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 13 10:14:17.561370 systemd[1]: Detected virtualization kvm.
Sep 13 10:14:17.561386 systemd[1]: Detected architecture x86-64.
Sep 13 10:14:17.561401 systemd[1]: Detected first boot.
Sep 13 10:14:17.561416 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 10:14:17.561429 kernel: Guest personality initialized and is inactive
Sep 13 10:14:17.561452 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 13 10:14:17.561465 kernel: Initialized host personality
Sep 13 10:14:17.561479 zram_generator::config[1135]: No configuration found.
Sep 13 10:14:17.561496 kernel: NET: Registered PF_VSOCK protocol family
Sep 13 10:14:17.561509 systemd[1]: Populated /etc with preset unit settings.
Sep 13 10:14:17.561526 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 13 10:14:17.561539 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 10:14:17.561555 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 13 10:14:17.561579 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 10:14:17.561596 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 10:14:17.561611 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 10:14:17.561625 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 10:14:17.561641 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 10:14:17.561656 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 10:14:17.561672 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 10:14:17.561687 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 10:14:17.561702 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 10:14:17.561724 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 10:14:17.561740 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 10:14:17.561757 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 10:14:17.561771 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 10:14:17.561787 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 10:14:17.561801 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 10:14:17.561816 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 13 10:14:17.561838 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 10:14:17.561854 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 10:14:17.561870 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 13 10:14:17.561890 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 13 10:14:17.561906 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 13 10:14:17.561926 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 10:14:17.561941 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 10:14:17.561955 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 10:14:17.561971 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 10:14:17.561986 systemd[1]: Reached target swap.target - Swaps.
Sep 13 10:14:17.562009 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 10:14:17.562025 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 10:14:17.562040 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 13 10:14:17.562055 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 10:14:17.562070 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 10:14:17.562085 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 10:14:17.562101 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 10:14:17.562115 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 10:14:17.562130 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 10:14:17.562152 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 10:14:17.562168 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:14:17.562183 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 10:14:17.562199 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 10:14:17.562215 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 10:14:17.562229 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 10:14:17.562280 systemd[1]: Reached target machines.target - Containers.
Sep 13 10:14:17.562297 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 10:14:17.562320 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 10:14:17.562336 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 10:14:17.562351 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 10:14:17.562365 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 10:14:17.562380 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 10:14:17.562393 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 10:14:17.562405 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 10:14:17.562418 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 10:14:17.562437 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 10:14:17.562450 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 10:14:17.562462 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 13 10:14:17.562475 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 10:14:17.562496 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 10:14:17.562509 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 10:14:17.562521 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 10:14:17.562533 kernel: fuse: init (API version 7.41)
Sep 13 10:14:17.562544 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 10:14:17.562562 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 10:14:17.562580 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 10:14:17.562592 kernel: loop: module loaded
Sep 13 10:14:17.562606 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 13 10:14:17.562622 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 10:14:17.562641 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 10:14:17.562653 systemd[1]: Stopped verity-setup.service.
Sep 13 10:14:17.562666 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:14:17.562678 kernel: ACPI: bus type drm_connector registered
Sep 13 10:14:17.562689 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 10:14:17.562707 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 10:14:17.562719 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 10:14:17.562731 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 10:14:17.562743 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 10:14:17.562755 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 10:14:17.562767 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 10:14:17.562811 systemd-journald[1211]: Collecting audit messages is disabled.
Sep 13 10:14:17.562836 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 10:14:17.562855 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 10:14:17.562873 systemd-journald[1211]: Journal started
Sep 13 10:14:17.562905 systemd-journald[1211]: Runtime Journal (/run/log/journal/e740e0cbac3a4543852ed1779dd918e8) is 6M, max 48.4M, 42.4M free.
Sep 13 10:14:17.209406 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 10:14:17.235524 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 13 10:14:17.236028 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 10:14:17.564995 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 10:14:17.566084 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 10:14:17.566352 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 10:14:17.567819 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 10:14:17.568059 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 10:14:17.569567 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 10:14:17.569802 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 10:14:17.571667 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 10:14:17.571905 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 10:14:17.573440 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 10:14:17.573678 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 10:14:17.575464 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 10:14:17.577189 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 10:14:17.579062 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 10:14:17.580806 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 13 10:14:17.598701 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 10:14:17.622068 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 10:14:17.625305 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 10:14:17.626643 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 10:14:17.626699 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 10:14:17.629183 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 13 10:14:17.641391 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 10:14:17.649198 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 10:14:17.650859 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 10:14:17.654411 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 10:14:17.655652 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 10:14:17.663771 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 10:14:17.665101 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 10:14:17.667894 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 10:14:17.669403 systemd-journald[1211]: Time spent on flushing to /var/log/journal/e740e0cbac3a4543852ed1779dd918e8 is 34.231ms for 1075 entries.
Sep 13 10:14:17.669403 systemd-journald[1211]: System Journal (/var/log/journal/e740e0cbac3a4543852ed1779dd918e8) is 8M, max 195.6M, 187.6M free.
Sep 13 10:14:17.711677 systemd-journald[1211]: Received client request to flush runtime journal.
Sep 13 10:14:17.711717 kernel: loop0: detected capacity change from 0 to 221472
Sep 13 10:14:17.671992 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 10:14:17.675491 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 10:14:17.680552 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 10:14:17.682536 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 10:14:17.683973 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 10:14:17.686669 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 10:14:17.697814 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 10:14:17.699473 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 10:14:17.700875 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 10:14:17.707637 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 13 10:14:17.718972 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 10:14:17.724423 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Sep 13 10:14:17.724439 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Sep 13 10:14:17.730608 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 10:14:17.733267 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 10:14:17.733998 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 10:14:17.754043 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 13 10:14:17.766270 kernel: loop1: detected capacity change from 0 to 110984
Sep 13 10:14:17.776483 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 10:14:17.779412 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 10:14:17.799710 kernel: loop2: detected capacity change from 0 to 128016
Sep 13 10:14:17.805944 systemd-tmpfiles[1277]: ACLs are not supported, ignoring.
Sep 13 10:14:17.806397 systemd-tmpfiles[1277]: ACLs are not supported, ignoring.
Sep 13 10:14:17.812674 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 10:14:17.829282 kernel: loop3: detected capacity change from 0 to 221472
Sep 13 10:14:17.838298 kernel: loop4: detected capacity change from 0 to 110984
Sep 13 10:14:17.850270 kernel: loop5: detected capacity change from 0 to 128016
Sep 13 10:14:17.859368 (sd-merge)[1282]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 13 10:14:17.860020 (sd-merge)[1282]: Merged extensions into '/usr'.
Sep 13 10:14:17.867066 systemd[1]: Reload requested from client PID 1253 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 10:14:17.867085 systemd[1]: Reloading...
Sep 13 10:14:17.961290 zram_generator::config[1308]: No configuration found.
Sep 13 10:14:18.085266 ldconfig[1248]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 10:14:18.181083 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 10:14:18.181537 systemd[1]: Reloading finished in 313 ms.
Sep 13 10:14:18.207654 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 10:14:18.209352 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 10:14:18.227795 systemd[1]: Starting ensure-sysext.service...
Sep 13 10:14:18.229752 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 10:14:18.244222 systemd[1]: Reload requested from client PID 1345 ('systemctl') (unit ensure-sysext.service)...
Sep 13 10:14:18.244272 systemd[1]: Reloading...
Sep 13 10:14:18.249495 systemd-tmpfiles[1346]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 13 10:14:18.249870 systemd-tmpfiles[1346]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 13 10:14:18.250334 systemd-tmpfiles[1346]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 10:14:18.250692 systemd-tmpfiles[1346]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 10:14:18.251685 systemd-tmpfiles[1346]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 10:14:18.252050 systemd-tmpfiles[1346]: ACLs are not supported, ignoring.
Sep 13 10:14:18.252186 systemd-tmpfiles[1346]: ACLs are not supported, ignoring.
Sep 13 10:14:18.256603 systemd-tmpfiles[1346]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 10:14:18.256614 systemd-tmpfiles[1346]: Skipping /boot
Sep 13 10:14:18.268507 systemd-tmpfiles[1346]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 10:14:18.268595 systemd-tmpfiles[1346]: Skipping /boot
Sep 13 10:14:18.308530 zram_generator::config[1379]: No configuration found.
Sep 13 10:14:18.583042 systemd[1]: Reloading finished in 338 ms.
Sep 13 10:14:18.594440 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 10:14:18.614071 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 10:14:18.625131 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 13 10:14:18.627739 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 10:14:18.630291 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 10:14:18.639435 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 10:14:18.647008 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 10:14:18.650938 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 10:14:18.657818 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:14:18.658447 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 10:14:18.660405 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 10:14:18.663749 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 10:14:18.673092 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 10:14:18.674516 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 10:14:18.674669 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 10:14:18.677634 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 10:14:18.678746 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:14:18.681038 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 10:14:18.683404 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 10:14:18.683857 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 10:14:18.685832 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 10:14:18.686607 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 10:14:18.693926 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 10:14:18.694381 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 10:14:18.703702 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:14:18.704020 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 10:14:18.705906 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 10:14:18.710462 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 10:14:18.723862 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 10:14:18.725056 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 10:14:18.725188 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 10:14:18.728720 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 10:14:18.729819 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:14:18.732030 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 10:14:18.734579 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 10:14:18.734872 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 10:14:18.742828 systemd-udevd[1417]: Using default interface naming scheme 'v255'.
Sep 13 10:14:18.744173 augenrules[1449]: No rules
Sep 13 10:14:18.746912 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 13 10:14:18.747692 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 13 10:14:18.749542 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 10:14:18.750203 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 10:14:18.752495 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 10:14:18.752809 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 10:14:18.754489 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 10:14:18.766544 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 10:14:18.768756 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 10:14:18.775396 systemd[1]: Finished ensure-sysext.service.
Sep 13 10:14:18.780862 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:14:18.781174 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 10:14:18.782919 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 10:14:18.785086 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 10:14:18.787479 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 10:14:18.787524 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 10:14:18.787576 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 10:14:18.793222 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 13 10:14:18.795166 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 10:14:18.795194 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:14:18.795392 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 10:14:18.804221 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 10:14:18.806889 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 10:14:18.807148 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 10:14:18.810340 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 10:14:18.814744 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 10:14:18.816366 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 10:14:18.882081 systemd-resolved[1415]: Positive Trust Anchors:
Sep 13 10:14:18.882098 systemd-resolved[1415]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 10:14:18.882128 systemd-resolved[1415]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 10:14:18.895064 systemd-resolved[1415]: Defaulting to hostname 'linux'.
Sep 13 10:14:18.895761 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 13 10:14:18.904685 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 10:14:18.906233 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 10:14:18.942003 systemd-networkd[1491]: lo: Link UP
Sep 13 10:14:18.942360 systemd-networkd[1491]: lo: Gained carrier
Sep 13 10:14:18.944549 systemd-networkd[1491]: Enumeration completed
Sep 13 10:14:18.944962 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 10:14:18.946193 systemd[1]: Reached target network.target - Network.
Sep 13 10:14:18.948533 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 13 10:14:18.949968 systemd-networkd[1491]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 10:14:18.950047 systemd-networkd[1491]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 10:14:18.951263 systemd-networkd[1491]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 10:14:18.951395 systemd-networkd[1491]: eth0: Link UP
Sep 13 10:14:18.951662 systemd-networkd[1491]: eth0: Gained carrier
Sep 13 10:14:18.952113 systemd-networkd[1491]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 10:14:18.953474 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 10:14:18.954741 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 13 10:14:18.957685 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 10:14:18.960402 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 10:14:18.961751 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 10:14:18.963112 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 13 10:14:18.970422 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 10:14:18.971866 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 10:14:18.971900 systemd[1]: Reached target paths.target - Path Units.
Sep 13 10:14:18.974337 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 10:14:18.975603 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 10:14:18.976835 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 10:14:18.978129 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 10:14:18.982451 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 10:14:18.985191 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 10:14:18.988311 systemd-networkd[1491]: eth0: DHCPv4 address 10.0.0.20/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 10:14:18.989026 systemd-timesyncd[1477]: Network configuration changed, trying to establish connection.
Sep 13 10:14:18.990569 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 13 10:14:18.992563 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 13 10:14:20.369013 systemd-timesyncd[1477]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 13 10:14:20.369076 systemd-timesyncd[1477]: Initial clock synchronization to Sat 2025-09-13 10:14:20.368909 UTC.
Sep 13 10:14:20.369118 systemd-resolved[1415]: Clock change detected. Flushing caches.
Sep 13 10:14:20.369431 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 13 10:14:20.375485 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 10:14:20.376890 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 13 10:14:20.379611 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 10:14:20.388515 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 10:14:20.390457 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 10:14:20.390544 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 10:14:20.391604 systemd[1]: Reached target basic.target - Basic System.
Sep 13 10:14:20.392876 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 10:14:20.392899 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 10:14:20.394025 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 10:14:20.396967 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 10:14:20.399205 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 10:14:20.403097 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 10:14:20.409943 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 10:14:20.411187 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 10:14:20.414893 jq[1519]: false
Sep 13 10:14:20.412305 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 13 10:14:20.414912 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 10:14:20.418923 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 13 10:14:20.424254 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 13 10:14:20.428460 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 13 10:14:20.431941 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Refreshing passwd entry cache
Sep 13 10:14:20.431949 oslogin_cache_refresh[1521]: Refreshing passwd entry cache
Sep 13 10:14:20.434877 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 10:14:20.437108 extend-filesystems[1520]: Found /dev/vda6
Sep 13 10:14:20.439414 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 13 10:14:20.440400 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 10:14:20.440880 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 10:14:20.441967 systemd[1]: Starting update-engine.service - Update Engine...
Sep 13 10:14:20.444700 extend-filesystems[1520]: Found /dev/vda9
Sep 13 10:14:20.446667 oslogin_cache_refresh[1521]: Failure getting users, quitting
Sep 13 10:14:20.446771 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Failure getting users, quitting
Sep 13 10:14:20.446771 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 13 10:14:20.446694 oslogin_cache_refresh[1521]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 13 10:14:20.451596 extend-filesystems[1520]: Checking size of /dev/vda9
Sep 13 10:14:20.446869 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 13 10:14:20.452674 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Refreshing group entry cache
Sep 13 10:14:20.446789 oslogin_cache_refresh[1521]: Refreshing group entry cache
Sep 13 10:14:20.449513 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 13 10:14:20.463919 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Failure getting groups, quitting
Sep 13 10:14:20.463919 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 13 10:14:20.464043 jq[1535]: true
Sep 13 10:14:20.453847 oslogin_cache_refresh[1521]: Failure getting groups, quitting
Sep 13 10:14:20.453956 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 10:14:20.453858 oslogin_cache_refresh[1521]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 13 10:14:20.455595 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 10:14:20.455864 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 13 10:14:20.456220 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 13 10:14:20.456479 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 13 10:14:20.459737 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 10:14:20.460261 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 13 10:14:20.484165 jq[1544]: true
Sep 13 10:14:20.493866 update_engine[1532]: I20250913 10:14:20.492981 1532 main.cc:92] Flatcar Update Engine starting
Sep 13 10:14:20.497422 extend-filesystems[1520]: Resized partition /dev/vda9
Sep 13 10:14:20.497834 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 13 10:14:20.500942 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 10:14:20.501688 extend-filesystems[1564]: resize2fs 1.47.3 (8-Jul-2025)
Sep 13 10:14:20.503489 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 13 10:14:20.505180 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 10:14:20.506723 (ntainerd)[1559]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 13 10:14:20.512692 tar[1542]: linux-amd64/helm
Sep 13 10:14:20.512969 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 13 10:14:20.521839 dbus-daemon[1517]: [system] SELinux support is enabled
Sep 13 10:14:20.522064 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 13 10:14:20.526369 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 10:14:20.526404 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 13 10:14:20.526924 kernel: ACPI: button: Power Button [PWRF]
Sep 13 10:14:20.528529 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 10:14:20.528554 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 13 10:14:20.530559 update_engine[1532]: I20250913 10:14:20.530249 1532 update_check_scheduler.cc:74] Next update check in 3m4s
Sep 13 10:14:20.530517 systemd[1]: Started update-engine.service - Update Engine.
Sep 13 10:14:20.533741 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 13 10:14:20.561896 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 13 10:14:20.572637 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 13 10:14:20.781790 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 13 10:14:20.782128 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 13 10:14:20.786921 sshd_keygen[1551]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 10:14:20.788016 extend-filesystems[1564]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 13 10:14:20.788016 extend-filesystems[1564]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 13 10:14:20.788016 extend-filesystems[1564]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 13 10:14:20.802782 extend-filesystems[1520]: Resized filesystem in /dev/vda9
Sep 13 10:14:20.804784 bash[1581]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 10:14:20.788445 systemd-logind[1531]: New seat seat0.
Sep 13 10:14:20.791784 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 10:14:20.792236 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 13 10:14:20.803388 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 13 10:14:20.806189 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 13 10:14:20.810296 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 13 10:14:20.887813 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 13 10:14:20.895213 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 13 10:14:20.901118 locksmithd[1580]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 10:14:20.916052 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 10:14:20.916676 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 13 10:14:20.920196 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 10:14:20.964114 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 10:14:21.021322 systemd-logind[1531]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 10:14:21.021373 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 10:14:21.022467 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 10:14:21.024504 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 10:14:21.025727 systemd-logind[1531]: Watching system buttons on /dev/input/event2 (Power Button) Sep 13 10:14:21.037816 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 10:14:21.042133 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 13 10:14:21.045165 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 10:14:21.050010 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 10:14:21.110704 kernel: kvm_amd: TSC scaling supported Sep 13 10:14:21.110837 kernel: kvm_amd: Nested Virtualization enabled Sep 13 10:14:21.110860 kernel: kvm_amd: Nested Paging enabled Sep 13 10:14:21.110874 kernel: kvm_amd: LBR virtualization supported Sep 13 10:14:21.115092 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 13 10:14:21.115593 kernel: kvm_amd: Virtual GIF supported Sep 13 10:14:21.151258 kernel: EDAC MC: Ver: 3.0.0 Sep 13 10:14:21.163590 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 13 10:14:21.237378 containerd[1559]: time="2025-09-13T10:14:21Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 13 10:14:21.238749 containerd[1559]: time="2025-09-13T10:14:21.238686921Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 13 10:14:21.254779 containerd[1559]: time="2025-09-13T10:14:21.252832349Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.09µs" Sep 13 10:14:21.254779 containerd[1559]: time="2025-09-13T10:14:21.252879077Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 13 10:14:21.254779 containerd[1559]: time="2025-09-13T10:14:21.252902691Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 13 10:14:21.254779 containerd[1559]: time="2025-09-13T10:14:21.253138443Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 13 10:14:21.254779 containerd[1559]: time="2025-09-13T10:14:21.253155395Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 13 10:14:21.254779 containerd[1559]: time="2025-09-13T10:14:21.253184209Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 13 10:14:21.254779 containerd[1559]: time="2025-09-13T10:14:21.253253619Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 13 10:14:21.254779 containerd[1559]: time="2025-09-13T10:14:21.253263929Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 13 
10:14:21.254779 containerd[1559]: time="2025-09-13T10:14:21.253570213Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 13 10:14:21.254779 containerd[1559]: time="2025-09-13T10:14:21.253583348Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 13 10:14:21.254779 containerd[1559]: time="2025-09-13T10:14:21.253599117Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 13 10:14:21.254779 containerd[1559]: time="2025-09-13T10:14:21.253618934Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 13 10:14:21.255046 containerd[1559]: time="2025-09-13T10:14:21.253742546Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 13 10:14:21.255046 containerd[1559]: time="2025-09-13T10:14:21.254019045Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 13 10:14:21.255046 containerd[1559]: time="2025-09-13T10:14:21.254053770Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 13 10:14:21.255046 containerd[1559]: time="2025-09-13T10:14:21.254063348Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 13 10:14:21.255046 containerd[1559]: time="2025-09-13T10:14:21.254102251Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 13 10:14:21.255046 
containerd[1559]: time="2025-09-13T10:14:21.254539461Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 13 10:14:21.255046 containerd[1559]: time="2025-09-13T10:14:21.254609231Z" level=info msg="metadata content store policy set" policy=shared Sep 13 10:14:21.261133 containerd[1559]: time="2025-09-13T10:14:21.261082411Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 13 10:14:21.261133 containerd[1559]: time="2025-09-13T10:14:21.261127065Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 13 10:14:21.261279 containerd[1559]: time="2025-09-13T10:14:21.261151781Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 13 10:14:21.261279 containerd[1559]: time="2025-09-13T10:14:21.261171789Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 13 10:14:21.261279 containerd[1559]: time="2025-09-13T10:14:21.261184663Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 13 10:14:21.261279 containerd[1559]: time="2025-09-13T10:14:21.261194291Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 13 10:14:21.261279 containerd[1559]: time="2025-09-13T10:14:21.261210992Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 13 10:14:21.261279 containerd[1559]: time="2025-09-13T10:14:21.261224517Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 13 10:14:21.261279 containerd[1559]: time="2025-09-13T10:14:21.261234977Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 13 10:14:21.261279 containerd[1559]: 
time="2025-09-13T10:14:21.261244064Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 13 10:14:21.261279 containerd[1559]: time="2025-09-13T10:14:21.261253081Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 13 10:14:21.261279 containerd[1559]: time="2025-09-13T10:14:21.261265494Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 13 10:14:21.261477 containerd[1559]: time="2025-09-13T10:14:21.261379799Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 13 10:14:21.261477 containerd[1559]: time="2025-09-13T10:14:21.261401089Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 13 10:14:21.261477 containerd[1559]: time="2025-09-13T10:14:21.261416998Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 13 10:14:21.261477 containerd[1559]: time="2025-09-13T10:14:21.261435223Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 13 10:14:21.261477 containerd[1559]: time="2025-09-13T10:14:21.261445732Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 13 10:14:21.261477 containerd[1559]: time="2025-09-13T10:14:21.261456032Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 13 10:14:21.261477 containerd[1559]: time="2025-09-13T10:14:21.261468415Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 13 10:14:21.261610 containerd[1559]: time="2025-09-13T10:14:21.261481680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 13 10:14:21.261610 containerd[1559]: time="2025-09-13T10:14:21.261509121Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 13 10:14:21.261610 containerd[1559]: time="2025-09-13T10:14:21.261520693Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 13 10:14:21.261610 containerd[1559]: time="2025-09-13T10:14:21.261530942Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 13 10:14:21.261685 containerd[1559]: time="2025-09-13T10:14:21.261601084Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 13 10:14:21.261685 containerd[1559]: time="2025-09-13T10:14:21.261663992Z" level=info msg="Start snapshots syncer" Sep 13 10:14:21.261685 containerd[1559]: time="2025-09-13T10:14:21.261683809Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 13 10:14:21.262048 containerd[1559]: time="2025-09-13T10:14:21.261991776Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 13 10:14:21.262227 containerd[1559]: time="2025-09-13T10:14:21.262057299Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 13 10:14:21.263778 containerd[1559]: time="2025-09-13T10:14:21.263738131Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 13 10:14:21.263926 containerd[1559]: time="2025-09-13T10:14:21.263897731Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 13 10:14:21.263926 containerd[1559]: time="2025-09-13T10:14:21.263923198Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 13 10:14:21.263972 containerd[1559]: time="2025-09-13T10:14:21.263933548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 13 10:14:21.263972 containerd[1559]: time="2025-09-13T10:14:21.263944739Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 13 10:14:21.263972 containerd[1559]: time="2025-09-13T10:14:21.263955359Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 13 10:14:21.264025 containerd[1559]: time="2025-09-13T10:14:21.263975527Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 13 10:14:21.264025 containerd[1559]: time="2025-09-13T10:14:21.263987369Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 13 10:14:21.264025 containerd[1559]: time="2025-09-13T10:14:21.264012606Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 13 10:14:21.264025 containerd[1559]: time="2025-09-13T10:14:21.264023186Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 13 10:14:21.264168 containerd[1559]: time="2025-09-13T10:14:21.264042021Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 13 10:14:21.264168 containerd[1559]: time="2025-09-13T10:14:21.264075103Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 13 10:14:21.264168 containerd[1559]: time="2025-09-13T10:14:21.264088559Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 13 10:14:21.264168 containerd[1559]: time="2025-09-13T10:14:21.264096584Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 13 10:14:21.264168 containerd[1559]: time="2025-09-13T10:14:21.264105470Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 13 10:14:21.264168 containerd[1559]: time="2025-09-13T10:14:21.264113235Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 13 10:14:21.264168 containerd[1559]: time="2025-09-13T10:14:21.264124556Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 13 10:14:21.264168 containerd[1559]: time="2025-09-13T10:14:21.264137711Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 13 10:14:21.264168 containerd[1559]: time="2025-09-13T10:14:21.264157648Z" level=info msg="runtime interface created" Sep 13 10:14:21.264168 containerd[1559]: time="2025-09-13T10:14:21.264163880Z" level=info msg="created NRI interface" Sep 13 10:14:21.264168 containerd[1559]: time="2025-09-13T10:14:21.264172125Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 13 10:14:21.264689 containerd[1559]: time="2025-09-13T10:14:21.264183116Z" level=info msg="Connect containerd service" Sep 13 10:14:21.264689 containerd[1559]: time="2025-09-13T10:14:21.264204666Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 10:14:21.265075 
containerd[1559]: time="2025-09-13T10:14:21.265043389Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 10:14:21.378041 tar[1542]: linux-amd64/LICENSE Sep 13 10:14:21.378223 tar[1542]: linux-amd64/README.md Sep 13 10:14:21.409015 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 10:14:21.563582 containerd[1559]: time="2025-09-13T10:14:21.563455508Z" level=info msg="Start subscribing containerd event" Sep 13 10:14:21.563738 containerd[1559]: time="2025-09-13T10:14:21.563629355Z" level=info msg="Start recovering state" Sep 13 10:14:21.563988 containerd[1559]: time="2025-09-13T10:14:21.563940478Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 10:14:21.563988 containerd[1559]: time="2025-09-13T10:14:21.563971105Z" level=info msg="Start event monitor" Sep 13 10:14:21.564082 containerd[1559]: time="2025-09-13T10:14:21.564007053Z" level=info msg="Start cni network conf syncer for default" Sep 13 10:14:21.564082 containerd[1559]: time="2025-09-13T10:14:21.564020708Z" level=info msg="Start streaming server" Sep 13 10:14:21.564082 containerd[1559]: time="2025-09-13T10:14:21.564031899Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 10:14:21.564082 containerd[1559]: time="2025-09-13T10:14:21.564052298Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 13 10:14:21.564154 containerd[1559]: time="2025-09-13T10:14:21.564067997Z" level=info msg="runtime interface starting up..." Sep 13 10:14:21.564154 containerd[1559]: time="2025-09-13T10:14:21.564140022Z" level=info msg="starting plugins..." 
Sep 13 10:14:21.564200 containerd[1559]: time="2025-09-13T10:14:21.564158016Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 13 10:14:21.564413 containerd[1559]: time="2025-09-13T10:14:21.564388057Z" level=info msg="containerd successfully booted in 0.327863s" Sep 13 10:14:21.564610 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 10:14:22.300146 systemd-networkd[1491]: eth0: Gained IPv6LL Sep 13 10:14:22.303479 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 10:14:22.305363 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 10:14:22.308009 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 13 10:14:22.310554 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:14:22.331540 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 10:14:22.357977 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 10:14:22.360680 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 13 10:14:22.361002 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 13 10:14:22.364423 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 10:14:22.737535 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 10:14:22.740215 systemd[1]: Started sshd@0-10.0.0.20:22-10.0.0.1:59954.service - OpenSSH per-connection server daemon (10.0.0.1:59954). Sep 13 10:14:22.845002 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 59954 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:14:22.847322 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:14:22.854949 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Sep 13 10:14:22.857265 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 10:14:22.865169 systemd-logind[1531]: New session 1 of user core. Sep 13 10:14:22.884999 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 10:14:22.889853 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 10:14:22.913515 (systemd)[1681]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 10:14:22.916396 systemd-logind[1531]: New session c1 of user core. Sep 13 10:14:23.070734 systemd[1681]: Queued start job for default target default.target. Sep 13 10:14:23.088299 systemd[1681]: Created slice app.slice - User Application Slice. Sep 13 10:14:23.088332 systemd[1681]: Reached target paths.target - Paths. Sep 13 10:14:23.088382 systemd[1681]: Reached target timers.target - Timers. Sep 13 10:14:23.090079 systemd[1681]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 10:14:23.104607 systemd[1681]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 10:14:23.104792 systemd[1681]: Reached target sockets.target - Sockets. Sep 13 10:14:23.104844 systemd[1681]: Reached target basic.target - Basic System. Sep 13 10:14:23.104886 systemd[1681]: Reached target default.target - Main User Target. Sep 13 10:14:23.104923 systemd[1681]: Startup finished in 180ms. Sep 13 10:14:23.105336 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 10:14:23.108289 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 10:14:23.225850 systemd[1]: Started sshd@1-10.0.0.20:22-10.0.0.1:59960.service - OpenSSH per-connection server daemon (10.0.0.1:59960). 
Sep 13 10:14:23.290245 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 59960 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:14:23.291898 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:14:23.296548 systemd-logind[1531]: New session 2 of user core. Sep 13 10:14:23.306885 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 10:14:23.364858 sshd[1695]: Connection closed by 10.0.0.1 port 59960 Sep 13 10:14:23.366312 sshd-session[1692]: pam_unix(sshd:session): session closed for user core Sep 13 10:14:23.375563 systemd[1]: sshd@1-10.0.0.20:22-10.0.0.1:59960.service: Deactivated successfully. Sep 13 10:14:23.377680 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 10:14:23.378395 systemd-logind[1531]: Session 2 logged out. Waiting for processes to exit. Sep 13 10:14:23.381577 systemd[1]: Started sshd@2-10.0.0.20:22-10.0.0.1:59972.service - OpenSSH per-connection server daemon (10.0.0.1:59972). Sep 13 10:14:23.383851 systemd-logind[1531]: Removed session 2. Sep 13 10:14:23.437015 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 59972 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:14:23.438380 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:14:23.442897 systemd-logind[1531]: New session 3 of user core. Sep 13 10:14:23.450919 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 10:14:23.635501 sshd[1706]: Connection closed by 10.0.0.1 port 59972 Sep 13 10:14:23.635987 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Sep 13 10:14:23.641409 systemd[1]: sshd@2-10.0.0.20:22-10.0.0.1:59972.service: Deactivated successfully. Sep 13 10:14:23.645340 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 10:14:23.646174 systemd-logind[1531]: Session 3 logged out. Waiting for processes to exit. 
Sep 13 10:14:23.649140 systemd-logind[1531]: Removed session 3. Sep 13 10:14:23.669072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:14:23.671115 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 10:14:23.672534 systemd[1]: Startup finished in 3.938s (kernel) + 8.885s (initrd) + 5.824s (userspace) = 18.648s. Sep 13 10:14:23.676949 (kubelet)[1714]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 10:14:24.371538 kubelet[1714]: E0913 10:14:24.371435 1714 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 10:14:24.375842 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 10:14:24.376067 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 10:14:24.376506 systemd[1]: kubelet.service: Consumed 1.846s CPU time, 266.6M memory peak. Sep 13 10:14:33.650153 systemd[1]: Started sshd@3-10.0.0.20:22-10.0.0.1:35342.service - OpenSSH per-connection server daemon (10.0.0.1:35342). Sep 13 10:14:33.708685 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 35342 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:14:33.710550 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:14:33.715983 systemd-logind[1531]: New session 4 of user core. Sep 13 10:14:33.734044 systemd[1]: Started session-4.scope - Session 4 of User core. 
Sep 13 10:14:33.790376 sshd[1730]: Connection closed by 10.0.0.1 port 35342 Sep 13 10:14:33.790848 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Sep 13 10:14:33.801639 systemd[1]: sshd@3-10.0.0.20:22-10.0.0.1:35342.service: Deactivated successfully. Sep 13 10:14:33.804181 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 10:14:33.805105 systemd-logind[1531]: Session 4 logged out. Waiting for processes to exit. Sep 13 10:14:33.808889 systemd[1]: Started sshd@4-10.0.0.20:22-10.0.0.1:35354.service - OpenSSH per-connection server daemon (10.0.0.1:35354). Sep 13 10:14:33.809637 systemd-logind[1531]: Removed session 4. Sep 13 10:14:33.865310 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 35354 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:14:33.866866 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:14:33.871540 systemd-logind[1531]: New session 5 of user core. Sep 13 10:14:33.888878 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 10:14:33.938829 sshd[1739]: Connection closed by 10.0.0.1 port 35354 Sep 13 10:14:33.939461 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Sep 13 10:14:33.954420 systemd[1]: sshd@4-10.0.0.20:22-10.0.0.1:35354.service: Deactivated successfully. Sep 13 10:14:33.956476 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 10:14:33.957329 systemd-logind[1531]: Session 5 logged out. Waiting for processes to exit. Sep 13 10:14:33.960228 systemd[1]: Started sshd@5-10.0.0.20:22-10.0.0.1:35360.service - OpenSSH per-connection server daemon (10.0.0.1:35360). Sep 13 10:14:33.960977 systemd-logind[1531]: Removed session 5. 
Sep 13 10:14:34.013131 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 35360 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:14:34.014379 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:14:34.018549 systemd-logind[1531]: New session 6 of user core. Sep 13 10:14:34.031873 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 10:14:34.084812 sshd[1748]: Connection closed by 10.0.0.1 port 35360 Sep 13 10:14:34.085201 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Sep 13 10:14:34.097108 systemd[1]: sshd@5-10.0.0.20:22-10.0.0.1:35360.service: Deactivated successfully. Sep 13 10:14:34.098744 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 10:14:34.099472 systemd-logind[1531]: Session 6 logged out. Waiting for processes to exit. Sep 13 10:14:34.101927 systemd[1]: Started sshd@6-10.0.0.20:22-10.0.0.1:35364.service - OpenSSH per-connection server daemon (10.0.0.1:35364). Sep 13 10:14:34.102503 systemd-logind[1531]: Removed session 6. Sep 13 10:14:34.165236 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 35364 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:14:34.166484 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:14:34.170400 systemd-logind[1531]: New session 7 of user core. Sep 13 10:14:34.179892 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 13 10:14:34.236944 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 10:14:34.237264 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 10:14:34.254439 sudo[1758]: pam_unix(sudo:session): session closed for user root Sep 13 10:14:34.256151 sshd[1757]: Connection closed by 10.0.0.1 port 35364 Sep 13 10:14:34.256516 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Sep 13 10:14:34.268230 systemd[1]: sshd@6-10.0.0.20:22-10.0.0.1:35364.service: Deactivated successfully. Sep 13 10:14:34.269981 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 10:14:34.270672 systemd-logind[1531]: Session 7 logged out. Waiting for processes to exit. Sep 13 10:14:34.273341 systemd[1]: Started sshd@7-10.0.0.20:22-10.0.0.1:35378.service - OpenSSH per-connection server daemon (10.0.0.1:35378). Sep 13 10:14:34.273924 systemd-logind[1531]: Removed session 7. Sep 13 10:14:34.321596 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 35378 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:14:34.322942 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:14:34.327347 systemd-logind[1531]: New session 8 of user core. Sep 13 10:14:34.340896 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 10:14:34.393659 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 10:14:34.393989 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 10:14:34.394868 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 10:14:34.396252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 13 10:14:34.400997 sudo[1769]: pam_unix(sudo:session): session closed for user root Sep 13 10:14:34.407556 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 13 10:14:34.407879 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 10:14:34.417888 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 13 10:14:34.454985 augenrules[1794]: No rules Sep 13 10:14:34.456927 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 10:14:34.457221 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 13 10:14:34.458588 sudo[1768]: pam_unix(sudo:session): session closed for user root Sep 13 10:14:34.460203 sshd[1767]: Connection closed by 10.0.0.1 port 35378 Sep 13 10:14:34.461451 sshd-session[1764]: pam_unix(sshd:session): session closed for user core Sep 13 10:14:34.473712 systemd[1]: sshd@7-10.0.0.20:22-10.0.0.1:35378.service: Deactivated successfully. Sep 13 10:14:34.475850 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 10:14:34.476655 systemd-logind[1531]: Session 8 logged out. Waiting for processes to exit. Sep 13 10:14:34.479987 systemd[1]: Started sshd@8-10.0.0.20:22-10.0.0.1:35390.service - OpenSSH per-connection server daemon (10.0.0.1:35390). Sep 13 10:14:34.480782 systemd-logind[1531]: Removed session 8. Sep 13 10:14:34.527701 sshd[1803]: Accepted publickey for core from 10.0.0.1 port 35390 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:14:34.529328 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:14:34.533853 systemd-logind[1531]: New session 9 of user core. Sep 13 10:14:34.540897 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 13 10:14:34.593889 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 10:14:34.594207 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 10:14:34.663790 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:14:34.669789 (kubelet)[1822]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 10:14:34.739820 kubelet[1822]: E0913 10:14:34.739165 1822 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 10:14:34.756605 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 10:14:34.756838 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 10:14:34.757267 systemd[1]: kubelet.service: Consumed 331ms CPU time, 116.7M memory peak. Sep 13 10:14:35.116566 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 13 10:14:35.143434 (dockerd)[1840]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 10:14:35.516241 dockerd[1840]: time="2025-09-13T10:14:35.516105638Z" level=info msg="Starting up" Sep 13 10:14:35.516896 dockerd[1840]: time="2025-09-13T10:14:35.516875091Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 13 10:14:35.536877 dockerd[1840]: time="2025-09-13T10:14:35.536824263Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 13 10:14:35.878714 dockerd[1840]: time="2025-09-13T10:14:35.878654867Z" level=info msg="Loading containers: start." Sep 13 10:14:35.887785 kernel: Initializing XFRM netlink socket Sep 13 10:14:36.158969 systemd-networkd[1491]: docker0: Link UP Sep 13 10:14:36.164987 dockerd[1840]: time="2025-09-13T10:14:36.164937109Z" level=info msg="Loading containers: done." Sep 13 10:14:36.180725 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1497895023-merged.mount: Deactivated successfully. 
Sep 13 10:14:36.181381 dockerd[1840]: time="2025-09-13T10:14:36.181332375Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 10:14:36.181456 dockerd[1840]: time="2025-09-13T10:14:36.181436190Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 13 10:14:36.181586 dockerd[1840]: time="2025-09-13T10:14:36.181562196Z" level=info msg="Initializing buildkit" Sep 13 10:14:36.212641 dockerd[1840]: time="2025-09-13T10:14:36.212596056Z" level=info msg="Completed buildkit initialization" Sep 13 10:14:36.217800 dockerd[1840]: time="2025-09-13T10:14:36.217730705Z" level=info msg="Daemon has completed initialization" Sep 13 10:14:36.217911 dockerd[1840]: time="2025-09-13T10:14:36.217809513Z" level=info msg="API listen on /run/docker.sock" Sep 13 10:14:36.217982 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 10:14:37.146176 containerd[1559]: time="2025-09-13T10:14:37.146111992Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 10:14:37.755049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2934570237.mount: Deactivated successfully. 
Sep 13 10:14:39.120417 containerd[1559]: time="2025-09-13T10:14:39.120345654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:39.120840 containerd[1559]: time="2025-09-13T10:14:39.120789687Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 13 10:14:39.121993 containerd[1559]: time="2025-09-13T10:14:39.121950454Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:39.124745 containerd[1559]: time="2025-09-13T10:14:39.124717744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:39.125797 containerd[1559]: time="2025-09-13T10:14:39.125766450Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 1.979570572s" Sep 13 10:14:39.125838 containerd[1559]: time="2025-09-13T10:14:39.125800314Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 13 10:14:39.126427 containerd[1559]: time="2025-09-13T10:14:39.126384309Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 10:14:40.882377 containerd[1559]: time="2025-09-13T10:14:40.882301220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:40.883496 containerd[1559]: time="2025-09-13T10:14:40.883407294Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 13 10:14:40.884967 containerd[1559]: time="2025-09-13T10:14:40.884936672Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:40.887903 containerd[1559]: time="2025-09-13T10:14:40.887858051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:40.888963 containerd[1559]: time="2025-09-13T10:14:40.888911156Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.762496209s" Sep 13 10:14:40.889028 containerd[1559]: time="2025-09-13T10:14:40.888972531Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 13 10:14:40.889587 containerd[1559]: time="2025-09-13T10:14:40.889557939Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 10:14:42.755894 containerd[1559]: time="2025-09-13T10:14:42.755791593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:42.756641 containerd[1559]: 
time="2025-09-13T10:14:42.756574632Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 13 10:14:42.758207 containerd[1559]: time="2025-09-13T10:14:42.758163762Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:42.762313 containerd[1559]: time="2025-09-13T10:14:42.762251227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:42.763472 containerd[1559]: time="2025-09-13T10:14:42.763415501Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.873822817s" Sep 13 10:14:42.763472 containerd[1559]: time="2025-09-13T10:14:42.763464753Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 13 10:14:42.764067 containerd[1559]: time="2025-09-13T10:14:42.764041705Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 10:14:43.976485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount34921259.mount: Deactivated successfully. Sep 13 10:14:44.863129 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 10:14:44.864957 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:14:45.492074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 13 10:14:45.510128 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 10:14:45.591741 containerd[1559]: time="2025-09-13T10:14:45.591654458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:45.592828 containerd[1559]: time="2025-09-13T10:14:45.592779949Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 13 10:14:45.594350 containerd[1559]: time="2025-09-13T10:14:45.594279381Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:45.596741 containerd[1559]: time="2025-09-13T10:14:45.596709058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:45.597524 containerd[1559]: time="2025-09-13T10:14:45.597347375Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 2.833272728s" Sep 13 10:14:45.597524 containerd[1559]: time="2025-09-13T10:14:45.597398931Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 13 10:14:45.598131 containerd[1559]: time="2025-09-13T10:14:45.598016830Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 10:14:45.618497 kubelet[2140]: E0913 
10:14:45.618422 2140 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 10:14:45.622868 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 10:14:45.623116 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 10:14:45.623515 systemd[1]: kubelet.service: Consumed 287ms CPU time, 111.3M memory peak. Sep 13 10:14:46.903873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3488597610.mount: Deactivated successfully. Sep 13 10:14:47.837551 containerd[1559]: time="2025-09-13T10:14:47.837476106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:47.838474 containerd[1559]: time="2025-09-13T10:14:47.838417150Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 13 10:14:47.839568 containerd[1559]: time="2025-09-13T10:14:47.839531260Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:47.842352 containerd[1559]: time="2025-09-13T10:14:47.842279143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:47.843279 containerd[1559]: time="2025-09-13T10:14:47.843243542Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag 
\"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.245200292s" Sep 13 10:14:47.843279 containerd[1559]: time="2025-09-13T10:14:47.843276073Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 10:14:47.843828 containerd[1559]: time="2025-09-13T10:14:47.843796980Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 10:14:48.290963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1932862575.mount: Deactivated successfully. Sep 13 10:14:48.298803 containerd[1559]: time="2025-09-13T10:14:48.298704383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 10:14:48.299481 containerd[1559]: time="2025-09-13T10:14:48.299445283Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 13 10:14:48.300770 containerd[1559]: time="2025-09-13T10:14:48.300711087Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 10:14:48.303048 containerd[1559]: time="2025-09-13T10:14:48.302999729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 10:14:48.303771 containerd[1559]: time="2025-09-13T10:14:48.303708818Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 459.88074ms" Sep 13 10:14:48.303771 containerd[1559]: time="2025-09-13T10:14:48.303743503Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 10:14:48.304347 containerd[1559]: time="2025-09-13T10:14:48.304319914Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 10:14:48.952260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2544572180.mount: Deactivated successfully. Sep 13 10:14:51.608179 containerd[1559]: time="2025-09-13T10:14:51.608095433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:51.608886 containerd[1559]: time="2025-09-13T10:14:51.608800946Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 13 10:14:51.610030 containerd[1559]: time="2025-09-13T10:14:51.609986289Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:51.613239 containerd[1559]: time="2025-09-13T10:14:51.613196259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:51.614695 containerd[1559]: time="2025-09-13T10:14:51.614661677Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest 
\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.310312227s" Sep 13 10:14:51.614695 containerd[1559]: time="2025-09-13T10:14:51.614693286Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 13 10:14:54.889675 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:14:54.889925 systemd[1]: kubelet.service: Consumed 287ms CPU time, 111.3M memory peak. Sep 13 10:14:54.892168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:14:54.926081 systemd[1]: Reload requested from client PID 2289 ('systemctl') (unit session-9.scope)... Sep 13 10:14:54.926098 systemd[1]: Reloading... Sep 13 10:14:55.025805 zram_generator::config[2337]: No configuration found. Sep 13 10:14:55.527431 systemd[1]: Reloading finished in 600 ms. Sep 13 10:14:55.593456 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 10:14:55.593557 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 10:14:55.593881 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:14:55.593925 systemd[1]: kubelet.service: Consumed 155ms CPU time, 98.3M memory peak. Sep 13 10:14:55.597240 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:14:55.780493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:14:55.785448 (kubelet)[2380]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 10:14:55.965581 kubelet[2380]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 10:14:55.965581 kubelet[2380]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 10:14:55.965581 kubelet[2380]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 10:14:55.966109 kubelet[2380]: I0913 10:14:55.965679 2380 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 10:14:56.217442 kubelet[2380]: I0913 10:14:56.217364 2380 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 10:14:56.217442 kubelet[2380]: I0913 10:14:56.217411 2380 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 10:14:56.217837 kubelet[2380]: I0913 10:14:56.217807 2380 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 10:14:56.259961 kubelet[2380]: I0913 10:14:56.259876 2380 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 10:14:56.262470 kubelet[2380]: E0913 10:14:56.262413 2380 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Sep 13 10:14:56.311927 kubelet[2380]: I0913 10:14:56.311882 2380 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 13 10:14:56.323101 kubelet[2380]: I0913 10:14:56.323049 2380 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 10:14:56.324146 kubelet[2380]: I0913 10:14:56.324117 2380 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 10:14:56.324368 kubelet[2380]: I0913 10:14:56.324308 2380 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 10:14:56.324565 kubelet[2380]: I0913 10:14:56.324342 2380 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Sep 13 10:14:56.324718 kubelet[2380]: I0913 10:14:56.324581 2380 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 10:14:56.324718 kubelet[2380]: I0913 10:14:56.324591 2380 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 10:14:56.324796 kubelet[2380]: I0913 10:14:56.324750 2380 state_mem.go:36] "Initialized new in-memory state store" Sep 13 10:14:56.336223 kubelet[2380]: I0913 10:14:56.336163 2380 kubelet.go:408] "Attempting to sync node with API server" Sep 13 10:14:56.336223 kubelet[2380]: I0913 10:14:56.336216 2380 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 10:14:56.336313 kubelet[2380]: I0913 10:14:56.336278 2380 kubelet.go:314] "Adding apiserver pod source" Sep 13 10:14:56.336346 kubelet[2380]: I0913 10:14:56.336318 2380 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 10:14:56.338060 kubelet[2380]: W0913 10:14:56.337956 2380 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Sep 13 10:14:56.338060 kubelet[2380]: E0913 10:14:56.338055 2380 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Sep 13 10:14:56.338060 kubelet[2380]: W0913 10:14:56.337956 2380 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Sep 13 10:14:56.338293 kubelet[2380]: E0913 
10:14:56.338093 2380 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Sep 13 10:14:56.344799 kubelet[2380]: I0913 10:14:56.344776 2380 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 13 10:14:56.345344 kubelet[2380]: I0913 10:14:56.345313 2380 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 10:14:56.345405 kubelet[2380]: W0913 10:14:56.345393 2380 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 10:14:56.349334 kubelet[2380]: I0913 10:14:56.349288 2380 server.go:1274] "Started kubelet" Sep 13 10:14:56.349872 kubelet[2380]: I0913 10:14:56.349820 2380 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 10:14:56.350371 kubelet[2380]: I0913 10:14:56.350343 2380 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 10:14:56.350474 kubelet[2380]: I0913 10:14:56.350443 2380 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 10:14:56.351094 kubelet[2380]: I0913 10:14:56.351071 2380 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 10:14:56.351728 kubelet[2380]: I0913 10:14:56.351691 2380 server.go:449] "Adding debug handlers to kubelet server" Sep 13 10:14:56.357657 kubelet[2380]: E0913 10:14:56.356469 2380 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" 
event="&Event{ObjectMeta:{localhost.1864d00a17d26dcc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 10:14:56.349253068 +0000 UTC m=+0.559590225,LastTimestamp:2025-09-13 10:14:56.349253068 +0000 UTC m=+0.559590225,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 10:14:56.358440 kubelet[2380]: I0913 10:14:56.358387 2380 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 10:14:56.360118 kubelet[2380]: I0913 10:14:56.360098 2380 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 10:14:56.360388 kubelet[2380]: E0913 10:14:56.360364 2380 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:14:56.361507 kubelet[2380]: E0913 10:14:56.361452 2380 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="200ms" Sep 13 10:14:56.361582 kubelet[2380]: I0913 10:14:56.361521 2380 reconciler.go:26] "Reconciler: start to sync state" Sep 13 10:14:56.361582 kubelet[2380]: I0913 10:14:56.361565 2380 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 10:14:56.362075 kubelet[2380]: I0913 10:14:56.362013 2380 factory.go:221] Registration of the systemd container factory successfully Sep 13 10:14:56.362568 kubelet[2380]: I0913 10:14:56.362132 2380 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 10:14:56.362568 kubelet[2380]: W0913 10:14:56.362402 2380 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Sep 13 10:14:56.362568 kubelet[2380]: E0913 10:14:56.362444 2380 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 10:14:56.363046 kubelet[2380]: E0913 10:14:56.363009 2380 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Sep 13 10:14:56.363602 kubelet[2380]: I0913 10:14:56.363557 2380 factory.go:221] Registration of the containerd container factory successfully Sep 13 10:14:56.377481 kubelet[2380]: I0913 10:14:56.377413 2380 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 10:14:56.381185 kubelet[2380]: I0913 10:14:56.381133 2380 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 10:14:56.381185 kubelet[2380]: I0913 10:14:56.381183 2380 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 10:14:56.381277 kubelet[2380]: I0913 10:14:56.381225 2380 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 10:14:56.381329 kubelet[2380]: E0913 10:14:56.381278 2380 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 10:14:56.383011 kubelet[2380]: W0913 10:14:56.382006 2380 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Sep 13 10:14:56.383011 kubelet[2380]: E0913 10:14:56.382053 2380 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Sep 13 10:14:56.383148 kubelet[2380]: I0913 10:14:56.383120 2380 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 10:14:56.383198 kubelet[2380]: I0913 10:14:56.383141 2380 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 10:14:56.383286 kubelet[2380]: I0913 10:14:56.383210 2380 state_mem.go:36] "Initialized new in-memory state store" Sep 13 10:14:56.460883 kubelet[2380]: E0913 10:14:56.460819 2380 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:14:56.482370 kubelet[2380]: E0913 10:14:56.482275 2380 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 10:14:56.561801 kubelet[2380]: E0913 10:14:56.561703 2380 kubelet_node_status.go:453] 
"Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:14:56.562429 kubelet[2380]: E0913 10:14:56.562379 2380 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="400ms" Sep 13 10:14:56.662508 kubelet[2380]: E0913 10:14:56.662464 2380 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:14:56.682842 kubelet[2380]: E0913 10:14:56.682793 2380 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 10:14:56.763453 kubelet[2380]: E0913 10:14:56.763366 2380 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:14:56.863969 kubelet[2380]: I0913 10:14:56.863943 2380 policy_none.go:49] "None policy: Start" Sep 13 10:14:56.864115 kubelet[2380]: E0913 10:14:56.864073 2380 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:14:56.864739 kubelet[2380]: I0913 10:14:56.864717 2380 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 10:14:56.864823 kubelet[2380]: I0913 10:14:56.864749 2380 state_mem.go:35] "Initializing new in-memory state store" Sep 13 10:14:56.873773 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 10:14:56.887920 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 13 10:14:56.891120 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 13 10:14:56.901707 kubelet[2380]: I0913 10:14:56.901670 2380 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 10:14:56.902026 kubelet[2380]: I0913 10:14:56.902007 2380 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 10:14:56.902078 kubelet[2380]: I0913 10:14:56.902031 2380 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 10:14:56.902378 kubelet[2380]: I0913 10:14:56.902340 2380 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 10:14:56.903957 kubelet[2380]: E0913 10:14:56.903931 2380 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 10:14:56.963362 kubelet[2380]: E0913 10:14:56.963310 2380 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="800ms" Sep 13 10:14:57.003569 kubelet[2380]: I0913 10:14:57.003519 2380 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 10:14:57.003976 kubelet[2380]: E0913 10:14:57.003838 2380 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Sep 13 10:14:57.092559 systemd[1]: Created slice kubepods-burstable-podd8947ac49c84da6ea61936d8da8386dd.slice - libcontainer container kubepods-burstable-podd8947ac49c84da6ea61936d8da8386dd.slice. Sep 13 10:14:57.121637 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. 
Sep 13 10:14:57.125722 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. Sep 13 10:14:57.165986 kubelet[2380]: I0913 10:14:57.165920 2380 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d8947ac49c84da6ea61936d8da8386dd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d8947ac49c84da6ea61936d8da8386dd\") " pod="kube-system/kube-apiserver-localhost" Sep 13 10:14:57.165986 kubelet[2380]: I0913 10:14:57.165967 2380 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:14:57.165986 kubelet[2380]: I0913 10:14:57.165992 2380 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:14:57.166132 kubelet[2380]: I0913 10:14:57.166018 2380 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d8947ac49c84da6ea61936d8da8386dd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d8947ac49c84da6ea61936d8da8386dd\") " pod="kube-system/kube-apiserver-localhost" Sep 13 10:14:57.166132 kubelet[2380]: I0913 10:14:57.166040 2380 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/d8947ac49c84da6ea61936d8da8386dd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d8947ac49c84da6ea61936d8da8386dd\") " pod="kube-system/kube-apiserver-localhost" Sep 13 10:14:57.166132 kubelet[2380]: I0913 10:14:57.166057 2380 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:14:57.166132 kubelet[2380]: I0913 10:14:57.166074 2380 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:14:57.166132 kubelet[2380]: I0913 10:14:57.166090 2380 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:14:57.166282 kubelet[2380]: I0913 10:14:57.166106 2380 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 10:14:57.205216 kubelet[2380]: I0913 10:14:57.205185 2380 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 10:14:57.205655 kubelet[2380]: E0913 
10:14:57.205608 2380 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Sep 13 10:14:57.322978 kubelet[2380]: W0913 10:14:57.322893 2380 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Sep 13 10:14:57.322978 kubelet[2380]: E0913 10:14:57.322981 2380 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Sep 13 10:14:57.419058 kubelet[2380]: E0913 10:14:57.419013 2380 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:57.419716 containerd[1559]: time="2025-09-13T10:14:57.419663700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d8947ac49c84da6ea61936d8da8386dd,Namespace:kube-system,Attempt:0,}" Sep 13 10:14:57.424895 kubelet[2380]: E0913 10:14:57.424845 2380 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:57.425374 containerd[1559]: time="2025-09-13T10:14:57.425333824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 13 10:14:57.428708 kubelet[2380]: E0913 10:14:57.428650 2380 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:57.429204 containerd[1559]: time="2025-09-13T10:14:57.429171181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 13 10:14:57.466432 containerd[1559]: time="2025-09-13T10:14:57.466062434Z" level=info msg="connecting to shim 353b412324013b9cf9537e3a61edfd1ee23cf2b6d702922a5f24f93c070dd4fb" address="unix:///run/containerd/s/fed5b471943038a1b433089a2bb1ba8bd517fd85874d4f88b388d35a89c1359c" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:14:57.479784 containerd[1559]: time="2025-09-13T10:14:57.478933805Z" level=info msg="connecting to shim f334491f6e10f89702b9b1619290d699c1abfc6cf3498a5973ab860992250b80" address="unix:///run/containerd/s/4770c16a90d0cbba60e70662fe56ff7ec3712551a3318afebbbbd4faf67a559b" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:14:57.480485 containerd[1559]: time="2025-09-13T10:14:57.480431862Z" level=info msg="connecting to shim ca42de14f9f9f55006d6088d0fd1bac5cf36fa6ac2543d544519275fcc9fab16" address="unix:///run/containerd/s/f9e60c5ca8e7b0bf201c728e6d64b718f1bca0f505a6e737494986bde8375dff" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:14:57.512992 systemd[1]: Started cri-containerd-353b412324013b9cf9537e3a61edfd1ee23cf2b6d702922a5f24f93c070dd4fb.scope - libcontainer container 353b412324013b9cf9537e3a61edfd1ee23cf2b6d702922a5f24f93c070dd4fb. Sep 13 10:14:57.523697 systemd[1]: Started cri-containerd-ca42de14f9f9f55006d6088d0fd1bac5cf36fa6ac2543d544519275fcc9fab16.scope - libcontainer container ca42de14f9f9f55006d6088d0fd1bac5cf36fa6ac2543d544519275fcc9fab16. Sep 13 10:14:57.541980 systemd[1]: Started cri-containerd-f334491f6e10f89702b9b1619290d699c1abfc6cf3498a5973ab860992250b80.scope - libcontainer container f334491f6e10f89702b9b1619290d699c1abfc6cf3498a5973ab860992250b80. 
Sep 13 10:14:57.578504 containerd[1559]: time="2025-09-13T10:14:57.578451829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d8947ac49c84da6ea61936d8da8386dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"353b412324013b9cf9537e3a61edfd1ee23cf2b6d702922a5f24f93c070dd4fb\"" Sep 13 10:14:57.580776 kubelet[2380]: E0913 10:14:57.579795 2380 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:57.586601 containerd[1559]: time="2025-09-13T10:14:57.586567225Z" level=info msg="CreateContainer within sandbox \"353b412324013b9cf9537e3a61edfd1ee23cf2b6d702922a5f24f93c070dd4fb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 10:14:57.602190 containerd[1559]: time="2025-09-13T10:14:57.602148353Z" level=info msg="Container c2afeb4a00f36fdde89e784d31e0364e26935e664aff8991ef64282d86753132: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:14:57.609174 kubelet[2380]: I0913 10:14:57.609145 2380 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 10:14:57.609566 kubelet[2380]: E0913 10:14:57.609534 2380 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Sep 13 10:14:57.610309 containerd[1559]: time="2025-09-13T10:14:57.610278858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f334491f6e10f89702b9b1619290d699c1abfc6cf3498a5973ab860992250b80\"" Sep 13 10:14:57.611033 kubelet[2380]: E0913 10:14:57.611011 2380 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Sep 13 10:14:57.611987 containerd[1559]: time="2025-09-13T10:14:57.611564930Z" level=info msg="CreateContainer within sandbox \"353b412324013b9cf9537e3a61edfd1ee23cf2b6d702922a5f24f93c070dd4fb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c2afeb4a00f36fdde89e784d31e0364e26935e664aff8991ef64282d86753132\"" Sep 13 10:14:57.612232 containerd[1559]: time="2025-09-13T10:14:57.612202581Z" level=info msg="StartContainer for \"c2afeb4a00f36fdde89e784d31e0364e26935e664aff8991ef64282d86753132\"" Sep 13 10:14:57.612826 containerd[1559]: time="2025-09-13T10:14:57.612767943Z" level=info msg="CreateContainer within sandbox \"f334491f6e10f89702b9b1619290d699c1abfc6cf3498a5973ab860992250b80\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 10:14:57.613968 containerd[1559]: time="2025-09-13T10:14:57.613943614Z" level=info msg="connecting to shim c2afeb4a00f36fdde89e784d31e0364e26935e664aff8991ef64282d86753132" address="unix:///run/containerd/s/fed5b471943038a1b433089a2bb1ba8bd517fd85874d4f88b388d35a89c1359c" protocol=ttrpc version=3 Sep 13 10:14:57.620312 containerd[1559]: time="2025-09-13T10:14:57.620283761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca42de14f9f9f55006d6088d0fd1bac5cf36fa6ac2543d544519275fcc9fab16\"" Sep 13 10:14:57.620913 kubelet[2380]: W0913 10:14:57.620808 2380 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Sep 13 10:14:57.620913 kubelet[2380]: E0913 10:14:57.620881 2380 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Sep 13 10:14:57.621267 kubelet[2380]: E0913 10:14:57.621232 2380 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:57.622955 containerd[1559]: time="2025-09-13T10:14:57.622921962Z" level=info msg="CreateContainer within sandbox \"ca42de14f9f9f55006d6088d0fd1bac5cf36fa6ac2543d544519275fcc9fab16\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 10:14:57.624682 containerd[1559]: time="2025-09-13T10:14:57.624097903Z" level=info msg="Container c2077e394276c4b93312c7daed750d319ac5800be858f2b70ab87633d94cf2f6: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:14:57.634202 containerd[1559]: time="2025-09-13T10:14:57.634164193Z" level=info msg="Container 73969b39f4976fc845ece9f3b6e2971a5190d42d22191b55bf44ba92f31ba8ee: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:14:57.635951 containerd[1559]: time="2025-09-13T10:14:57.635917159Z" level=info msg="CreateContainer within sandbox \"f334491f6e10f89702b9b1619290d699c1abfc6cf3498a5973ab860992250b80\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c2077e394276c4b93312c7daed750d319ac5800be858f2b70ab87633d94cf2f6\"" Sep 13 10:14:57.636088 systemd[1]: Started cri-containerd-c2afeb4a00f36fdde89e784d31e0364e26935e664aff8991ef64282d86753132.scope - libcontainer container c2afeb4a00f36fdde89e784d31e0364e26935e664aff8991ef64282d86753132. 
Sep 13 10:14:57.637140 containerd[1559]: time="2025-09-13T10:14:57.636588644Z" level=info msg="StartContainer for \"c2077e394276c4b93312c7daed750d319ac5800be858f2b70ab87633d94cf2f6\"" Sep 13 10:14:57.637995 containerd[1559]: time="2025-09-13T10:14:57.637968897Z" level=info msg="connecting to shim c2077e394276c4b93312c7daed750d319ac5800be858f2b70ab87633d94cf2f6" address="unix:///run/containerd/s/4770c16a90d0cbba60e70662fe56ff7ec3712551a3318afebbbbd4faf67a559b" protocol=ttrpc version=3 Sep 13 10:14:57.642504 containerd[1559]: time="2025-09-13T10:14:57.642464684Z" level=info msg="CreateContainer within sandbox \"ca42de14f9f9f55006d6088d0fd1bac5cf36fa6ac2543d544519275fcc9fab16\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"73969b39f4976fc845ece9f3b6e2971a5190d42d22191b55bf44ba92f31ba8ee\"" Sep 13 10:14:57.642960 containerd[1559]: time="2025-09-13T10:14:57.642926858Z" level=info msg="StartContainer for \"73969b39f4976fc845ece9f3b6e2971a5190d42d22191b55bf44ba92f31ba8ee\"" Sep 13 10:14:57.643884 containerd[1559]: time="2025-09-13T10:14:57.643839636Z" level=info msg="connecting to shim 73969b39f4976fc845ece9f3b6e2971a5190d42d22191b55bf44ba92f31ba8ee" address="unix:///run/containerd/s/f9e60c5ca8e7b0bf201c728e6d64b718f1bca0f505a6e737494986bde8375dff" protocol=ttrpc version=3 Sep 13 10:14:57.659899 systemd[1]: Started cri-containerd-c2077e394276c4b93312c7daed750d319ac5800be858f2b70ab87633d94cf2f6.scope - libcontainer container c2077e394276c4b93312c7daed750d319ac5800be858f2b70ab87633d94cf2f6. Sep 13 10:14:57.663861 systemd[1]: Started cri-containerd-73969b39f4976fc845ece9f3b6e2971a5190d42d22191b55bf44ba92f31ba8ee.scope - libcontainer container 73969b39f4976fc845ece9f3b6e2971a5190d42d22191b55bf44ba92f31ba8ee. 
Sep 13 10:14:57.723250 containerd[1559]: time="2025-09-13T10:14:57.715656650Z" level=info msg="StartContainer for \"c2afeb4a00f36fdde89e784d31e0364e26935e664aff8991ef64282d86753132\" returns successfully" Sep 13 10:14:57.723365 kubelet[2380]: W0913 10:14:57.716607 2380 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Sep 13 10:14:57.723365 kubelet[2380]: E0913 10:14:57.716648 2380 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Sep 13 10:14:57.752928 containerd[1559]: time="2025-09-13T10:14:57.752886582Z" level=info msg="StartContainer for \"73969b39f4976fc845ece9f3b6e2971a5190d42d22191b55bf44ba92f31ba8ee\" returns successfully" Sep 13 10:14:57.764690 kubelet[2380]: E0913 10:14:57.764626 2380 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="1.6s" Sep 13 10:14:57.767453 containerd[1559]: time="2025-09-13T10:14:57.767419333Z" level=info msg="StartContainer for \"c2077e394276c4b93312c7daed750d319ac5800be858f2b70ab87633d94cf2f6\" returns successfully" Sep 13 10:14:58.390643 kubelet[2380]: E0913 10:14:58.390600 2380 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:58.393500 kubelet[2380]: E0913 10:14:58.393153 2380 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:58.394602 kubelet[2380]: E0913 10:14:58.394581 2380 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:58.411957 kubelet[2380]: I0913 10:14:58.411916 2380 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 10:14:58.991166 kubelet[2380]: I0913 10:14:58.991104 2380 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 10:14:58.991166 kubelet[2380]: E0913 10:14:58.991151 2380 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 13 10:14:59.338310 kubelet[2380]: I0913 10:14:59.338172 2380 apiserver.go:52] "Watching apiserver" Sep 13 10:14:59.361852 kubelet[2380]: I0913 10:14:59.361828 2380 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 10:14:59.400450 kubelet[2380]: E0913 10:14:59.400396 2380 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 13 10:14:59.400927 kubelet[2380]: E0913 10:14:59.400571 2380 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:00.402350 kubelet[2380]: E0913 10:15:00.402263 2380 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:00.805384 systemd[1]: Reload requested from client PID 2654 ('systemctl') (unit session-9.scope)... Sep 13 10:15:00.805403 systemd[1]: Reloading... 
Sep 13 10:15:00.929789 zram_generator::config[2700]: No configuration found. Sep 13 10:15:01.181139 systemd[1]: Reloading finished in 375 ms. Sep 13 10:15:01.217908 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:15:01.245327 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 10:15:01.245680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:15:01.245768 systemd[1]: kubelet.service: Consumed 992ms CPU time, 130.7M memory peak. Sep 13 10:15:01.249471 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:15:01.545989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:15:01.563146 (kubelet)[2742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 10:15:01.613514 kubelet[2742]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 10:15:01.613514 kubelet[2742]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 10:15:01.613514 kubelet[2742]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 10:15:01.614085 kubelet[2742]: I0913 10:15:01.613892 2742 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 10:15:01.620311 kubelet[2742]: I0913 10:15:01.620263 2742 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 10:15:01.620311 kubelet[2742]: I0913 10:15:01.620298 2742 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 10:15:01.620607 kubelet[2742]: I0913 10:15:01.620582 2742 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 10:15:01.621914 kubelet[2742]: I0913 10:15:01.621881 2742 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 10:15:01.624176 kubelet[2742]: I0913 10:15:01.624147 2742 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 10:15:01.629038 kubelet[2742]: I0913 10:15:01.629003 2742 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 13 10:15:01.633990 kubelet[2742]: I0913 10:15:01.633954 2742 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 10:15:01.634089 kubelet[2742]: I0913 10:15:01.634075 2742 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 10:15:01.634230 kubelet[2742]: I0913 10:15:01.634196 2742 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 10:15:01.634389 kubelet[2742]: I0913 10:15:01.634225 2742 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Sep 13 10:15:01.634482 kubelet[2742]: I0913 10:15:01.634403 2742 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 10:15:01.634482 kubelet[2742]: I0913 10:15:01.634412 2742 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 10:15:01.634482 kubelet[2742]: I0913 10:15:01.634439 2742 state_mem.go:36] "Initialized new in-memory state store" Sep 13 10:15:01.634562 kubelet[2742]: I0913 10:15:01.634538 2742 kubelet.go:408] "Attempting to sync node with API server" Sep 13 10:15:01.634562 kubelet[2742]: I0913 10:15:01.634548 2742 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 10:15:01.634609 kubelet[2742]: I0913 10:15:01.634579 2742 kubelet.go:314] "Adding apiserver pod source" Sep 13 10:15:01.634609 kubelet[2742]: I0913 10:15:01.634589 2742 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 10:15:01.635281 kubelet[2742]: I0913 10:15:01.635227 2742 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 13 10:15:01.635610 kubelet[2742]: I0913 10:15:01.635593 2742 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 10:15:01.636792 kubelet[2742]: I0913 10:15:01.636009 2742 server.go:1274] "Started kubelet" Sep 13 10:15:01.636792 kubelet[2742]: I0913 10:15:01.636260 2742 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 10:15:01.636792 kubelet[2742]: I0913 10:15:01.636436 2742 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 10:15:01.636792 kubelet[2742]: I0913 10:15:01.636745 2742 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 10:15:01.638256 kubelet[2742]: I0913 10:15:01.638243 2742 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 
10:15:01.641611 kubelet[2742]: I0913 10:15:01.641568 2742 server.go:449] "Adding debug handlers to kubelet server" Sep 13 10:15:01.642492 kubelet[2742]: I0913 10:15:01.642463 2742 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 10:15:01.647162 kubelet[2742]: I0913 10:15:01.646791 2742 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 10:15:01.647162 kubelet[2742]: I0913 10:15:01.646888 2742 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 10:15:01.647162 kubelet[2742]: I0913 10:15:01.647039 2742 reconciler.go:26] "Reconciler: start to sync state" Sep 13 10:15:01.648296 kubelet[2742]: E0913 10:15:01.648250 2742 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:15:01.649847 kubelet[2742]: I0913 10:15:01.648484 2742 factory.go:221] Registration of the systemd container factory successfully Sep 13 10:15:01.650195 kubelet[2742]: I0913 10:15:01.650168 2742 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 10:15:01.651413 kubelet[2742]: E0913 10:15:01.651354 2742 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 10:15:01.653690 kubelet[2742]: I0913 10:15:01.653661 2742 factory.go:221] Registration of the containerd container factory successfully Sep 13 10:15:01.666209 kubelet[2742]: I0913 10:15:01.666157 2742 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 10:15:01.668778 kubelet[2742]: I0913 10:15:01.667933 2742 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 10:15:01.668778 kubelet[2742]: I0913 10:15:01.667963 2742 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 10:15:01.668778 kubelet[2742]: I0913 10:15:01.667986 2742 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 10:15:01.668778 kubelet[2742]: E0913 10:15:01.668048 2742 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 10:15:01.691778 kubelet[2742]: I0913 10:15:01.691738 2742 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 10:15:01.691778 kubelet[2742]: I0913 10:15:01.691772 2742 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 10:15:01.691943 kubelet[2742]: I0913 10:15:01.691798 2742 state_mem.go:36] "Initialized new in-memory state store" Sep 13 10:15:01.692017 kubelet[2742]: I0913 10:15:01.691975 2742 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 10:15:01.692017 kubelet[2742]: I0913 10:15:01.691993 2742 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 10:15:01.692017 kubelet[2742]: I0913 10:15:01.692012 2742 policy_none.go:49] "None policy: Start" Sep 13 10:15:01.692584 kubelet[2742]: I0913 10:15:01.692557 2742 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 10:15:01.692584 kubelet[2742]: I0913 10:15:01.692586 2742 state_mem.go:35] "Initializing new in-memory state store" Sep 13 10:15:01.692773 kubelet[2742]: I0913 10:15:01.692730 2742 state_mem.go:75] "Updated machine memory state" Sep 13 10:15:01.700415 kubelet[2742]: I0913 10:15:01.700383 2742 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 10:15:01.700619 kubelet[2742]: I0913 10:15:01.700604 2742 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 10:15:01.700663 kubelet[2742]: I0913 10:15:01.700621 2742 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 10:15:01.700878 kubelet[2742]: I0913 10:15:01.700864 2742 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 10:15:01.776345 kubelet[2742]: E0913 10:15:01.776300 2742 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 10:15:01.805526 kubelet[2742]: I0913 10:15:01.805419 2742 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 10:15:01.812261 kubelet[2742]: I0913 10:15:01.812220 2742 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 13 10:15:01.812385 kubelet[2742]: I0913 10:15:01.812301 2742 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 10:15:01.818192 sudo[2780]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 10:15:01.818640 sudo[2780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 13 10:15:01.948454 kubelet[2742]: I0913 10:15:01.948379 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d8947ac49c84da6ea61936d8da8386dd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d8947ac49c84da6ea61936d8da8386dd\") " pod="kube-system/kube-apiserver-localhost" Sep 13 10:15:01.948454 kubelet[2742]: I0913 10:15:01.948431 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:15:01.948454 kubelet[2742]: I0913 10:15:01.948458 2742 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:15:01.948454 kubelet[2742]: I0913 10:15:01.948475 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:15:01.948837 kubelet[2742]: I0913 10:15:01.948494 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:15:01.948837 kubelet[2742]: I0913 10:15:01.948511 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 10:15:01.948837 kubelet[2742]: I0913 10:15:01.948524 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d8947ac49c84da6ea61936d8da8386dd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d8947ac49c84da6ea61936d8da8386dd\") " pod="kube-system/kube-apiserver-localhost" Sep 13 10:15:01.948837 kubelet[2742]: I0913 10:15:01.948537 2742 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d8947ac49c84da6ea61936d8da8386dd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d8947ac49c84da6ea61936d8da8386dd\") " pod="kube-system/kube-apiserver-localhost" Sep 13 10:15:01.948837 kubelet[2742]: I0913 10:15:01.948551 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:15:02.076439 kubelet[2742]: E0913 10:15:02.076296 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:02.076439 kubelet[2742]: E0913 10:15:02.076313 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:02.076622 kubelet[2742]: E0913 10:15:02.076533 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:02.256462 sudo[2780]: pam_unix(sudo:session): session closed for user root Sep 13 10:15:02.635038 kubelet[2742]: I0913 10:15:02.634979 2742 apiserver.go:52] "Watching apiserver" Sep 13 10:15:02.647402 kubelet[2742]: I0913 10:15:02.647352 2742 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 10:15:02.680515 kubelet[2742]: E0913 10:15:02.680478 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Sep 13 10:15:02.681237 kubelet[2742]: E0913 10:15:02.681090 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:02.681237 kubelet[2742]: E0913 10:15:02.681113 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:02.708126 kubelet[2742]: I0913 10:15:02.708045 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.708008627 podStartE2EDuration="1.708008627s" podCreationTimestamp="2025-09-13 10:15:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:15:02.701214232 +0000 UTC m=+1.129096180" watchObservedRunningTime="2025-09-13 10:15:02.708008627 +0000 UTC m=+1.135890575" Sep 13 10:15:02.717045 kubelet[2742]: I0913 10:15:02.716725 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.7166974870000002 podStartE2EDuration="2.716697487s" podCreationTimestamp="2025-09-13 10:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:15:02.708008787 +0000 UTC m=+1.135890735" watchObservedRunningTime="2025-09-13 10:15:02.716697487 +0000 UTC m=+1.144579435" Sep 13 10:15:02.717287 kubelet[2742]: I0913 10:15:02.717133 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.71712434 podStartE2EDuration="1.71712434s" podCreationTimestamp="2025-09-13 10:15:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:15:02.717123658 +0000 UTC m=+1.145005606" watchObservedRunningTime="2025-09-13 10:15:02.71712434 +0000 UTC m=+1.145006288" Sep 13 10:15:03.682716 kubelet[2742]: E0913 10:15:03.682657 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:04.298149 sudo[1807]: pam_unix(sudo:session): session closed for user root Sep 13 10:15:04.299816 sshd[1806]: Connection closed by 10.0.0.1 port 35390 Sep 13 10:15:04.313498 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Sep 13 10:15:04.318217 systemd[1]: sshd@8-10.0.0.20:22-10.0.0.1:35390.service: Deactivated successfully. Sep 13 10:15:04.321087 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 10:15:04.321338 systemd[1]: session-9.scope: Consumed 5.574s CPU time, 264.7M memory peak. Sep 13 10:15:04.322998 systemd-logind[1531]: Session 9 logged out. Waiting for processes to exit. Sep 13 10:15:04.324639 systemd-logind[1531]: Removed session 9. Sep 13 10:15:04.927317 kubelet[2742]: E0913 10:15:04.927278 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:05.967971 kubelet[2742]: I0913 10:15:05.967932 2742 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 10:15:05.968720 containerd[1559]: time="2025-09-13T10:15:05.968635036Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 13 10:15:05.969306 kubelet[2742]: I0913 10:15:05.968931 2742 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 10:15:06.024448 update_engine[1532]: I20250913 10:15:06.024346 1532 update_attempter.cc:509] Updating boot flags... Sep 13 10:15:06.815272 systemd[1]: Created slice kubepods-besteffort-pod51a67df9_3e67_4c32_9402_f2603c06feb9.slice - libcontainer container kubepods-besteffort-pod51a67df9_3e67_4c32_9402_f2603c06feb9.slice. Sep 13 10:15:06.834248 systemd[1]: Created slice kubepods-burstable-pod43217a5d_b542_4265_85a1_8b896b235eba.slice - libcontainer container kubepods-burstable-pod43217a5d_b542_4265_85a1_8b896b235eba.slice. Sep 13 10:15:06.977475 kubelet[2742]: I0913 10:15:06.977406 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-cilium-run\") pod \"cilium-drfl7\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " pod="kube-system/cilium-drfl7" Sep 13 10:15:06.977475 kubelet[2742]: I0913 10:15:06.977471 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-host-proc-sys-net\") pod \"cilium-drfl7\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " pod="kube-system/cilium-drfl7" Sep 13 10:15:06.977475 kubelet[2742]: I0913 10:15:06.977493 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/43217a5d-b542-4265-85a1-8b896b235eba-hubble-tls\") pod \"cilium-drfl7\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " pod="kube-system/cilium-drfl7" Sep 13 10:15:06.978060 kubelet[2742]: I0913 10:15:06.977516 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/51a67df9-3e67-4c32-9402-f2603c06feb9-kube-proxy\") pod \"kube-proxy-pzmlk\" (UID: \"51a67df9-3e67-4c32-9402-f2603c06feb9\") " pod="kube-system/kube-proxy-pzmlk" Sep 13 10:15:06.978060 kubelet[2742]: I0913 10:15:06.977531 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-xtables-lock\") pod \"cilium-drfl7\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " pod="kube-system/cilium-drfl7" Sep 13 10:15:06.978060 kubelet[2742]: I0913 10:15:06.977545 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51a67df9-3e67-4c32-9402-f2603c06feb9-lib-modules\") pod \"kube-proxy-pzmlk\" (UID: \"51a67df9-3e67-4c32-9402-f2603c06feb9\") " pod="kube-system/kube-proxy-pzmlk" Sep 13 10:15:06.978060 kubelet[2742]: I0913 10:15:06.977558 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-cilium-cgroup\") pod \"cilium-drfl7\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " pod="kube-system/cilium-drfl7" Sep 13 10:15:06.978060 kubelet[2742]: I0913 10:15:06.977571 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-cni-path\") pod \"cilium-drfl7\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " pod="kube-system/cilium-drfl7" Sep 13 10:15:06.978060 kubelet[2742]: I0913 10:15:06.977584 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/43217a5d-b542-4265-85a1-8b896b235eba-clustermesh-secrets\") pod \"cilium-drfl7\" (UID: 
\"43217a5d-b542-4265-85a1-8b896b235eba\") " pod="kube-system/cilium-drfl7" Sep 13 10:15:06.978227 kubelet[2742]: I0913 10:15:06.977597 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/43217a5d-b542-4265-85a1-8b896b235eba-cilium-config-path\") pod \"cilium-drfl7\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " pod="kube-system/cilium-drfl7" Sep 13 10:15:06.978227 kubelet[2742]: I0913 10:15:06.977617 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s8l8\" (UniqueName: \"kubernetes.io/projected/43217a5d-b542-4265-85a1-8b896b235eba-kube-api-access-5s8l8\") pod \"cilium-drfl7\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " pod="kube-system/cilium-drfl7" Sep 13 10:15:06.978227 kubelet[2742]: I0913 10:15:06.977633 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51a67df9-3e67-4c32-9402-f2603c06feb9-xtables-lock\") pod \"kube-proxy-pzmlk\" (UID: \"51a67df9-3e67-4c32-9402-f2603c06feb9\") " pod="kube-system/kube-proxy-pzmlk" Sep 13 10:15:06.978227 kubelet[2742]: I0913 10:15:06.977653 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-bpf-maps\") pod \"cilium-drfl7\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " pod="kube-system/cilium-drfl7" Sep 13 10:15:06.978227 kubelet[2742]: I0913 10:15:06.977666 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djqmb\" (UniqueName: \"kubernetes.io/projected/51a67df9-3e67-4c32-9402-f2603c06feb9-kube-api-access-djqmb\") pod \"kube-proxy-pzmlk\" (UID: \"51a67df9-3e67-4c32-9402-f2603c06feb9\") " pod="kube-system/kube-proxy-pzmlk" Sep 13 
10:15:06.978338 kubelet[2742]: I0913 10:15:06.977683 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-hostproc\") pod \"cilium-drfl7\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " pod="kube-system/cilium-drfl7" Sep 13 10:15:06.978338 kubelet[2742]: I0913 10:15:06.977697 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-etc-cni-netd\") pod \"cilium-drfl7\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " pod="kube-system/cilium-drfl7" Sep 13 10:15:06.978338 kubelet[2742]: I0913 10:15:06.977711 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-lib-modules\") pod \"cilium-drfl7\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " pod="kube-system/cilium-drfl7" Sep 13 10:15:06.978338 kubelet[2742]: I0913 10:15:06.977725 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-host-proc-sys-kernel\") pod \"cilium-drfl7\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " pod="kube-system/cilium-drfl7" Sep 13 10:15:06.988593 systemd[1]: Created slice kubepods-besteffort-pod90a12d8c_758c_4b9f_b3c1_f70ae6adb997.slice - libcontainer container kubepods-besteffort-pod90a12d8c_758c_4b9f_b3c1_f70ae6adb997.slice. 
Sep 13 10:15:07.078935 kubelet[2742]: I0913 10:15:07.078173 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90a12d8c-758c-4b9f-b3c1-f70ae6adb997-cilium-config-path\") pod \"cilium-operator-5d85765b45-89hzv\" (UID: \"90a12d8c-758c-4b9f-b3c1-f70ae6adb997\") " pod="kube-system/cilium-operator-5d85765b45-89hzv" Sep 13 10:15:07.078935 kubelet[2742]: I0913 10:15:07.078347 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gjtn\" (UniqueName: \"kubernetes.io/projected/90a12d8c-758c-4b9f-b3c1-f70ae6adb997-kube-api-access-8gjtn\") pod \"cilium-operator-5d85765b45-89hzv\" (UID: \"90a12d8c-758c-4b9f-b3c1-f70ae6adb997\") " pod="kube-system/cilium-operator-5d85765b45-89hzv" Sep 13 10:15:07.129696 kubelet[2742]: E0913 10:15:07.129625 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:07.130545 containerd[1559]: time="2025-09-13T10:15:07.130481341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pzmlk,Uid:51a67df9-3e67-4c32-9402-f2603c06feb9,Namespace:kube-system,Attempt:0,}" Sep 13 10:15:07.140228 kubelet[2742]: E0913 10:15:07.140194 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:07.140763 containerd[1559]: time="2025-09-13T10:15:07.140694783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-drfl7,Uid:43217a5d-b542-4265-85a1-8b896b235eba,Namespace:kube-system,Attempt:0,}" Sep 13 10:15:07.162785 containerd[1559]: time="2025-09-13T10:15:07.162415823Z" level=info msg="connecting to shim 75a384735b3ea447557b0174260d8288fd377930770130aaa9cb1ae5bd10e776" 
address="unix:///run/containerd/s/d467105b99cdc2ae269170b804443c0dad400c1240cf51a0d1d4dfdb7b7dfa54" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:15:07.167122 containerd[1559]: time="2025-09-13T10:15:07.167082330Z" level=info msg="connecting to shim 671921def32d30c20f64b4b17110f7a207d5dbdec1c4b1709851e3c250a781f0" address="unix:///run/containerd/s/ec54fec3e6ae16af6842fc7ff124c919524d9281e49a191e1a6abdc4dee3cec3" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:15:07.209032 systemd[1]: Started cri-containerd-75a384735b3ea447557b0174260d8288fd377930770130aaa9cb1ae5bd10e776.scope - libcontainer container 75a384735b3ea447557b0174260d8288fd377930770130aaa9cb1ae5bd10e776. Sep 13 10:15:07.213663 systemd[1]: Started cri-containerd-671921def32d30c20f64b4b17110f7a207d5dbdec1c4b1709851e3c250a781f0.scope - libcontainer container 671921def32d30c20f64b4b17110f7a207d5dbdec1c4b1709851e3c250a781f0. Sep 13 10:15:07.292345 kubelet[2742]: E0913 10:15:07.292277 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:07.293064 containerd[1559]: time="2025-09-13T10:15:07.292999576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-89hzv,Uid:90a12d8c-758c-4b9f-b3c1-f70ae6adb997,Namespace:kube-system,Attempt:0,}" Sep 13 10:15:07.359817 containerd[1559]: time="2025-09-13T10:15:07.359634574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pzmlk,Uid:51a67df9-3e67-4c32-9402-f2603c06feb9,Namespace:kube-system,Attempt:0,} returns sandbox id \"75a384735b3ea447557b0174260d8288fd377930770130aaa9cb1ae5bd10e776\"" Sep 13 10:15:07.361045 kubelet[2742]: E0913 10:15:07.361012 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:07.362867 containerd[1559]: 
time="2025-09-13T10:15:07.361198499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-drfl7,Uid:43217a5d-b542-4265-85a1-8b896b235eba,Namespace:kube-system,Attempt:0,} returns sandbox id \"671921def32d30c20f64b4b17110f7a207d5dbdec1c4b1709851e3c250a781f0\"" Sep 13 10:15:07.365780 kubelet[2742]: E0913 10:15:07.365696 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:07.368107 containerd[1559]: time="2025-09-13T10:15:07.368039077Z" level=info msg="CreateContainer within sandbox \"75a384735b3ea447557b0174260d8288fd377930770130aaa9cb1ae5bd10e776\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 10:15:07.371363 containerd[1559]: time="2025-09-13T10:15:07.371305660Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 10:15:07.387475 containerd[1559]: time="2025-09-13T10:15:07.387417265Z" level=info msg="Container 381c63c030e104306be123169633febaa469e3d92b6adc2aaf5a0e621d98e163: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:15:07.397698 containerd[1559]: time="2025-09-13T10:15:07.397508235Z" level=info msg="connecting to shim 3aae1c7284bf74222b653f0bb471df1b279cf790e495b49137e840789c73111d" address="unix:///run/containerd/s/c306970fda168a5e2a044553f1db7bfd88cfb543daa3332659ce06928f95bc39" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:15:07.398928 containerd[1559]: time="2025-09-13T10:15:07.398867543Z" level=info msg="CreateContainer within sandbox \"75a384735b3ea447557b0174260d8288fd377930770130aaa9cb1ae5bd10e776\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"381c63c030e104306be123169633febaa469e3d92b6adc2aaf5a0e621d98e163\"" Sep 13 10:15:07.400918 containerd[1559]: time="2025-09-13T10:15:07.400888394Z" level=info msg="StartContainer for 
\"381c63c030e104306be123169633febaa469e3d92b6adc2aaf5a0e621d98e163\"" Sep 13 10:15:07.403451 containerd[1559]: time="2025-09-13T10:15:07.403324492Z" level=info msg="connecting to shim 381c63c030e104306be123169633febaa469e3d92b6adc2aaf5a0e621d98e163" address="unix:///run/containerd/s/d467105b99cdc2ae269170b804443c0dad400c1240cf51a0d1d4dfdb7b7dfa54" protocol=ttrpc version=3 Sep 13 10:15:07.437907 systemd[1]: Started cri-containerd-381c63c030e104306be123169633febaa469e3d92b6adc2aaf5a0e621d98e163.scope - libcontainer container 381c63c030e104306be123169633febaa469e3d92b6adc2aaf5a0e621d98e163. Sep 13 10:15:07.439319 systemd[1]: Started cri-containerd-3aae1c7284bf74222b653f0bb471df1b279cf790e495b49137e840789c73111d.scope - libcontainer container 3aae1c7284bf74222b653f0bb471df1b279cf790e495b49137e840789c73111d. Sep 13 10:15:07.493437 containerd[1559]: time="2025-09-13T10:15:07.493389329Z" level=info msg="StartContainer for \"381c63c030e104306be123169633febaa469e3d92b6adc2aaf5a0e621d98e163\" returns successfully" Sep 13 10:15:07.494466 containerd[1559]: time="2025-09-13T10:15:07.494341635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-89hzv,Uid:90a12d8c-758c-4b9f-b3c1-f70ae6adb997,Namespace:kube-system,Attempt:0,} returns sandbox id \"3aae1c7284bf74222b653f0bb471df1b279cf790e495b49137e840789c73111d\"" Sep 13 10:15:07.496292 kubelet[2742]: E0913 10:15:07.496263 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:07.818675 kubelet[2742]: E0913 10:15:07.818627 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:10.764094 kubelet[2742]: E0913 10:15:10.763920 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:10.777023 kubelet[2742]: I0913 10:15:10.776910 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pzmlk" podStartSLOduration=4.776873583 podStartE2EDuration="4.776873583s" podCreationTimestamp="2025-09-13 10:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:15:07.916260417 +0000 UTC m=+6.344142365" watchObservedRunningTime="2025-09-13 10:15:10.776873583 +0000 UTC m=+9.204755521" Sep 13 10:15:10.829623 kubelet[2742]: E0913 10:15:10.829219 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:11.685141 kubelet[2742]: E0913 10:15:11.685093 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:11.830299 kubelet[2742]: E0913 10:15:11.830250 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:12.831197 kubelet[2742]: E0913 10:15:12.831163 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:14.932335 kubelet[2742]: E0913 10:15:14.932289 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:15.017440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4207753146.mount: Deactivated successfully. 
Sep 13 10:15:17.981791 containerd[1559]: time="2025-09-13T10:15:17.981676715Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:15:17.982525 containerd[1559]: time="2025-09-13T10:15:17.982495560Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 13 10:15:17.983943 containerd[1559]: time="2025-09-13T10:15:17.983905369Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:15:17.986095 containerd[1559]: time="2025-09-13T10:15:17.986056326Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.614694158s" Sep 13 10:15:17.986152 containerd[1559]: time="2025-09-13T10:15:17.986101772Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 13 10:15:17.987233 containerd[1559]: time="2025-09-13T10:15:17.987192970Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 10:15:17.988235 containerd[1559]: time="2025-09-13T10:15:17.988181795Z" level=info msg="CreateContainer within sandbox \"671921def32d30c20f64b4b17110f7a207d5dbdec1c4b1709851e3c250a781f0\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 10:15:17.998579 containerd[1559]: time="2025-09-13T10:15:17.998530123Z" level=info msg="Container e1cde16f93bfb8c81e6c4c0b2a71138b98d1fdc5a09a6247ebf9f130007f57bc: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:15:18.009270 containerd[1559]: time="2025-09-13T10:15:18.009215581Z" level=info msg="CreateContainer within sandbox \"671921def32d30c20f64b4b17110f7a207d5dbdec1c4b1709851e3c250a781f0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e1cde16f93bfb8c81e6c4c0b2a71138b98d1fdc5a09a6247ebf9f130007f57bc\"" Sep 13 10:15:18.009867 containerd[1559]: time="2025-09-13T10:15:18.009832203Z" level=info msg="StartContainer for \"e1cde16f93bfb8c81e6c4c0b2a71138b98d1fdc5a09a6247ebf9f130007f57bc\"" Sep 13 10:15:18.010650 containerd[1559]: time="2025-09-13T10:15:18.010609880Z" level=info msg="connecting to shim e1cde16f93bfb8c81e6c4c0b2a71138b98d1fdc5a09a6247ebf9f130007f57bc" address="unix:///run/containerd/s/ec54fec3e6ae16af6842fc7ff124c919524d9281e49a191e1a6abdc4dee3cec3" protocol=ttrpc version=3 Sep 13 10:15:18.076073 systemd[1]: Started cri-containerd-e1cde16f93bfb8c81e6c4c0b2a71138b98d1fdc5a09a6247ebf9f130007f57bc.scope - libcontainer container e1cde16f93bfb8c81e6c4c0b2a71138b98d1fdc5a09a6247ebf9f130007f57bc. Sep 13 10:15:18.113835 containerd[1559]: time="2025-09-13T10:15:18.113773356Z" level=info msg="StartContainer for \"e1cde16f93bfb8c81e6c4c0b2a71138b98d1fdc5a09a6247ebf9f130007f57bc\" returns successfully" Sep 13 10:15:18.128279 systemd[1]: cri-containerd-e1cde16f93bfb8c81e6c4c0b2a71138b98d1fdc5a09a6247ebf9f130007f57bc.scope: Deactivated successfully. 
Sep 13 10:15:18.130117 containerd[1559]: time="2025-09-13T10:15:18.130078518Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e1cde16f93bfb8c81e6c4c0b2a71138b98d1fdc5a09a6247ebf9f130007f57bc\" id:\"e1cde16f93bfb8c81e6c4c0b2a71138b98d1fdc5a09a6247ebf9f130007f57bc\" pid:3181 exited_at:{seconds:1757758518 nanos:129434453}" Sep 13 10:15:18.130237 containerd[1559]: time="2025-09-13T10:15:18.130183917Z" level=info msg="received exit event container_id:\"e1cde16f93bfb8c81e6c4c0b2a71138b98d1fdc5a09a6247ebf9f130007f57bc\" id:\"e1cde16f93bfb8c81e6c4c0b2a71138b98d1fdc5a09a6247ebf9f130007f57bc\" pid:3181 exited_at:{seconds:1757758518 nanos:129434453}" Sep 13 10:15:18.154116 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1cde16f93bfb8c81e6c4c0b2a71138b98d1fdc5a09a6247ebf9f130007f57bc-rootfs.mount: Deactivated successfully. Sep 13 10:15:18.843632 kubelet[2742]: E0913 10:15:18.843586 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:19.846216 kubelet[2742]: E0913 10:15:19.846146 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:19.848403 containerd[1559]: time="2025-09-13T10:15:19.847922278Z" level=info msg="CreateContainer within sandbox \"671921def32d30c20f64b4b17110f7a207d5dbdec1c4b1709851e3c250a781f0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 10:15:20.271034 containerd[1559]: time="2025-09-13T10:15:20.270981266Z" level=info msg="Container 8f0452323eafd6f0fdf8992f7c2a373c49dddf39c28707b3a504c9af4864bd85: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:15:20.275825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1931857322.mount: Deactivated successfully. 
Sep 13 10:15:20.285473 containerd[1559]: time="2025-09-13T10:15:20.285424350Z" level=info msg="CreateContainer within sandbox \"671921def32d30c20f64b4b17110f7a207d5dbdec1c4b1709851e3c250a781f0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8f0452323eafd6f0fdf8992f7c2a373c49dddf39c28707b3a504c9af4864bd85\""
Sep 13 10:15:20.286169 containerd[1559]: time="2025-09-13T10:15:20.286117706Z" level=info msg="StartContainer for \"8f0452323eafd6f0fdf8992f7c2a373c49dddf39c28707b3a504c9af4864bd85\""
Sep 13 10:15:20.287024 containerd[1559]: time="2025-09-13T10:15:20.286998736Z" level=info msg="connecting to shim 8f0452323eafd6f0fdf8992f7c2a373c49dddf39c28707b3a504c9af4864bd85" address="unix:///run/containerd/s/ec54fec3e6ae16af6842fc7ff124c919524d9281e49a191e1a6abdc4dee3cec3" protocol=ttrpc version=3
Sep 13 10:15:20.314942 systemd[1]: Started cri-containerd-8f0452323eafd6f0fdf8992f7c2a373c49dddf39c28707b3a504c9af4864bd85.scope - libcontainer container 8f0452323eafd6f0fdf8992f7c2a373c49dddf39c28707b3a504c9af4864bd85.
Sep 13 10:15:20.357200 containerd[1559]: time="2025-09-13T10:15:20.357149174Z" level=info msg="StartContainer for \"8f0452323eafd6f0fdf8992f7c2a373c49dddf39c28707b3a504c9af4864bd85\" returns successfully"
Sep 13 10:15:20.371326 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 10:15:20.371738 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 10:15:20.374602 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 13 10:15:20.376437 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 10:15:20.378465 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 10:15:20.379904 systemd[1]: cri-containerd-8f0452323eafd6f0fdf8992f7c2a373c49dddf39c28707b3a504c9af4864bd85.scope: Deactivated successfully.
Sep 13 10:15:20.380553 containerd[1559]: time="2025-09-13T10:15:20.380494251Z" level=info msg="received exit event container_id:\"8f0452323eafd6f0fdf8992f7c2a373c49dddf39c28707b3a504c9af4864bd85\" id:\"8f0452323eafd6f0fdf8992f7c2a373c49dddf39c28707b3a504c9af4864bd85\" pid:3226 exited_at:{seconds:1757758520 nanos:379451926}"
Sep 13 10:15:20.381022 containerd[1559]: time="2025-09-13T10:15:20.380951042Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f0452323eafd6f0fdf8992f7c2a373c49dddf39c28707b3a504c9af4864bd85\" id:\"8f0452323eafd6f0fdf8992f7c2a373c49dddf39c28707b3a504c9af4864bd85\" pid:3226 exited_at:{seconds:1757758520 nanos:379451926}"
Sep 13 10:15:20.415318 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 10:15:20.850566 kubelet[2742]: E0913 10:15:20.850521 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:20.854067 containerd[1559]: time="2025-09-13T10:15:20.854007625Z" level=info msg="CreateContainer within sandbox \"671921def32d30c20f64b4b17110f7a207d5dbdec1c4b1709851e3c250a781f0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 10:15:20.865669 containerd[1559]: time="2025-09-13T10:15:20.865482937Z" level=info msg="Container 56b9c0c3869877783f877d6edf95c4d0961dd478c41196d48ff7c94a0b3ff013: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:15:20.878450 containerd[1559]: time="2025-09-13T10:15:20.878393743Z" level=info msg="CreateContainer within sandbox \"671921def32d30c20f64b4b17110f7a207d5dbdec1c4b1709851e3c250a781f0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"56b9c0c3869877783f877d6edf95c4d0961dd478c41196d48ff7c94a0b3ff013\""
Sep 13 10:15:20.879146 containerd[1559]: time="2025-09-13T10:15:20.879096647Z" level=info msg="StartContainer for \"56b9c0c3869877783f877d6edf95c4d0961dd478c41196d48ff7c94a0b3ff013\""
Sep 13 10:15:20.881337 containerd[1559]: time="2025-09-13T10:15:20.881305800Z" level=info msg="connecting to shim 56b9c0c3869877783f877d6edf95c4d0961dd478c41196d48ff7c94a0b3ff013" address="unix:///run/containerd/s/ec54fec3e6ae16af6842fc7ff124c919524d9281e49a191e1a6abdc4dee3cec3" protocol=ttrpc version=3
Sep 13 10:15:20.905931 systemd[1]: Started cri-containerd-56b9c0c3869877783f877d6edf95c4d0961dd478c41196d48ff7c94a0b3ff013.scope - libcontainer container 56b9c0c3869877783f877d6edf95c4d0961dd478c41196d48ff7c94a0b3ff013.
Sep 13 10:15:20.956457 systemd[1]: cri-containerd-56b9c0c3869877783f877d6edf95c4d0961dd478c41196d48ff7c94a0b3ff013.scope: Deactivated successfully.
Sep 13 10:15:20.959102 containerd[1559]: time="2025-09-13T10:15:20.959046680Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56b9c0c3869877783f877d6edf95c4d0961dd478c41196d48ff7c94a0b3ff013\" id:\"56b9c0c3869877783f877d6edf95c4d0961dd478c41196d48ff7c94a0b3ff013\" pid:3287 exited_at:{seconds:1757758520 nanos:958655512}"
Sep 13 10:15:21.103467 containerd[1559]: time="2025-09-13T10:15:21.103330522Z" level=info msg="received exit event container_id:\"56b9c0c3869877783f877d6edf95c4d0961dd478c41196d48ff7c94a0b3ff013\" id:\"56b9c0c3869877783f877d6edf95c4d0961dd478c41196d48ff7c94a0b3ff013\" pid:3287 exited_at:{seconds:1757758520 nanos:958655512}"
Sep 13 10:15:21.106208 containerd[1559]: time="2025-09-13T10:15:21.106131578Z" level=info msg="StartContainer for \"56b9c0c3869877783f877d6edf95c4d0961dd478c41196d48ff7c94a0b3ff013\" returns successfully"
Sep 13 10:15:21.272197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f0452323eafd6f0fdf8992f7c2a373c49dddf39c28707b3a504c9af4864bd85-rootfs.mount: Deactivated successfully.
Sep 13 10:15:21.470000 containerd[1559]: time="2025-09-13T10:15:21.469913886Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:15:21.470620 containerd[1559]: time="2025-09-13T10:15:21.470557779Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Sep 13 10:15:21.471862 containerd[1559]: time="2025-09-13T10:15:21.471805019Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:15:21.473284 containerd[1559]: time="2025-09-13T10:15:21.473249981Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.486011304s"
Sep 13 10:15:21.473334 containerd[1559]: time="2025-09-13T10:15:21.473290567Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 13 10:15:21.475832 containerd[1559]: time="2025-09-13T10:15:21.475774585Z" level=info msg="CreateContainer within sandbox \"3aae1c7284bf74222b653f0bb471df1b279cf790e495b49137e840789c73111d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 13 10:15:21.855529 kubelet[2742]: E0913 10:15:21.855405 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:21.858028 containerd[1559]: time="2025-09-13T10:15:21.857975948Z" level=info msg="CreateContainer within sandbox \"671921def32d30c20f64b4b17110f7a207d5dbdec1c4b1709851e3c250a781f0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 10:15:21.924160 containerd[1559]: time="2025-09-13T10:15:21.924088180Z" level=info msg="Container fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:15:21.928262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3566953648.mount: Deactivated successfully.
Sep 13 10:15:21.933474 containerd[1559]: time="2025-09-13T10:15:21.933425818Z" level=info msg="CreateContainer within sandbox \"3aae1c7284bf74222b653f0bb471df1b279cf790e495b49137e840789c73111d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92\""
Sep 13 10:15:21.934243 containerd[1559]: time="2025-09-13T10:15:21.934189007Z" level=info msg="StartContainer for \"fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92\""
Sep 13 10:15:21.935898 containerd[1559]: time="2025-09-13T10:15:21.935552555Z" level=info msg="connecting to shim fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92" address="unix:///run/containerd/s/c306970fda168a5e2a044553f1db7bfd88cfb543daa3332659ce06928f95bc39" protocol=ttrpc version=3
Sep 13 10:15:21.936494 containerd[1559]: time="2025-09-13T10:15:21.936460015Z" level=info msg="Container e9095822a1e338f719293e2cd3dd5a82a1bfa9521353927fd0c202e5a091021a: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:15:21.942264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3452443836.mount: Deactivated successfully.
Sep 13 10:15:21.947620 containerd[1559]: time="2025-09-13T10:15:21.947566745Z" level=info msg="CreateContainer within sandbox \"671921def32d30c20f64b4b17110f7a207d5dbdec1c4b1709851e3c250a781f0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e9095822a1e338f719293e2cd3dd5a82a1bfa9521353927fd0c202e5a091021a\""
Sep 13 10:15:21.948259 containerd[1559]: time="2025-09-13T10:15:21.948231006Z" level=info msg="StartContainer for \"e9095822a1e338f719293e2cd3dd5a82a1bfa9521353927fd0c202e5a091021a\""
Sep 13 10:15:21.949308 containerd[1559]: time="2025-09-13T10:15:21.949271406Z" level=info msg="connecting to shim e9095822a1e338f719293e2cd3dd5a82a1bfa9521353927fd0c202e5a091021a" address="unix:///run/containerd/s/ec54fec3e6ae16af6842fc7ff124c919524d9281e49a191e1a6abdc4dee3cec3" protocol=ttrpc version=3
Sep 13 10:15:21.963952 systemd[1]: Started cri-containerd-fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92.scope - libcontainer container fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92.
Sep 13 10:15:21.982982 systemd[1]: Started cri-containerd-e9095822a1e338f719293e2cd3dd5a82a1bfa9521353927fd0c202e5a091021a.scope - libcontainer container e9095822a1e338f719293e2cd3dd5a82a1bfa9521353927fd0c202e5a091021a.
Sep 13 10:15:22.012899 containerd[1559]: time="2025-09-13T10:15:22.012835723Z" level=info msg="StartContainer for \"fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92\" returns successfully"
Sep 13 10:15:22.020226 systemd[1]: cri-containerd-e9095822a1e338f719293e2cd3dd5a82a1bfa9521353927fd0c202e5a091021a.scope: Deactivated successfully.
Sep 13 10:15:22.021063 containerd[1559]: time="2025-09-13T10:15:22.020618549Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9095822a1e338f719293e2cd3dd5a82a1bfa9521353927fd0c202e5a091021a\" id:\"e9095822a1e338f719293e2cd3dd5a82a1bfa9521353927fd0c202e5a091021a\" pid:3348 exited_at:{seconds:1757758522 nanos:20395870}"
Sep 13 10:15:22.023771 containerd[1559]: time="2025-09-13T10:15:22.023735048Z" level=info msg="received exit event container_id:\"e9095822a1e338f719293e2cd3dd5a82a1bfa9521353927fd0c202e5a091021a\" id:\"e9095822a1e338f719293e2cd3dd5a82a1bfa9521353927fd0c202e5a091021a\" pid:3348 exited_at:{seconds:1757758522 nanos:20395870}"
Sep 13 10:15:22.026019 containerd[1559]: time="2025-09-13T10:15:22.025985085Z" level=info msg="StartContainer for \"e9095822a1e338f719293e2cd3dd5a82a1bfa9521353927fd0c202e5a091021a\" returns successfully"
Sep 13 10:15:22.859848 kubelet[2742]: E0913 10:15:22.859692 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:22.864816 kubelet[2742]: E0913 10:15:22.864785 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:22.866866 containerd[1559]: time="2025-09-13T10:15:22.866819034Z" level=info msg="CreateContainer within sandbox \"671921def32d30c20f64b4b17110f7a207d5dbdec1c4b1709851e3c250a781f0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 10:15:23.445790 containerd[1559]: time="2025-09-13T10:15:23.443660325Z" level=info msg="Container 74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:15:23.447620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount341119201.mount: Deactivated successfully.
Sep 13 10:15:23.528854 kubelet[2742]: I0913 10:15:23.528789 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-89hzv" podStartSLOduration=3.551475762 podStartE2EDuration="17.528769957s" podCreationTimestamp="2025-09-13 10:15:06 +0000 UTC" firstStartedPulling="2025-09-13 10:15:07.496849699 +0000 UTC m=+5.924731647" lastFinishedPulling="2025-09-13 10:15:21.474143894 +0000 UTC m=+19.902025842" observedRunningTime="2025-09-13 10:15:23.528651885 +0000 UTC m=+21.956533833" watchObservedRunningTime="2025-09-13 10:15:23.528769957 +0000 UTC m=+21.956651905"
Sep 13 10:15:23.715072 containerd[1559]: time="2025-09-13T10:15:23.714937304Z" level=info msg="CreateContainer within sandbox \"671921def32d30c20f64b4b17110f7a207d5dbdec1c4b1709851e3c250a781f0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8\""
Sep 13 10:15:23.715418 containerd[1559]: time="2025-09-13T10:15:23.715386559Z" level=info msg="StartContainer for \"74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8\""
Sep 13 10:15:23.716451 containerd[1559]: time="2025-09-13T10:15:23.716420315Z" level=info msg="connecting to shim 74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8" address="unix:///run/containerd/s/ec54fec3e6ae16af6842fc7ff124c919524d9281e49a191e1a6abdc4dee3cec3" protocol=ttrpc version=3
Sep 13 10:15:23.754025 systemd[1]: Started cri-containerd-74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8.scope - libcontainer container 74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8.
Sep 13 10:15:23.801690 containerd[1559]: time="2025-09-13T10:15:23.801638731Z" level=info msg="StartContainer for \"74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8\" returns successfully"
Sep 13 10:15:23.872823 kubelet[2742]: E0913 10:15:23.872725 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:23.892502 containerd[1559]: time="2025-09-13T10:15:23.892362963Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8\" id:\"6df6b48cf4ae587d7ceb940c13190f9702cd6f9aeb1a7f63b5914a08737d92d6\" pid:3432 exited_at:{seconds:1757758523 nanos:892023002}"
Sep 13 10:15:23.955575 kubelet[2742]: I0913 10:15:23.955536 2742 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 13 10:15:24.010133 systemd[1]: Created slice kubepods-burstable-poded5bf145_bbf2_49e9_a27d_5966c0d01e2e.slice - libcontainer container kubepods-burstable-poded5bf145_bbf2_49e9_a27d_5966c0d01e2e.slice.
Sep 13 10:15:24.023098 systemd[1]: Created slice kubepods-burstable-pod9d69f580_d071_4777_8de0_d70e2ba18c6d.slice - libcontainer container kubepods-burstable-pod9d69f580_d071_4777_8de0_d70e2ba18c6d.slice.
Sep 13 10:15:24.190980 kubelet[2742]: I0913 10:15:24.190917 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l64k\" (UniqueName: \"kubernetes.io/projected/9d69f580-d071-4777-8de0-d70e2ba18c6d-kube-api-access-5l64k\") pod \"coredns-7c65d6cfc9-9wkb7\" (UID: \"9d69f580-d071-4777-8de0-d70e2ba18c6d\") " pod="kube-system/coredns-7c65d6cfc9-9wkb7"
Sep 13 10:15:24.190980 kubelet[2742]: I0913 10:15:24.190963 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed5bf145-bbf2-49e9-a27d-5966c0d01e2e-config-volume\") pod \"coredns-7c65d6cfc9-gh4hj\" (UID: \"ed5bf145-bbf2-49e9-a27d-5966c0d01e2e\") " pod="kube-system/coredns-7c65d6cfc9-gh4hj"
Sep 13 10:15:24.191187 kubelet[2742]: I0913 10:15:24.190986 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d69f580-d071-4777-8de0-d70e2ba18c6d-config-volume\") pod \"coredns-7c65d6cfc9-9wkb7\" (UID: \"9d69f580-d071-4777-8de0-d70e2ba18c6d\") " pod="kube-system/coredns-7c65d6cfc9-9wkb7"
Sep 13 10:15:24.191187 kubelet[2742]: I0913 10:15:24.191019 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8kxw\" (UniqueName: \"kubernetes.io/projected/ed5bf145-bbf2-49e9-a27d-5966c0d01e2e-kube-api-access-p8kxw\") pod \"coredns-7c65d6cfc9-gh4hj\" (UID: \"ed5bf145-bbf2-49e9-a27d-5966c0d01e2e\") " pod="kube-system/coredns-7c65d6cfc9-gh4hj"
Sep 13 10:15:24.329291 kubelet[2742]: E0913 10:15:24.329046 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:24.336527 containerd[1559]: time="2025-09-13T10:15:24.336471178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9wkb7,Uid:9d69f580-d071-4777-8de0-d70e2ba18c6d,Namespace:kube-system,Attempt:0,}"
Sep 13 10:15:24.616610 kubelet[2742]: E0913 10:15:24.616558 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:24.618470 containerd[1559]: time="2025-09-13T10:15:24.618393348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gh4hj,Uid:ed5bf145-bbf2-49e9-a27d-5966c0d01e2e,Namespace:kube-system,Attempt:0,}"
Sep 13 10:15:24.874615 kubelet[2742]: E0913 10:15:24.874431 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:25.876400 kubelet[2742]: E0913 10:15:25.876351 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:26.177500 systemd-networkd[1491]: cilium_host: Link UP
Sep 13 10:15:26.178277 systemd-networkd[1491]: cilium_net: Link UP
Sep 13 10:15:26.178560 systemd-networkd[1491]: cilium_net: Gained carrier
Sep 13 10:15:26.179005 systemd-networkd[1491]: cilium_host: Gained carrier
Sep 13 10:15:26.213929 systemd-networkd[1491]: cilium_host: Gained IPv6LL
Sep 13 10:15:26.295713 systemd-networkd[1491]: cilium_vxlan: Link UP
Sep 13 10:15:26.295727 systemd-networkd[1491]: cilium_vxlan: Gained carrier
Sep 13 10:15:26.509804 kernel: NET: Registered PF_ALG protocol family
Sep 13 10:15:26.875987 systemd-networkd[1491]: cilium_net: Gained IPv6LL
Sep 13 10:15:27.180707 systemd-networkd[1491]: lxc_health: Link UP
Sep 13 10:15:27.181028 systemd-networkd[1491]: lxc_health: Gained carrier
Sep 13 10:15:27.379831 systemd-networkd[1491]: lxce0be63a9572e: Link UP
Sep 13 10:15:27.380946 kernel: eth0: renamed from tmp78d06
Sep 13 10:15:27.381747 systemd-networkd[1491]: lxce0be63a9572e: Gained carrier
Sep 13 10:15:27.644915 systemd-networkd[1491]: cilium_vxlan: Gained IPv6LL
Sep 13 10:15:27.656191 systemd-networkd[1491]: lxcde8577fd0943: Link UP
Sep 13 10:15:27.657849 kernel: eth0: renamed from tmp5df3d
Sep 13 10:15:27.659785 systemd-networkd[1491]: lxcde8577fd0943: Gained carrier
Sep 13 10:15:28.667981 systemd-networkd[1491]: lxc_health: Gained IPv6LL
Sep 13 10:15:28.923983 systemd-networkd[1491]: lxcde8577fd0943: Gained IPv6LL
Sep 13 10:15:29.053747 systemd-networkd[1491]: lxce0be63a9572e: Gained IPv6LL
Sep 13 10:15:29.142099 kubelet[2742]: E0913 10:15:29.142048 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:29.188966 kubelet[2742]: I0913 10:15:29.188734 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-drfl7" podStartSLOduration=12.572108703 podStartE2EDuration="23.188707153s" podCreationTimestamp="2025-09-13 10:15:06 +0000 UTC" firstStartedPulling="2025-09-13 10:15:07.370343877 +0000 UTC m=+5.798225835" lastFinishedPulling="2025-09-13 10:15:17.986942337 +0000 UTC m=+16.414824285" observedRunningTime="2025-09-13 10:15:24.899879577 +0000 UTC m=+23.327761515" watchObservedRunningTime="2025-09-13 10:15:29.188707153 +0000 UTC m=+27.616589131"
Sep 13 10:15:29.889991 kubelet[2742]: E0913 10:15:29.889950 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:30.892829 kubelet[2742]: E0913 10:15:30.892787 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:31.040506 containerd[1559]: time="2025-09-13T10:15:31.040438628Z" level=info msg="connecting to shim 5df3dc9652a10f84da1b9309b7f42ceb1fd42d1a7a665b818a2856d19b21b85c" address="unix:///run/containerd/s/8d157c7b8d4b986b9a212969f39e37c65a6b99e59ec44e8314e3e3fddcb28193" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:15:31.041905 containerd[1559]: time="2025-09-13T10:15:31.041872985Z" level=info msg="connecting to shim 78d0687c6045d3de1d83dbe92a7c12f6a1b2ee9282bcbac3957280d66a262dd6" address="unix:///run/containerd/s/6194441f688f7bfdce6bf8fc9ecc9d1b82e5ba9576c049f6514211107911184c" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:15:31.073010 systemd[1]: Started cri-containerd-5df3dc9652a10f84da1b9309b7f42ceb1fd42d1a7a665b818a2856d19b21b85c.scope - libcontainer container 5df3dc9652a10f84da1b9309b7f42ceb1fd42d1a7a665b818a2856d19b21b85c.
Sep 13 10:15:31.076854 systemd[1]: Started cri-containerd-78d0687c6045d3de1d83dbe92a7c12f6a1b2ee9282bcbac3957280d66a262dd6.scope - libcontainer container 78d0687c6045d3de1d83dbe92a7c12f6a1b2ee9282bcbac3957280d66a262dd6.
Sep 13 10:15:31.093097 systemd-resolved[1415]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 10:15:31.094600 systemd-resolved[1415]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 10:15:31.139019 containerd[1559]: time="2025-09-13T10:15:31.138967193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9wkb7,Uid:9d69f580-d071-4777-8de0-d70e2ba18c6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"78d0687c6045d3de1d83dbe92a7c12f6a1b2ee9282bcbac3957280d66a262dd6\""
Sep 13 10:15:31.139223 containerd[1559]: time="2025-09-13T10:15:31.139061971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gh4hj,Uid:ed5bf145-bbf2-49e9-a27d-5966c0d01e2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"5df3dc9652a10f84da1b9309b7f42ceb1fd42d1a7a665b818a2856d19b21b85c\""
Sep 13 10:15:31.142980 kubelet[2742]: E0913 10:15:31.142854 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:31.143512 kubelet[2742]: E0913 10:15:31.143469 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:31.144919 containerd[1559]: time="2025-09-13T10:15:31.144885617Z" level=info msg="CreateContainer within sandbox \"5df3dc9652a10f84da1b9309b7f42ceb1fd42d1a7a665b818a2856d19b21b85c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 10:15:31.145439 containerd[1559]: time="2025-09-13T10:15:31.145245635Z" level=info msg="CreateContainer within sandbox \"78d0687c6045d3de1d83dbe92a7c12f6a1b2ee9282bcbac3957280d66a262dd6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 10:15:31.162139 containerd[1559]: time="2025-09-13T10:15:31.162080990Z" level=info msg="Container 14333f3e059158eaef7beb6dbbd8d10502193c0679a95f6ad92d41db3e6c7cfe: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:15:31.164090 containerd[1559]: time="2025-09-13T10:15:31.164048879Z" level=info msg="Container 95d3792cb4eee880509665ca1a58dfed6c30e8c6d0291c1bdb22fe43302e0599: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:15:31.169116 containerd[1559]: time="2025-09-13T10:15:31.169069396Z" level=info msg="CreateContainer within sandbox \"5df3dc9652a10f84da1b9309b7f42ceb1fd42d1a7a665b818a2856d19b21b85c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"14333f3e059158eaef7beb6dbbd8d10502193c0679a95f6ad92d41db3e6c7cfe\""
Sep 13 10:15:31.170019 containerd[1559]: time="2025-09-13T10:15:31.169837961Z" level=info msg="StartContainer for \"14333f3e059158eaef7beb6dbbd8d10502193c0679a95f6ad92d41db3e6c7cfe\""
Sep 13 10:15:31.171218 containerd[1559]: time="2025-09-13T10:15:31.171176307Z" level=info msg="connecting to shim 14333f3e059158eaef7beb6dbbd8d10502193c0679a95f6ad92d41db3e6c7cfe" address="unix:///run/containerd/s/8d157c7b8d4b986b9a212969f39e37c65a6b99e59ec44e8314e3e3fddcb28193" protocol=ttrpc version=3
Sep 13 10:15:31.174860 containerd[1559]: time="2025-09-13T10:15:31.174819876Z" level=info msg="CreateContainer within sandbox \"78d0687c6045d3de1d83dbe92a7c12f6a1b2ee9282bcbac3957280d66a262dd6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"95d3792cb4eee880509665ca1a58dfed6c30e8c6d0291c1bdb22fe43302e0599\""
Sep 13 10:15:31.175471 containerd[1559]: time="2025-09-13T10:15:31.175432506Z" level=info msg="StartContainer for \"95d3792cb4eee880509665ca1a58dfed6c30e8c6d0291c1bdb22fe43302e0599\""
Sep 13 10:15:31.176231 containerd[1559]: time="2025-09-13T10:15:31.176209538Z" level=info msg="connecting to shim 95d3792cb4eee880509665ca1a58dfed6c30e8c6d0291c1bdb22fe43302e0599" address="unix:///run/containerd/s/6194441f688f7bfdce6bf8fc9ecc9d1b82e5ba9576c049f6514211107911184c" protocol=ttrpc version=3
Sep 13 10:15:31.201952 systemd[1]: Started cri-containerd-14333f3e059158eaef7beb6dbbd8d10502193c0679a95f6ad92d41db3e6c7cfe.scope - libcontainer container 14333f3e059158eaef7beb6dbbd8d10502193c0679a95f6ad92d41db3e6c7cfe.
Sep 13 10:15:31.206066 systemd[1]: Started cri-containerd-95d3792cb4eee880509665ca1a58dfed6c30e8c6d0291c1bdb22fe43302e0599.scope - libcontainer container 95d3792cb4eee880509665ca1a58dfed6c30e8c6d0291c1bdb22fe43302e0599.
Sep 13 10:15:31.251085 containerd[1559]: time="2025-09-13T10:15:31.250947843Z" level=info msg="StartContainer for \"95d3792cb4eee880509665ca1a58dfed6c30e8c6d0291c1bdb22fe43302e0599\" returns successfully"
Sep 13 10:15:31.251471 containerd[1559]: time="2025-09-13T10:15:31.251304404Z" level=info msg="StartContainer for \"14333f3e059158eaef7beb6dbbd8d10502193c0679a95f6ad92d41db3e6c7cfe\" returns successfully"
Sep 13 10:15:31.906001 kubelet[2742]: E0913 10:15:31.905969 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:31.908702 kubelet[2742]: E0913 10:15:31.908681 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:32.090235 kubelet[2742]: I0913 10:15:32.089948 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gh4hj" podStartSLOduration=26.089924999 podStartE2EDuration="26.089924999s" podCreationTimestamp="2025-09-13 10:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:15:32.089520349 +0000 UTC m=+30.517402297" watchObservedRunningTime="2025-09-13 10:15:32.089924999 +0000 UTC m=+30.517806947"
Sep 13 10:15:32.272836 kubelet[2742]: I0913 10:15:32.272617 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9wkb7" podStartSLOduration=26.272597289 podStartE2EDuration="26.272597289s" podCreationTimestamp="2025-09-13 10:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:15:32.272138046 +0000 UTC m=+30.700019994" watchObservedRunningTime="2025-09-13 10:15:32.272597289 +0000 UTC m=+30.700479237"
Sep 13 10:15:32.911175 kubelet[2742]: E0913 10:15:32.910917 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:32.911175 kubelet[2742]: E0913 10:15:32.911076 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:33.549629 systemd[1]: Started sshd@9-10.0.0.20:22-10.0.0.1:54774.service - OpenSSH per-connection server daemon (10.0.0.1:54774).
Sep 13 10:15:33.608543 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 54774 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw
Sep 13 10:15:33.610447 sshd-session[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:15:33.615708 systemd-logind[1531]: New session 10 of user core.
Sep 13 10:15:33.626892 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 13 10:15:33.765233 sshd[4091]: Connection closed by 10.0.0.1 port 54774
Sep 13 10:15:33.765630 sshd-session[4088]: pam_unix(sshd:session): session closed for user core
Sep 13 10:15:33.771014 systemd[1]: sshd@9-10.0.0.20:22-10.0.0.1:54774.service: Deactivated successfully.
Sep 13 10:15:33.773902 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 10:15:33.775082 systemd-logind[1531]: Session 10 logged out. Waiting for processes to exit.
Sep 13 10:15:33.777045 systemd-logind[1531]: Removed session 10.
Sep 13 10:15:33.912443 kubelet[2742]: E0913 10:15:33.912399 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:33.912938 kubelet[2742]: E0913 10:15:33.912552 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:38.779879 systemd[1]: Started sshd@10-10.0.0.20:22-10.0.0.1:54780.service - OpenSSH per-connection server daemon (10.0.0.1:54780).
Sep 13 10:15:38.831677 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 54780 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw
Sep 13 10:15:38.833228 sshd-session[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:15:38.838203 systemd-logind[1531]: New session 11 of user core.
Sep 13 10:15:38.852103 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 13 10:15:38.972515 sshd[4110]: Connection closed by 10.0.0.1 port 54780
Sep 13 10:15:38.972903 sshd-session[4107]: pam_unix(sshd:session): session closed for user core
Sep 13 10:15:38.978677 systemd[1]: sshd@10-10.0.0.20:22-10.0.0.1:54780.service: Deactivated successfully.
Sep 13 10:15:38.981622 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 10:15:38.982775 systemd-logind[1531]: Session 11 logged out. Waiting for processes to exit.
Sep 13 10:15:38.985300 systemd-logind[1531]: Removed session 11.
Sep 13 10:15:43.985559 systemd[1]: Started sshd@11-10.0.0.20:22-10.0.0.1:41938.service - OpenSSH per-connection server daemon (10.0.0.1:41938).
Sep 13 10:15:44.046385 sshd[4124]: Accepted publickey for core from 10.0.0.1 port 41938 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:15:44.047911 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:15:44.052455 systemd-logind[1531]: New session 12 of user core. Sep 13 10:15:44.061889 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 10:15:44.177439 sshd[4127]: Connection closed by 10.0.0.1 port 41938 Sep 13 10:15:44.177815 sshd-session[4124]: pam_unix(sshd:session): session closed for user core Sep 13 10:15:44.182047 systemd[1]: sshd@11-10.0.0.20:22-10.0.0.1:41938.service: Deactivated successfully. Sep 13 10:15:44.184294 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 10:15:44.185299 systemd-logind[1531]: Session 12 logged out. Waiting for processes to exit. Sep 13 10:15:44.186694 systemd-logind[1531]: Removed session 12. Sep 13 10:15:49.193805 systemd[1]: Started sshd@12-10.0.0.20:22-10.0.0.1:41944.service - OpenSSH per-connection server daemon (10.0.0.1:41944). Sep 13 10:15:49.250828 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 41944 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:15:49.251901 sshd-session[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:15:49.256583 systemd-logind[1531]: New session 13 of user core. Sep 13 10:15:49.270917 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 10:15:49.377062 sshd[4145]: Connection closed by 10.0.0.1 port 41944 Sep 13 10:15:49.377428 sshd-session[4142]: pam_unix(sshd:session): session closed for user core Sep 13 10:15:49.388385 systemd[1]: sshd@12-10.0.0.20:22-10.0.0.1:41944.service: Deactivated successfully. Sep 13 10:15:49.390252 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 10:15:49.391156 systemd-logind[1531]: Session 13 logged out. Waiting for processes to exit. 
Sep 13 10:15:49.393676 systemd[1]: Started sshd@13-10.0.0.20:22-10.0.0.1:41948.service - OpenSSH per-connection server daemon (10.0.0.1:41948). Sep 13 10:15:49.394347 systemd-logind[1531]: Removed session 13. Sep 13 10:15:49.443098 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 41948 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:15:49.444395 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:15:49.448521 systemd-logind[1531]: New session 14 of user core. Sep 13 10:15:49.458890 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 10:15:49.597785 sshd[4163]: Connection closed by 10.0.0.1 port 41948 Sep 13 10:15:49.598133 sshd-session[4160]: pam_unix(sshd:session): session closed for user core Sep 13 10:15:49.611465 systemd[1]: sshd@13-10.0.0.20:22-10.0.0.1:41948.service: Deactivated successfully. Sep 13 10:15:49.615343 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 10:15:49.616879 systemd-logind[1531]: Session 14 logged out. Waiting for processes to exit. Sep 13 10:15:49.620559 systemd-logind[1531]: Removed session 14. Sep 13 10:15:49.621998 systemd[1]: Started sshd@14-10.0.0.20:22-10.0.0.1:41962.service - OpenSSH per-connection server daemon (10.0.0.1:41962). Sep 13 10:15:49.674583 sshd[4175]: Accepted publickey for core from 10.0.0.1 port 41962 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:15:49.676198 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:15:49.681005 systemd-logind[1531]: New session 15 of user core. Sep 13 10:15:49.690893 systemd[1]: Started session-15.scope - Session 15 of User core. 
Sep 13 10:15:49.801464 sshd[4178]: Connection closed by 10.0.0.1 port 41962 Sep 13 10:15:49.801726 sshd-session[4175]: pam_unix(sshd:session): session closed for user core Sep 13 10:15:49.806968 systemd[1]: sshd@14-10.0.0.20:22-10.0.0.1:41962.service: Deactivated successfully. Sep 13 10:15:49.809113 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 10:15:49.809989 systemd-logind[1531]: Session 15 logged out. Waiting for processes to exit. Sep 13 10:15:49.811236 systemd-logind[1531]: Removed session 15. Sep 13 10:15:54.817379 systemd[1]: Started sshd@15-10.0.0.20:22-10.0.0.1:53094.service - OpenSSH per-connection server daemon (10.0.0.1:53094). Sep 13 10:15:54.868930 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 53094 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:15:54.870225 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:15:54.874637 systemd-logind[1531]: New session 16 of user core. Sep 13 10:15:54.880908 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 10:15:54.985787 sshd[4194]: Connection closed by 10.0.0.1 port 53094 Sep 13 10:15:54.986218 sshd-session[4191]: pam_unix(sshd:session): session closed for user core Sep 13 10:15:54.990252 systemd[1]: sshd@15-10.0.0.20:22-10.0.0.1:53094.service: Deactivated successfully. Sep 13 10:15:54.992467 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 10:15:54.993387 systemd-logind[1531]: Session 16 logged out. Waiting for processes to exit. Sep 13 10:15:54.994715 systemd-logind[1531]: Removed session 16. Sep 13 10:16:00.006165 systemd[1]: Started sshd@16-10.0.0.20:22-10.0.0.1:59312.service - OpenSSH per-connection server daemon (10.0.0.1:59312). 
Sep 13 10:16:00.055972 sshd[4207]: Accepted publickey for core from 10.0.0.1 port 59312 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:16:00.057713 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:16:00.062721 systemd-logind[1531]: New session 17 of user core. Sep 13 10:16:00.071985 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 10:16:00.186387 sshd[4210]: Connection closed by 10.0.0.1 port 59312 Sep 13 10:16:00.186829 sshd-session[4207]: pam_unix(sshd:session): session closed for user core Sep 13 10:16:00.192046 systemd[1]: sshd@16-10.0.0.20:22-10.0.0.1:59312.service: Deactivated successfully. Sep 13 10:16:00.194655 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 10:16:00.195803 systemd-logind[1531]: Session 17 logged out. Waiting for processes to exit. Sep 13 10:16:00.197611 systemd-logind[1531]: Removed session 17. Sep 13 10:16:05.204146 systemd[1]: Started sshd@17-10.0.0.20:22-10.0.0.1:59326.service - OpenSSH per-connection server daemon (10.0.0.1:59326). Sep 13 10:16:05.263670 sshd[4225]: Accepted publickey for core from 10.0.0.1 port 59326 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:16:05.265298 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:16:05.269931 systemd-logind[1531]: New session 18 of user core. Sep 13 10:16:05.277883 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 10:16:05.385687 sshd[4228]: Connection closed by 10.0.0.1 port 59326 Sep 13 10:16:05.386117 sshd-session[4225]: pam_unix(sshd:session): session closed for user core Sep 13 10:16:05.394599 systemd[1]: sshd@17-10.0.0.20:22-10.0.0.1:59326.service: Deactivated successfully. Sep 13 10:16:05.396709 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 10:16:05.397591 systemd-logind[1531]: Session 18 logged out. Waiting for processes to exit. 
Sep 13 10:16:05.400734 systemd[1]: Started sshd@18-10.0.0.20:22-10.0.0.1:59334.service - OpenSSH per-connection server daemon (10.0.0.1:59334). Sep 13 10:16:05.401441 systemd-logind[1531]: Removed session 18. Sep 13 10:16:05.458616 sshd[4242]: Accepted publickey for core from 10.0.0.1 port 59334 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:16:05.460673 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:16:05.465490 systemd-logind[1531]: New session 19 of user core. Sep 13 10:16:05.472895 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 10:16:05.808950 sshd[4245]: Connection closed by 10.0.0.1 port 59334 Sep 13 10:16:05.809383 sshd-session[4242]: pam_unix(sshd:session): session closed for user core Sep 13 10:16:05.818794 systemd[1]: sshd@18-10.0.0.20:22-10.0.0.1:59334.service: Deactivated successfully. Sep 13 10:16:05.820943 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 10:16:05.821833 systemd-logind[1531]: Session 19 logged out. Waiting for processes to exit. Sep 13 10:16:05.824688 systemd[1]: Started sshd@19-10.0.0.20:22-10.0.0.1:59348.service - OpenSSH per-connection server daemon (10.0.0.1:59348). Sep 13 10:16:05.825426 systemd-logind[1531]: Removed session 19. Sep 13 10:16:05.881204 sshd[4256]: Accepted publickey for core from 10.0.0.1 port 59348 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:16:05.883099 sshd-session[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:16:05.888076 systemd-logind[1531]: New session 20 of user core. Sep 13 10:16:05.896893 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 13 10:16:07.087283 sshd[4259]: Connection closed by 10.0.0.1 port 59348 Sep 13 10:16:07.087743 sshd-session[4256]: pam_unix(sshd:session): session closed for user core Sep 13 10:16:07.100158 systemd[1]: sshd@19-10.0.0.20:22-10.0.0.1:59348.service: Deactivated successfully. Sep 13 10:16:07.103332 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 10:16:07.106573 systemd-logind[1531]: Session 20 logged out. Waiting for processes to exit. Sep 13 10:16:07.111146 systemd[1]: Started sshd@20-10.0.0.20:22-10.0.0.1:59358.service - OpenSSH per-connection server daemon (10.0.0.1:59358). Sep 13 10:16:07.112127 systemd-logind[1531]: Removed session 20. Sep 13 10:16:07.162675 sshd[4278]: Accepted publickey for core from 10.0.0.1 port 59358 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:16:07.164839 sshd-session[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:16:07.169598 systemd-logind[1531]: New session 21 of user core. Sep 13 10:16:07.176951 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 10:16:07.403917 sshd[4281]: Connection closed by 10.0.0.1 port 59358 Sep 13 10:16:07.404981 sshd-session[4278]: pam_unix(sshd:session): session closed for user core Sep 13 10:16:07.415004 systemd[1]: sshd@20-10.0.0.20:22-10.0.0.1:59358.service: Deactivated successfully. Sep 13 10:16:07.419442 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 10:16:07.420875 systemd-logind[1531]: Session 21 logged out. Waiting for processes to exit. Sep 13 10:16:07.423929 systemd[1]: Started sshd@21-10.0.0.20:22-10.0.0.1:59362.service - OpenSSH per-connection server daemon (10.0.0.1:59362). Sep 13 10:16:07.424790 systemd-logind[1531]: Removed session 21. 
Sep 13 10:16:07.479709 sshd[4292]: Accepted publickey for core from 10.0.0.1 port 59362 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:16:07.481795 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:16:07.487437 systemd-logind[1531]: New session 22 of user core. Sep 13 10:16:07.502956 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 13 10:16:07.623182 sshd[4295]: Connection closed by 10.0.0.1 port 59362 Sep 13 10:16:07.623991 sshd-session[4292]: pam_unix(sshd:session): session closed for user core Sep 13 10:16:07.629487 systemd[1]: sshd@21-10.0.0.20:22-10.0.0.1:59362.service: Deactivated successfully. Sep 13 10:16:07.632234 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 10:16:07.633151 systemd-logind[1531]: Session 22 logged out. Waiting for processes to exit. Sep 13 10:16:07.634484 systemd-logind[1531]: Removed session 22. Sep 13 10:16:12.637672 systemd[1]: Started sshd@22-10.0.0.20:22-10.0.0.1:49168.service - OpenSSH per-connection server daemon (10.0.0.1:49168). Sep 13 10:16:12.703493 sshd[4310]: Accepted publickey for core from 10.0.0.1 port 49168 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:16:12.704927 sshd-session[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:16:12.710364 systemd-logind[1531]: New session 23 of user core. Sep 13 10:16:12.716955 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 13 10:16:12.835073 sshd[4313]: Connection closed by 10.0.0.1 port 49168 Sep 13 10:16:12.835473 sshd-session[4310]: pam_unix(sshd:session): session closed for user core Sep 13 10:16:12.840935 systemd[1]: sshd@22-10.0.0.20:22-10.0.0.1:49168.service: Deactivated successfully. Sep 13 10:16:12.843846 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 10:16:12.844893 systemd-logind[1531]: Session 23 logged out. Waiting for processes to exit. 
Sep 13 10:16:12.846658 systemd-logind[1531]: Removed session 23. Sep 13 10:16:17.858104 systemd[1]: Started sshd@23-10.0.0.20:22-10.0.0.1:49176.service - OpenSSH per-connection server daemon (10.0.0.1:49176). Sep 13 10:16:17.919289 sshd[4329]: Accepted publickey for core from 10.0.0.1 port 49176 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:16:17.920498 sshd-session[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:16:17.924913 systemd-logind[1531]: New session 24 of user core. Sep 13 10:16:17.934909 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 13 10:16:18.041516 sshd[4332]: Connection closed by 10.0.0.1 port 49176 Sep 13 10:16:18.041929 sshd-session[4329]: pam_unix(sshd:session): session closed for user core Sep 13 10:16:18.047129 systemd[1]: sshd@23-10.0.0.20:22-10.0.0.1:49176.service: Deactivated successfully. Sep 13 10:16:18.049294 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 10:16:18.050071 systemd-logind[1531]: Session 24 logged out. Waiting for processes to exit. Sep 13 10:16:18.051317 systemd-logind[1531]: Removed session 24. Sep 13 10:16:23.059084 systemd[1]: Started sshd@24-10.0.0.20:22-10.0.0.1:46438.service - OpenSSH per-connection server daemon (10.0.0.1:46438). Sep 13 10:16:23.122371 sshd[4345]: Accepted publickey for core from 10.0.0.1 port 46438 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:16:23.124088 sshd-session[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:16:23.129078 systemd-logind[1531]: New session 25 of user core. Sep 13 10:16:23.141955 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 13 10:16:23.260238 sshd[4348]: Connection closed by 10.0.0.1 port 46438 Sep 13 10:16:23.260620 sshd-session[4345]: pam_unix(sshd:session): session closed for user core Sep 13 10:16:23.265377 systemd[1]: sshd@24-10.0.0.20:22-10.0.0.1:46438.service: Deactivated successfully. Sep 13 10:16:23.267603 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 10:16:23.268585 systemd-logind[1531]: Session 25 logged out. Waiting for processes to exit. Sep 13 10:16:23.269943 systemd-logind[1531]: Removed session 25. Sep 13 10:16:28.273970 systemd[1]: Started sshd@25-10.0.0.20:22-10.0.0.1:46452.service - OpenSSH per-connection server daemon (10.0.0.1:46452). Sep 13 10:16:28.328482 sshd[4361]: Accepted publickey for core from 10.0.0.1 port 46452 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:16:28.329986 sshd-session[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:16:28.334489 systemd-logind[1531]: New session 26 of user core. Sep 13 10:16:28.345939 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 13 10:16:28.456818 sshd[4364]: Connection closed by 10.0.0.1 port 46452 Sep 13 10:16:28.457193 sshd-session[4361]: pam_unix(sshd:session): session closed for user core Sep 13 10:16:28.465977 systemd[1]: sshd@25-10.0.0.20:22-10.0.0.1:46452.service: Deactivated successfully. Sep 13 10:16:28.467970 systemd[1]: session-26.scope: Deactivated successfully. Sep 13 10:16:28.468723 systemd-logind[1531]: Session 26 logged out. Waiting for processes to exit. Sep 13 10:16:28.471296 systemd[1]: Started sshd@26-10.0.0.20:22-10.0.0.1:46458.service - OpenSSH per-connection server daemon (10.0.0.1:46458). Sep 13 10:16:28.472093 systemd-logind[1531]: Removed session 26. 
Sep 13 10:16:28.519531 sshd[4377]: Accepted publickey for core from 10.0.0.1 port 46458 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:16:28.520861 sshd-session[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:16:28.525112 systemd-logind[1531]: New session 27 of user core. Sep 13 10:16:28.538908 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 13 10:16:30.405605 containerd[1559]: time="2025-09-13T10:16:30.405432010Z" level=info msg="StopContainer for \"fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92\" with timeout 30 (s)" Sep 13 10:16:30.412322 containerd[1559]: time="2025-09-13T10:16:30.412264671Z" level=info msg="Stop container \"fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92\" with signal terminated" Sep 13 10:16:30.430248 systemd[1]: cri-containerd-fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92.scope: Deactivated successfully. Sep 13 10:16:30.432790 containerd[1559]: time="2025-09-13T10:16:30.432603042Z" level=info msg="received exit event container_id:\"fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92\" id:\"fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92\" pid:3338 exited_at:{seconds:1757758590 nanos:431329558}" Sep 13 10:16:30.433101 containerd[1559]: time="2025-09-13T10:16:30.433071383Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92\" id:\"fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92\" pid:3338 exited_at:{seconds:1757758590 nanos:431329558}" Sep 13 10:16:30.433476 containerd[1559]: time="2025-09-13T10:16:30.433451116Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 
10:16:30.441161 containerd[1559]: time="2025-09-13T10:16:30.441084351Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8\" id:\"b13cbe5f660ccf4b8d9ef46e41706af7254228dcf7879aeb66f7b0019783d4fc\" pid:4401 exited_at:{seconds:1757758590 nanos:440539003}" Sep 13 10:16:30.443709 containerd[1559]: time="2025-09-13T10:16:30.443677124Z" level=info msg="StopContainer for \"74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8\" with timeout 2 (s)" Sep 13 10:16:30.444387 containerd[1559]: time="2025-09-13T10:16:30.444348842Z" level=info msg="Stop container \"74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8\" with signal terminated" Sep 13 10:16:30.453945 systemd-networkd[1491]: lxc_health: Link DOWN Sep 13 10:16:30.453956 systemd-networkd[1491]: lxc_health: Lost carrier Sep 13 10:16:30.465180 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92-rootfs.mount: Deactivated successfully. Sep 13 10:16:30.478339 systemd[1]: cri-containerd-74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8.scope: Deactivated successfully. Sep 13 10:16:30.478894 systemd[1]: cri-containerd-74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8.scope: Consumed 6.837s CPU time, 124.6M memory peak, 220K read from disk, 13.3M written to disk. 
Sep 13 10:16:30.479069 containerd[1559]: time="2025-09-13T10:16:30.478928220Z" level=info msg="received exit event container_id:\"74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8\" id:\"74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8\" pid:3402 exited_at:{seconds:1757758590 nanos:478564769}" Sep 13 10:16:30.479322 containerd[1559]: time="2025-09-13T10:16:30.479294437Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8\" id:\"74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8\" pid:3402 exited_at:{seconds:1757758590 nanos:478564769}" Sep 13 10:16:30.505548 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8-rootfs.mount: Deactivated successfully. Sep 13 10:16:30.509299 containerd[1559]: time="2025-09-13T10:16:30.509238766Z" level=info msg="StopContainer for \"fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92\" returns successfully" Sep 13 10:16:30.511872 containerd[1559]: time="2025-09-13T10:16:30.511820358Z" level=info msg="StopPodSandbox for \"3aae1c7284bf74222b653f0bb471df1b279cf790e495b49137e840789c73111d\"" Sep 13 10:16:30.520644 containerd[1559]: time="2025-09-13T10:16:30.520589395Z" level=info msg="Container to stop \"fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 10:16:30.529728 systemd[1]: cri-containerd-3aae1c7284bf74222b653f0bb471df1b279cf790e495b49137e840789c73111d.scope: Deactivated successfully. 
Sep 13 10:16:30.531054 containerd[1559]: time="2025-09-13T10:16:30.531001647Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3aae1c7284bf74222b653f0bb471df1b279cf790e495b49137e840789c73111d\" id:\"3aae1c7284bf74222b653f0bb471df1b279cf790e495b49137e840789c73111d\" pid:2986 exit_status:137 exited_at:{seconds:1757758590 nanos:530593340}" Sep 13 10:16:30.533158 containerd[1559]: time="2025-09-13T10:16:30.533124427Z" level=info msg="StopContainer for \"74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8\" returns successfully" Sep 13 10:16:30.533662 containerd[1559]: time="2025-09-13T10:16:30.533628366Z" level=info msg="StopPodSandbox for \"671921def32d30c20f64b4b17110f7a207d5dbdec1c4b1709851e3c250a781f0\"" Sep 13 10:16:30.533744 containerd[1559]: time="2025-09-13T10:16:30.533701104Z" level=info msg="Container to stop \"8f0452323eafd6f0fdf8992f7c2a373c49dddf39c28707b3a504c9af4864bd85\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 10:16:30.533744 containerd[1559]: time="2025-09-13T10:16:30.533716954Z" level=info msg="Container to stop \"56b9c0c3869877783f877d6edf95c4d0961dd478c41196d48ff7c94a0b3ff013\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 10:16:30.533744 containerd[1559]: time="2025-09-13T10:16:30.533728005Z" level=info msg="Container to stop \"e1cde16f93bfb8c81e6c4c0b2a71138b98d1fdc5a09a6247ebf9f130007f57bc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 10:16:30.533744 containerd[1559]: time="2025-09-13T10:16:30.533736542Z" level=info msg="Container to stop \"e9095822a1e338f719293e2cd3dd5a82a1bfa9521353927fd0c202e5a091021a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 10:16:30.533744 containerd[1559]: time="2025-09-13T10:16:30.533745128Z" level=info msg="Container to stop \"74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Sep 13 10:16:30.540974 systemd[1]: cri-containerd-671921def32d30c20f64b4b17110f7a207d5dbdec1c4b1709851e3c250a781f0.scope: Deactivated successfully. Sep 13 10:16:30.567631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-671921def32d30c20f64b4b17110f7a207d5dbdec1c4b1709851e3c250a781f0-rootfs.mount: Deactivated successfully. Sep 13 10:16:30.571938 containerd[1559]: time="2025-09-13T10:16:30.571889791Z" level=info msg="shim disconnected" id=671921def32d30c20f64b4b17110f7a207d5dbdec1c4b1709851e3c250a781f0 namespace=k8s.io Sep 13 10:16:30.572179 containerd[1559]: time="2025-09-13T10:16:30.571920850Z" level=warning msg="cleaning up after shim disconnected" id=671921def32d30c20f64b4b17110f7a207d5dbdec1c4b1709851e3c250a781f0 namespace=k8s.io Sep 13 10:16:30.574537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3aae1c7284bf74222b653f0bb471df1b279cf790e495b49137e840789c73111d-rootfs.mount: Deactivated successfully. Sep 13 10:16:30.579409 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3aae1c7284bf74222b653f0bb471df1b279cf790e495b49137e840789c73111d-shm.mount: Deactivated successfully. 
Sep 13 10:16:30.596958 containerd[1559]: time="2025-09-13T10:16:30.572166186Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 10:16:30.597204 containerd[1559]: time="2025-09-13T10:16:30.576596506Z" level=info msg="shim disconnected" id=3aae1c7284bf74222b653f0bb471df1b279cf790e495b49137e840789c73111d namespace=k8s.io Sep 13 10:16:30.597204 containerd[1559]: time="2025-09-13T10:16:30.597023455Z" level=warning msg="cleaning up after shim disconnected" id=3aae1c7284bf74222b653f0bb471df1b279cf790e495b49137e840789c73111d namespace=k8s.io Sep 13 10:16:30.597204 containerd[1559]: time="2025-09-13T10:16:30.597031221Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 10:16:30.605024 containerd[1559]: time="2025-09-13T10:16:30.604815833Z" level=info msg="TearDown network for sandbox \"3aae1c7284bf74222b653f0bb471df1b279cf790e495b49137e840789c73111d\" successfully" Sep 13 10:16:30.605024 containerd[1559]: time="2025-09-13T10:16:30.604870877Z" level=info msg="StopPodSandbox for \"3aae1c7284bf74222b653f0bb471df1b279cf790e495b49137e840789c73111d\" returns successfully" Sep 13 10:16:30.608668 containerd[1559]: time="2025-09-13T10:16:30.608621144Z" level=info msg="received exit event sandbox_id:\"3aae1c7284bf74222b653f0bb471df1b279cf790e495b49137e840789c73111d\" exit_status:137 exited_at:{seconds:1757758590 nanos:530593340}" Sep 13 10:16:30.625243 containerd[1559]: time="2025-09-13T10:16:30.625193078Z" level=info msg="TaskExit event in podsandbox handler container_id:\"671921def32d30c20f64b4b17110f7a207d5dbdec1c4b1709851e3c250a781f0\" id:\"671921def32d30c20f64b4b17110f7a207d5dbdec1c4b1709851e3c250a781f0\" pid:2917 exit_status:137 exited_at:{seconds:1757758590 nanos:541490596}" Sep 13 10:16:30.625419 containerd[1559]: time="2025-09-13T10:16:30.625207274Z" level=info msg="received exit event sandbox_id:\"671921def32d30c20f64b4b17110f7a207d5dbdec1c4b1709851e3c250a781f0\" exit_status:137 exited_at:{seconds:1757758590 nanos:541490596}" Sep 13 
10:16:30.625638 containerd[1559]: time="2025-09-13T10:16:30.625599220Z" level=info msg="TearDown network for sandbox \"671921def32d30c20f64b4b17110f7a207d5dbdec1c4b1709851e3c250a781f0\" successfully" Sep 13 10:16:30.625666 containerd[1559]: time="2025-09-13T10:16:30.625638625Z" level=info msg="StopPodSandbox for \"671921def32d30c20f64b4b17110f7a207d5dbdec1c4b1709851e3c250a781f0\" returns successfully" Sep 13 10:16:30.781491 kubelet[2742]: I0913 10:16:30.781346 2742 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/43217a5d-b542-4265-85a1-8b896b235eba-clustermesh-secrets\") pod \"43217a5d-b542-4265-85a1-8b896b235eba\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " Sep 13 10:16:30.781491 kubelet[2742]: I0913 10:16:30.781391 2742 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-cilium-run\") pod \"43217a5d-b542-4265-85a1-8b896b235eba\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " Sep 13 10:16:30.781491 kubelet[2742]: I0913 10:16:30.781406 2742 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-hostproc\") pod \"43217a5d-b542-4265-85a1-8b896b235eba\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " Sep 13 10:16:30.781491 kubelet[2742]: I0913 10:16:30.781419 2742 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-bpf-maps\") pod \"43217a5d-b542-4265-85a1-8b896b235eba\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " Sep 13 10:16:30.781491 kubelet[2742]: I0913 10:16:30.781443 2742 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s8l8\" (UniqueName: 
\"kubernetes.io/projected/43217a5d-b542-4265-85a1-8b896b235eba-kube-api-access-5s8l8\") pod \"43217a5d-b542-4265-85a1-8b896b235eba\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " Sep 13 10:16:30.781491 kubelet[2742]: I0913 10:16:30.781458 2742 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-cilium-cgroup\") pod \"43217a5d-b542-4265-85a1-8b896b235eba\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " Sep 13 10:16:30.782731 kubelet[2742]: I0913 10:16:30.781475 2742 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gjtn\" (UniqueName: \"kubernetes.io/projected/90a12d8c-758c-4b9f-b3c1-f70ae6adb997-kube-api-access-8gjtn\") pod \"90a12d8c-758c-4b9f-b3c1-f70ae6adb997\" (UID: \"90a12d8c-758c-4b9f-b3c1-f70ae6adb997\") " Sep 13 10:16:30.782731 kubelet[2742]: I0913 10:16:30.781512 2742 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90a12d8c-758c-4b9f-b3c1-f70ae6adb997-cilium-config-path\") pod \"90a12d8c-758c-4b9f-b3c1-f70ae6adb997\" (UID: \"90a12d8c-758c-4b9f-b3c1-f70ae6adb997\") " Sep 13 10:16:30.782731 kubelet[2742]: I0913 10:16:30.781528 2742 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/43217a5d-b542-4265-85a1-8b896b235eba-hubble-tls\") pod \"43217a5d-b542-4265-85a1-8b896b235eba\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " Sep 13 10:16:30.782731 kubelet[2742]: I0913 10:16:30.781542 2742 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-xtables-lock\") pod \"43217a5d-b542-4265-85a1-8b896b235eba\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " Sep 13 10:16:30.782731 kubelet[2742]: I0913 
10:16:30.781555 2742 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-cni-path\") pod \"43217a5d-b542-4265-85a1-8b896b235eba\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " Sep 13 10:16:30.782731 kubelet[2742]: I0913 10:16:30.781570 2742 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-etc-cni-netd\") pod \"43217a5d-b542-4265-85a1-8b896b235eba\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " Sep 13 10:16:30.782933 kubelet[2742]: I0913 10:16:30.781550 2742 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-hostproc" (OuterVolumeSpecName: "hostproc") pod "43217a5d-b542-4265-85a1-8b896b235eba" (UID: "43217a5d-b542-4265-85a1-8b896b235eba"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 10:16:30.782933 kubelet[2742]: I0913 10:16:30.781589 2742 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "43217a5d-b542-4265-85a1-8b896b235eba" (UID: "43217a5d-b542-4265-85a1-8b896b235eba"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 10:16:30.782933 kubelet[2742]: I0913 10:16:30.781617 2742 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "43217a5d-b542-4265-85a1-8b896b235eba" (UID: "43217a5d-b542-4265-85a1-8b896b235eba"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 10:16:30.782933 kubelet[2742]: I0913 10:16:30.781584 2742 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-host-proc-sys-kernel\") pod \"43217a5d-b542-4265-85a1-8b896b235eba\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " Sep 13 10:16:30.782933 kubelet[2742]: I0913 10:16:30.781673 2742 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-host-proc-sys-net\") pod \"43217a5d-b542-4265-85a1-8b896b235eba\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " Sep 13 10:16:30.783159 kubelet[2742]: I0913 10:16:30.781695 2742 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/43217a5d-b542-4265-85a1-8b896b235eba-cilium-config-path\") pod \"43217a5d-b542-4265-85a1-8b896b235eba\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " Sep 13 10:16:30.783159 kubelet[2742]: I0913 10:16:30.781710 2742 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-lib-modules\") pod \"43217a5d-b542-4265-85a1-8b896b235eba\" (UID: \"43217a5d-b542-4265-85a1-8b896b235eba\") " Sep 13 10:16:30.783159 kubelet[2742]: I0913 10:16:30.781799 2742 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 13 10:16:30.783159 kubelet[2742]: I0913 10:16:30.781815 2742 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-cilium-run\") on node \"localhost\" DevicePath 
\"\"" Sep 13 10:16:30.783159 kubelet[2742]: I0913 10:16:30.781826 2742 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 13 10:16:30.783159 kubelet[2742]: I0913 10:16:30.781845 2742 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "43217a5d-b542-4265-85a1-8b896b235eba" (UID: "43217a5d-b542-4265-85a1-8b896b235eba"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 10:16:30.783332 kubelet[2742]: I0913 10:16:30.781550 2742 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "43217a5d-b542-4265-85a1-8b896b235eba" (UID: "43217a5d-b542-4265-85a1-8b896b235eba"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 10:16:30.783332 kubelet[2742]: I0913 10:16:30.781867 2742 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "43217a5d-b542-4265-85a1-8b896b235eba" (UID: "43217a5d-b542-4265-85a1-8b896b235eba"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 10:16:30.785181 kubelet[2742]: I0913 10:16:30.785147 2742 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43217a5d-b542-4265-85a1-8b896b235eba-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "43217a5d-b542-4265-85a1-8b896b235eba" (UID: "43217a5d-b542-4265-85a1-8b896b235eba"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 10:16:30.785256 kubelet[2742]: I0913 10:16:30.785207 2742 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "43217a5d-b542-4265-85a1-8b896b235eba" (UID: "43217a5d-b542-4265-85a1-8b896b235eba"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 10:16:30.785981 kubelet[2742]: I0913 10:16:30.785954 2742 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43217a5d-b542-4265-85a1-8b896b235eba-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "43217a5d-b542-4265-85a1-8b896b235eba" (UID: "43217a5d-b542-4265-85a1-8b896b235eba"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 10:16:30.786091 kubelet[2742]: I0913 10:16:30.786076 2742 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "43217a5d-b542-4265-85a1-8b896b235eba" (UID: "43217a5d-b542-4265-85a1-8b896b235eba"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 10:16:30.786163 kubelet[2742]: I0913 10:16:30.786150 2742 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-cni-path" (OuterVolumeSpecName: "cni-path") pod "43217a5d-b542-4265-85a1-8b896b235eba" (UID: "43217a5d-b542-4265-85a1-8b896b235eba"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 10:16:30.786240 kubelet[2742]: I0913 10:16:30.786224 2742 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "43217a5d-b542-4265-85a1-8b896b235eba" (UID: "43217a5d-b542-4265-85a1-8b896b235eba"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 10:16:30.786365 kubelet[2742]: I0913 10:16:30.786327 2742 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43217a5d-b542-4265-85a1-8b896b235eba-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "43217a5d-b542-4265-85a1-8b896b235eba" (UID: "43217a5d-b542-4265-85a1-8b896b235eba"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 10:16:30.786565 kubelet[2742]: I0913 10:16:30.786411 2742 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90a12d8c-758c-4b9f-b3c1-f70ae6adb997-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "90a12d8c-758c-4b9f-b3c1-f70ae6adb997" (UID: "90a12d8c-758c-4b9f-b3c1-f70ae6adb997"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 10:16:30.788043 kubelet[2742]: I0913 10:16:30.788009 2742 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43217a5d-b542-4265-85a1-8b896b235eba-kube-api-access-5s8l8" (OuterVolumeSpecName: "kube-api-access-5s8l8") pod "43217a5d-b542-4265-85a1-8b896b235eba" (UID: "43217a5d-b542-4265-85a1-8b896b235eba"). InnerVolumeSpecName "kube-api-access-5s8l8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 10:16:30.788959 kubelet[2742]: I0913 10:16:30.788926 2742 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90a12d8c-758c-4b9f-b3c1-f70ae6adb997-kube-api-access-8gjtn" (OuterVolumeSpecName: "kube-api-access-8gjtn") pod "90a12d8c-758c-4b9f-b3c1-f70ae6adb997" (UID: "90a12d8c-758c-4b9f-b3c1-f70ae6adb997"). InnerVolumeSpecName "kube-api-access-8gjtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 10:16:30.882338 kubelet[2742]: I0913 10:16:30.882275 2742 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 13 10:16:30.882338 kubelet[2742]: I0913 10:16:30.882310 2742 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/43217a5d-b542-4265-85a1-8b896b235eba-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 10:16:30.882338 kubelet[2742]: I0913 10:16:30.882320 2742 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 13 10:16:30.882338 kubelet[2742]: I0913 10:16:30.882330 2742 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/43217a5d-b542-4265-85a1-8b896b235eba-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 10:16:30.882338 kubelet[2742]: I0913 10:16:30.882340 2742 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 13 10:16:30.882338 kubelet[2742]: I0913 10:16:30.882348 2742 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-5s8l8\" (UniqueName: \"kubernetes.io/projected/43217a5d-b542-4265-85a1-8b896b235eba-kube-api-access-5s8l8\") on node \"localhost\" DevicePath \"\"" Sep 13 10:16:30.882338 kubelet[2742]: I0913 10:16:30.882357 2742 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 13 10:16:30.882338 kubelet[2742]: I0913 10:16:30.882365 2742 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gjtn\" (UniqueName: \"kubernetes.io/projected/90a12d8c-758c-4b9f-b3c1-f70ae6adb997-kube-api-access-8gjtn\") on node \"localhost\" DevicePath \"\"" Sep 13 10:16:30.882706 kubelet[2742]: I0913 10:16:30.882373 2742 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/43217a5d-b542-4265-85a1-8b896b235eba-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 13 10:16:30.882706 kubelet[2742]: I0913 10:16:30.882381 2742 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 13 10:16:30.882706 kubelet[2742]: I0913 10:16:30.882389 2742 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 13 10:16:30.882706 kubelet[2742]: I0913 10:16:30.882396 2742 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90a12d8c-758c-4b9f-b3c1-f70ae6adb997-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 10:16:30.882706 kubelet[2742]: I0913 10:16:30.882404 2742 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/43217a5d-b542-4265-85a1-8b896b235eba-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 13 10:16:31.025246 kubelet[2742]: I0913 10:16:31.025193 2742 scope.go:117] "RemoveContainer" containerID="74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8" Sep 13 10:16:31.028689 containerd[1559]: time="2025-09-13T10:16:31.028641393Z" level=info msg="RemoveContainer for \"74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8\"" Sep 13 10:16:31.035087 containerd[1559]: time="2025-09-13T10:16:31.034982325Z" level=info msg="RemoveContainer for \"74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8\" returns successfully" Sep 13 10:16:31.039303 kubelet[2742]: I0913 10:16:31.039258 2742 scope.go:117] "RemoveContainer" containerID="e9095822a1e338f719293e2cd3dd5a82a1bfa9521353927fd0c202e5a091021a" Sep 13 10:16:31.041032 containerd[1559]: time="2025-09-13T10:16:31.041000091Z" level=info msg="RemoveContainer for \"e9095822a1e338f719293e2cd3dd5a82a1bfa9521353927fd0c202e5a091021a\"" Sep 13 10:16:31.041525 systemd[1]: Removed slice kubepods-besteffort-pod90a12d8c_758c_4b9f_b3c1_f70ae6adb997.slice - libcontainer container kubepods-besteffort-pod90a12d8c_758c_4b9f_b3c1_f70ae6adb997.slice. Sep 13 10:16:31.044561 systemd[1]: Removed slice kubepods-burstable-pod43217a5d_b542_4265_85a1_8b896b235eba.slice - libcontainer container kubepods-burstable-pod43217a5d_b542_4265_85a1_8b896b235eba.slice. Sep 13 10:16:31.044812 systemd[1]: kubepods-burstable-pod43217a5d_b542_4265_85a1_8b896b235eba.slice: Consumed 6.963s CPU time, 124.9M memory peak, 228K read from disk, 13.3M written to disk. 
Sep 13 10:16:31.046121 containerd[1559]: time="2025-09-13T10:16:31.046077560Z" level=info msg="RemoveContainer for \"e9095822a1e338f719293e2cd3dd5a82a1bfa9521353927fd0c202e5a091021a\" returns successfully" Sep 13 10:16:31.046372 kubelet[2742]: I0913 10:16:31.046336 2742 scope.go:117] "RemoveContainer" containerID="56b9c0c3869877783f877d6edf95c4d0961dd478c41196d48ff7c94a0b3ff013" Sep 13 10:16:31.048900 containerd[1559]: time="2025-09-13T10:16:31.048861836Z" level=info msg="RemoveContainer for \"56b9c0c3869877783f877d6edf95c4d0961dd478c41196d48ff7c94a0b3ff013\"" Sep 13 10:16:31.054162 containerd[1559]: time="2025-09-13T10:16:31.053836379Z" level=info msg="RemoveContainer for \"56b9c0c3869877783f877d6edf95c4d0961dd478c41196d48ff7c94a0b3ff013\" returns successfully" Sep 13 10:16:31.054294 kubelet[2742]: I0913 10:16:31.054060 2742 scope.go:117] "RemoveContainer" containerID="8f0452323eafd6f0fdf8992f7c2a373c49dddf39c28707b3a504c9af4864bd85" Sep 13 10:16:31.056352 containerd[1559]: time="2025-09-13T10:16:31.056327978Z" level=info msg="RemoveContainer for \"8f0452323eafd6f0fdf8992f7c2a373c49dddf39c28707b3a504c9af4864bd85\"" Sep 13 10:16:31.060775 containerd[1559]: time="2025-09-13T10:16:31.060623268Z" level=info msg="RemoveContainer for \"8f0452323eafd6f0fdf8992f7c2a373c49dddf39c28707b3a504c9af4864bd85\" returns successfully" Sep 13 10:16:31.060904 kubelet[2742]: I0913 10:16:31.060879 2742 scope.go:117] "RemoveContainer" containerID="e1cde16f93bfb8c81e6c4c0b2a71138b98d1fdc5a09a6247ebf9f130007f57bc" Sep 13 10:16:31.067548 containerd[1559]: time="2025-09-13T10:16:31.067514046Z" level=info msg="RemoveContainer for \"e1cde16f93bfb8c81e6c4c0b2a71138b98d1fdc5a09a6247ebf9f130007f57bc\"" Sep 13 10:16:31.075531 containerd[1559]: time="2025-09-13T10:16:31.075500217Z" level=info msg="RemoveContainer for \"e1cde16f93bfb8c81e6c4c0b2a71138b98d1fdc5a09a6247ebf9f130007f57bc\" returns successfully" Sep 13 10:16:31.075711 kubelet[2742]: I0913 10:16:31.075655 2742 scope.go:117] 
"RemoveContainer" containerID="74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8" Sep 13 10:16:31.087770 containerd[1559]: time="2025-09-13T10:16:31.075853610Z" level=error msg="ContainerStatus for \"74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8\": not found" Sep 13 10:16:31.088392 kubelet[2742]: E0913 10:16:31.088354 2742 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8\": not found" containerID="74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8" Sep 13 10:16:31.089446 kubelet[2742]: I0913 10:16:31.089353 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8"} err="failed to get container status \"74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"74e33cc71bd020bd41ae417412f1442a09aecd575d582895e759e4c6a6da13c8\": not found" Sep 13 10:16:31.089446 kubelet[2742]: I0913 10:16:31.089437 2742 scope.go:117] "RemoveContainer" containerID="e9095822a1e338f719293e2cd3dd5a82a1bfa9521353927fd0c202e5a091021a" Sep 13 10:16:31.089623 containerd[1559]: time="2025-09-13T10:16:31.089592494Z" level=error msg="ContainerStatus for \"e9095822a1e338f719293e2cd3dd5a82a1bfa9521353927fd0c202e5a091021a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e9095822a1e338f719293e2cd3dd5a82a1bfa9521353927fd0c202e5a091021a\": not found" Sep 13 10:16:31.089812 kubelet[2742]: E0913 10:16:31.089746 2742 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"e9095822a1e338f719293e2cd3dd5a82a1bfa9521353927fd0c202e5a091021a\": not found" containerID="e9095822a1e338f719293e2cd3dd5a82a1bfa9521353927fd0c202e5a091021a" Sep 13 10:16:31.089848 kubelet[2742]: I0913 10:16:31.089816 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e9095822a1e338f719293e2cd3dd5a82a1bfa9521353927fd0c202e5a091021a"} err="failed to get container status \"e9095822a1e338f719293e2cd3dd5a82a1bfa9521353927fd0c202e5a091021a\": rpc error: code = NotFound desc = an error occurred when try to find container \"e9095822a1e338f719293e2cd3dd5a82a1bfa9521353927fd0c202e5a091021a\": not found" Sep 13 10:16:31.089880 kubelet[2742]: I0913 10:16:31.089851 2742 scope.go:117] "RemoveContainer" containerID="56b9c0c3869877783f877d6edf95c4d0961dd478c41196d48ff7c94a0b3ff013" Sep 13 10:16:31.090131 containerd[1559]: time="2025-09-13T10:16:31.090084920Z" level=error msg="ContainerStatus for \"56b9c0c3869877783f877d6edf95c4d0961dd478c41196d48ff7c94a0b3ff013\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"56b9c0c3869877783f877d6edf95c4d0961dd478c41196d48ff7c94a0b3ff013\": not found" Sep 13 10:16:31.090249 kubelet[2742]: E0913 10:16:31.090222 2742 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"56b9c0c3869877783f877d6edf95c4d0961dd478c41196d48ff7c94a0b3ff013\": not found" containerID="56b9c0c3869877783f877d6edf95c4d0961dd478c41196d48ff7c94a0b3ff013" Sep 13 10:16:31.090310 kubelet[2742]: I0913 10:16:31.090248 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"56b9c0c3869877783f877d6edf95c4d0961dd478c41196d48ff7c94a0b3ff013"} err="failed to get container status \"56b9c0c3869877783f877d6edf95c4d0961dd478c41196d48ff7c94a0b3ff013\": rpc error: code = NotFound desc 
= an error occurred when try to find container \"56b9c0c3869877783f877d6edf95c4d0961dd478c41196d48ff7c94a0b3ff013\": not found" Sep 13 10:16:31.090310 kubelet[2742]: I0913 10:16:31.090265 2742 scope.go:117] "RemoveContainer" containerID="8f0452323eafd6f0fdf8992f7c2a373c49dddf39c28707b3a504c9af4864bd85" Sep 13 10:16:31.090622 containerd[1559]: time="2025-09-13T10:16:31.090422783Z" level=error msg="ContainerStatus for \"8f0452323eafd6f0fdf8992f7c2a373c49dddf39c28707b3a504c9af4864bd85\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8f0452323eafd6f0fdf8992f7c2a373c49dddf39c28707b3a504c9af4864bd85\": not found" Sep 13 10:16:31.090667 kubelet[2742]: E0913 10:16:31.090515 2742 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f0452323eafd6f0fdf8992f7c2a373c49dddf39c28707b3a504c9af4864bd85\": not found" containerID="8f0452323eafd6f0fdf8992f7c2a373c49dddf39c28707b3a504c9af4864bd85" Sep 13 10:16:31.090667 kubelet[2742]: I0913 10:16:31.090535 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f0452323eafd6f0fdf8992f7c2a373c49dddf39c28707b3a504c9af4864bd85"} err="failed to get container status \"8f0452323eafd6f0fdf8992f7c2a373c49dddf39c28707b3a504c9af4864bd85\": rpc error: code = NotFound desc = an error occurred when try to find container \"8f0452323eafd6f0fdf8992f7c2a373c49dddf39c28707b3a504c9af4864bd85\": not found" Sep 13 10:16:31.090667 kubelet[2742]: I0913 10:16:31.090548 2742 scope.go:117] "RemoveContainer" containerID="e1cde16f93bfb8c81e6c4c0b2a71138b98d1fdc5a09a6247ebf9f130007f57bc" Sep 13 10:16:31.090738 containerd[1559]: time="2025-09-13T10:16:31.090697496Z" level=error msg="ContainerStatus for \"e1cde16f93bfb8c81e6c4c0b2a71138b98d1fdc5a09a6247ebf9f130007f57bc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"e1cde16f93bfb8c81e6c4c0b2a71138b98d1fdc5a09a6247ebf9f130007f57bc\": not found" Sep 13 10:16:31.090871 kubelet[2742]: E0913 10:16:31.090835 2742 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e1cde16f93bfb8c81e6c4c0b2a71138b98d1fdc5a09a6247ebf9f130007f57bc\": not found" containerID="e1cde16f93bfb8c81e6c4c0b2a71138b98d1fdc5a09a6247ebf9f130007f57bc" Sep 13 10:16:31.090898 kubelet[2742]: I0913 10:16:31.090870 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e1cde16f93bfb8c81e6c4c0b2a71138b98d1fdc5a09a6247ebf9f130007f57bc"} err="failed to get container status \"e1cde16f93bfb8c81e6c4c0b2a71138b98d1fdc5a09a6247ebf9f130007f57bc\": rpc error: code = NotFound desc = an error occurred when try to find container \"e1cde16f93bfb8c81e6c4c0b2a71138b98d1fdc5a09a6247ebf9f130007f57bc\": not found" Sep 13 10:16:31.090898 kubelet[2742]: I0913 10:16:31.090887 2742 scope.go:117] "RemoveContainer" containerID="fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92" Sep 13 10:16:31.092289 containerd[1559]: time="2025-09-13T10:16:31.092257453Z" level=info msg="RemoveContainer for \"fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92\"" Sep 13 10:16:31.096018 containerd[1559]: time="2025-09-13T10:16:31.095982489Z" level=info msg="RemoveContainer for \"fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92\" returns successfully" Sep 13 10:16:31.096157 kubelet[2742]: I0913 10:16:31.096116 2742 scope.go:117] "RemoveContainer" containerID="fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92" Sep 13 10:16:31.096311 containerd[1559]: time="2025-09-13T10:16:31.096272781Z" level=error msg="ContainerStatus for \"fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92\": not found" Sep 13 10:16:31.096405 kubelet[2742]: E0913 10:16:31.096381 2742 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92\": not found" containerID="fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92" Sep 13 10:16:31.096438 kubelet[2742]: I0913 10:16:31.096404 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92"} err="failed to get container status \"fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92\": rpc error: code = NotFound desc = an error occurred when try to find container \"fd5ae543f44fb8c1b4d687b20bb0a58a81caaef3337a23192ff6225d7daf9f92\": not found" Sep 13 10:16:31.465016 systemd[1]: var-lib-kubelet-pods-90a12d8c\x2d758c\x2d4b9f\x2db3c1\x2df70ae6adb997-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8gjtn.mount: Deactivated successfully. Sep 13 10:16:31.465126 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-671921def32d30c20f64b4b17110f7a207d5dbdec1c4b1709851e3c250a781f0-shm.mount: Deactivated successfully. Sep 13 10:16:31.465205 systemd[1]: var-lib-kubelet-pods-43217a5d\x2db542\x2d4265\x2d85a1\x2d8b896b235eba-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5s8l8.mount: Deactivated successfully. Sep 13 10:16:31.465297 systemd[1]: var-lib-kubelet-pods-43217a5d\x2db542\x2d4265\x2d85a1\x2d8b896b235eba-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 10:16:31.465383 systemd[1]: var-lib-kubelet-pods-43217a5d\x2db542\x2d4265\x2d85a1\x2d8b896b235eba-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 13 10:16:31.670788 kubelet[2742]: I0913 10:16:31.670703 2742 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43217a5d-b542-4265-85a1-8b896b235eba" path="/var/lib/kubelet/pods/43217a5d-b542-4265-85a1-8b896b235eba/volumes" Sep 13 10:16:31.671605 kubelet[2742]: I0913 10:16:31.671573 2742 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90a12d8c-758c-4b9f-b3c1-f70ae6adb997" path="/var/lib/kubelet/pods/90a12d8c-758c-4b9f-b3c1-f70ae6adb997/volumes" Sep 13 10:16:31.725913 kubelet[2742]: E0913 10:16:31.725790 2742 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 10:16:32.000033 sshd[4380]: Connection closed by 10.0.0.1 port 46458 Sep 13 10:16:32.000471 sshd-session[4377]: pam_unix(sshd:session): session closed for user core Sep 13 10:16:32.013967 systemd[1]: sshd@26-10.0.0.20:22-10.0.0.1:46458.service: Deactivated successfully. Sep 13 10:16:32.015873 systemd[1]: session-27.scope: Deactivated successfully. Sep 13 10:16:32.016644 systemd-logind[1531]: Session 27 logged out. Waiting for processes to exit. Sep 13 10:16:32.019553 systemd[1]: Started sshd@27-10.0.0.20:22-10.0.0.1:42692.service - OpenSSH per-connection server daemon (10.0.0.1:42692). Sep 13 10:16:32.020252 systemd-logind[1531]: Removed session 27. Sep 13 10:16:32.073726 sshd[4529]: Accepted publickey for core from 10.0.0.1 port 42692 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:16:32.075082 sshd-session[4529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:16:32.079964 systemd-logind[1531]: New session 28 of user core. Sep 13 10:16:32.088882 systemd[1]: Started session-28.scope - Session 28 of User core. 
Sep 13 10:16:32.597234 sshd[4532]: Connection closed by 10.0.0.1 port 42692 Sep 13 10:16:32.598956 sshd-session[4529]: pam_unix(sshd:session): session closed for user core Sep 13 10:16:32.613438 systemd[1]: sshd@27-10.0.0.20:22-10.0.0.1:42692.service: Deactivated successfully. Sep 13 10:16:32.617083 systemd[1]: session-28.scope: Deactivated successfully. Sep 13 10:16:32.620303 systemd-logind[1531]: Session 28 logged out. Waiting for processes to exit. Sep 13 10:16:32.621124 kubelet[2742]: E0913 10:16:32.620366 2742 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="43217a5d-b542-4265-85a1-8b896b235eba" containerName="cilium-agent" Sep 13 10:16:32.624796 kubelet[2742]: E0913 10:16:32.622836 2742 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="43217a5d-b542-4265-85a1-8b896b235eba" containerName="mount-cgroup" Sep 13 10:16:32.624796 kubelet[2742]: E0913 10:16:32.622860 2742 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="43217a5d-b542-4265-85a1-8b896b235eba" containerName="apply-sysctl-overwrites" Sep 13 10:16:32.624796 kubelet[2742]: E0913 10:16:32.622872 2742 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="43217a5d-b542-4265-85a1-8b896b235eba" containerName="mount-bpf-fs" Sep 13 10:16:32.624796 kubelet[2742]: E0913 10:16:32.622881 2742 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="90a12d8c-758c-4b9f-b3c1-f70ae6adb997" containerName="cilium-operator" Sep 13 10:16:32.624796 kubelet[2742]: E0913 10:16:32.622889 2742 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="43217a5d-b542-4265-85a1-8b896b235eba" containerName="clean-cilium-state" Sep 13 10:16:32.624796 kubelet[2742]: I0913 10:16:32.622944 2742 memory_manager.go:354] "RemoveStaleState removing state" podUID="43217a5d-b542-4265-85a1-8b896b235eba" containerName="cilium-agent" Sep 13 10:16:32.624796 kubelet[2742]: I0913 10:16:32.622958 2742 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="90a12d8c-758c-4b9f-b3c1-f70ae6adb997" containerName="cilium-operator"
Sep 13 10:16:32.622964 systemd[1]: Started sshd@28-10.0.0.20:22-10.0.0.1:42700.service - OpenSSH per-connection server daemon (10.0.0.1:42700).
Sep 13 10:16:32.626514 systemd-logind[1531]: Removed session 28.
Sep 13 10:16:32.641906 systemd[1]: Created slice kubepods-burstable-pod0cef168e_fc6a_4055_870f_5200a7da7795.slice - libcontainer container kubepods-burstable-pod0cef168e_fc6a_4055_870f_5200a7da7795.slice.
Sep 13 10:16:32.676852 sshd[4545]: Accepted publickey for core from 10.0.0.1 port 42700 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw
Sep 13 10:16:32.678200 sshd-session[4545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:16:32.683253 systemd-logind[1531]: New session 29 of user core.
Sep 13 10:16:32.690893 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 13 10:16:32.742288 sshd[4548]: Connection closed by 10.0.0.1 port 42700
Sep 13 10:16:32.742784 sshd-session[4545]: pam_unix(sshd:session): session closed for user core
Sep 13 10:16:32.760650 systemd[1]: sshd@28-10.0.0.20:22-10.0.0.1:42700.service: Deactivated successfully.
Sep 13 10:16:32.762686 systemd[1]: session-29.scope: Deactivated successfully.
Sep 13 10:16:32.763493 systemd-logind[1531]: Session 29 logged out. Waiting for processes to exit.
Sep 13 10:16:32.766663 systemd[1]: Started sshd@29-10.0.0.20:22-10.0.0.1:42712.service - OpenSSH per-connection server daemon (10.0.0.1:42712).
Sep 13 10:16:32.767335 systemd-logind[1531]: Removed session 29.
Sep 13 10:16:32.792723 kubelet[2742]: I0913 10:16:32.792683 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0cef168e-fc6a-4055-870f-5200a7da7795-lib-modules\") pod \"cilium-db9q9\" (UID: \"0cef168e-fc6a-4055-870f-5200a7da7795\") " pod="kube-system/cilium-db9q9"
Sep 13 10:16:32.792856 kubelet[2742]: I0913 10:16:32.792727 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0cef168e-fc6a-4055-870f-5200a7da7795-cilium-config-path\") pod \"cilium-db9q9\" (UID: \"0cef168e-fc6a-4055-870f-5200a7da7795\") " pod="kube-system/cilium-db9q9"
Sep 13 10:16:32.792856 kubelet[2742]: I0913 10:16:32.792749 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0cef168e-fc6a-4055-870f-5200a7da7795-host-proc-sys-kernel\") pod \"cilium-db9q9\" (UID: \"0cef168e-fc6a-4055-870f-5200a7da7795\") " pod="kube-system/cilium-db9q9"
Sep 13 10:16:32.792856 kubelet[2742]: I0913 10:16:32.792780 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcc5k\" (UniqueName: \"kubernetes.io/projected/0cef168e-fc6a-4055-870f-5200a7da7795-kube-api-access-jcc5k\") pod \"cilium-db9q9\" (UID: \"0cef168e-fc6a-4055-870f-5200a7da7795\") " pod="kube-system/cilium-db9q9"
Sep 13 10:16:32.792856 kubelet[2742]: I0913 10:16:32.792796 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0cef168e-fc6a-4055-870f-5200a7da7795-clustermesh-secrets\") pod \"cilium-db9q9\" (UID: \"0cef168e-fc6a-4055-870f-5200a7da7795\") " pod="kube-system/cilium-db9q9"
Sep 13 10:16:32.792856 kubelet[2742]: I0913 10:16:32.792813 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0cef168e-fc6a-4055-870f-5200a7da7795-cilium-run\") pod \"cilium-db9q9\" (UID: \"0cef168e-fc6a-4055-870f-5200a7da7795\") " pod="kube-system/cilium-db9q9"
Sep 13 10:16:32.792978 kubelet[2742]: I0913 10:16:32.792855 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0cef168e-fc6a-4055-870f-5200a7da7795-bpf-maps\") pod \"cilium-db9q9\" (UID: \"0cef168e-fc6a-4055-870f-5200a7da7795\") " pod="kube-system/cilium-db9q9"
Sep 13 10:16:32.792978 kubelet[2742]: I0913 10:16:32.792873 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0cef168e-fc6a-4055-870f-5200a7da7795-host-proc-sys-net\") pod \"cilium-db9q9\" (UID: \"0cef168e-fc6a-4055-870f-5200a7da7795\") " pod="kube-system/cilium-db9q9"
Sep 13 10:16:32.792978 kubelet[2742]: I0913 10:16:32.792926 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0cef168e-fc6a-4055-870f-5200a7da7795-etc-cni-netd\") pod \"cilium-db9q9\" (UID: \"0cef168e-fc6a-4055-870f-5200a7da7795\") " pod="kube-system/cilium-db9q9"
Sep 13 10:16:32.793058 kubelet[2742]: I0913 10:16:32.793008 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0cef168e-fc6a-4055-870f-5200a7da7795-hostproc\") pod \"cilium-db9q9\" (UID: \"0cef168e-fc6a-4055-870f-5200a7da7795\") " pod="kube-system/cilium-db9q9"
Sep 13 10:16:32.793084 kubelet[2742]: I0913 10:16:32.793057 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0cef168e-fc6a-4055-870f-5200a7da7795-cilium-ipsec-secrets\") pod \"cilium-db9q9\" (UID: \"0cef168e-fc6a-4055-870f-5200a7da7795\") " pod="kube-system/cilium-db9q9"
Sep 13 10:16:32.793111 kubelet[2742]: I0913 10:16:32.793096 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0cef168e-fc6a-4055-870f-5200a7da7795-hubble-tls\") pod \"cilium-db9q9\" (UID: \"0cef168e-fc6a-4055-870f-5200a7da7795\") " pod="kube-system/cilium-db9q9"
Sep 13 10:16:32.793141 kubelet[2742]: I0913 10:16:32.793119 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0cef168e-fc6a-4055-870f-5200a7da7795-cilium-cgroup\") pod \"cilium-db9q9\" (UID: \"0cef168e-fc6a-4055-870f-5200a7da7795\") " pod="kube-system/cilium-db9q9"
Sep 13 10:16:32.793141 kubelet[2742]: I0913 10:16:32.793135 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0cef168e-fc6a-4055-870f-5200a7da7795-cni-path\") pod \"cilium-db9q9\" (UID: \"0cef168e-fc6a-4055-870f-5200a7da7795\") " pod="kube-system/cilium-db9q9"
Sep 13 10:16:32.793199 kubelet[2742]: I0913 10:16:32.793148 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0cef168e-fc6a-4055-870f-5200a7da7795-xtables-lock\") pod \"cilium-db9q9\" (UID: \"0cef168e-fc6a-4055-870f-5200a7da7795\") " pod="kube-system/cilium-db9q9"
Sep 13 10:16:32.817437 sshd[4556]: Accepted publickey for core from 10.0.0.1 port 42712 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw
Sep 13 10:16:32.819095 sshd-session[4556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:16:32.825227 systemd-logind[1531]: New session 30 of user core.
Sep 13 10:16:32.842912 systemd[1]: Started session-30.scope - Session 30 of User core.
Sep 13 10:16:32.945101 kubelet[2742]: E0913 10:16:32.945048 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:32.945727 containerd[1559]: time="2025-09-13T10:16:32.945661178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-db9q9,Uid:0cef168e-fc6a-4055-870f-5200a7da7795,Namespace:kube-system,Attempt:0,}"
Sep 13 10:16:32.968454 containerd[1559]: time="2025-09-13T10:16:32.968385180Z" level=info msg="connecting to shim 134836d31c08f744421cf3b9a0f9d625225fc73335b3754430f706ee3c8c7f0d" address="unix:///run/containerd/s/bfa619cc6d58102d7b26378d8d4e52eda8ebd41b83e8867fe86b7f053265586f" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:16:32.993891 systemd[1]: Started cri-containerd-134836d31c08f744421cf3b9a0f9d625225fc73335b3754430f706ee3c8c7f0d.scope - libcontainer container 134836d31c08f744421cf3b9a0f9d625225fc73335b3754430f706ee3c8c7f0d.
Sep 13 10:16:33.018416 containerd[1559]: time="2025-09-13T10:16:33.018370871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-db9q9,Uid:0cef168e-fc6a-4055-870f-5200a7da7795,Namespace:kube-system,Attempt:0,} returns sandbox id \"134836d31c08f744421cf3b9a0f9d625225fc73335b3754430f706ee3c8c7f0d\""
Sep 13 10:16:33.019011 kubelet[2742]: E0913 10:16:33.018988 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:33.021245 containerd[1559]: time="2025-09-13T10:16:33.021208997Z" level=info msg="CreateContainer within sandbox \"134836d31c08f744421cf3b9a0f9d625225fc73335b3754430f706ee3c8c7f0d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 10:16:33.029343 containerd[1559]: time="2025-09-13T10:16:33.029298776Z" level=info msg="Container 0f78efa8fdf8ab977b4d2e0f9b9d205c1422e57dc98e17be8231965ece973111: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:16:33.036417 containerd[1559]: time="2025-09-13T10:16:33.036366631Z" level=info msg="CreateContainer within sandbox \"134836d31c08f744421cf3b9a0f9d625225fc73335b3754430f706ee3c8c7f0d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0f78efa8fdf8ab977b4d2e0f9b9d205c1422e57dc98e17be8231965ece973111\""
Sep 13 10:16:33.036877 containerd[1559]: time="2025-09-13T10:16:33.036815585Z" level=info msg="StartContainer for \"0f78efa8fdf8ab977b4d2e0f9b9d205c1422e57dc98e17be8231965ece973111\""
Sep 13 10:16:33.037853 containerd[1559]: time="2025-09-13T10:16:33.037825445Z" level=info msg="connecting to shim 0f78efa8fdf8ab977b4d2e0f9b9d205c1422e57dc98e17be8231965ece973111" address="unix:///run/containerd/s/bfa619cc6d58102d7b26378d8d4e52eda8ebd41b83e8867fe86b7f053265586f" protocol=ttrpc version=3
Sep 13 10:16:33.070155 systemd[1]: Started cri-containerd-0f78efa8fdf8ab977b4d2e0f9b9d205c1422e57dc98e17be8231965ece973111.scope - libcontainer container 0f78efa8fdf8ab977b4d2e0f9b9d205c1422e57dc98e17be8231965ece973111.
Sep 13 10:16:33.101440 containerd[1559]: time="2025-09-13T10:16:33.101385842Z" level=info msg="StartContainer for \"0f78efa8fdf8ab977b4d2e0f9b9d205c1422e57dc98e17be8231965ece973111\" returns successfully"
Sep 13 10:16:33.111809 systemd[1]: cri-containerd-0f78efa8fdf8ab977b4d2e0f9b9d205c1422e57dc98e17be8231965ece973111.scope: Deactivated successfully.
Sep 13 10:16:33.112962 containerd[1559]: time="2025-09-13T10:16:33.112923194Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f78efa8fdf8ab977b4d2e0f9b9d205c1422e57dc98e17be8231965ece973111\" id:\"0f78efa8fdf8ab977b4d2e0f9b9d205c1422e57dc98e17be8231965ece973111\" pid:4629 exited_at:{seconds:1757758593 nanos:112500941}"
Sep 13 10:16:33.113019 containerd[1559]: time="2025-09-13T10:16:33.112959173Z" level=info msg="received exit event container_id:\"0f78efa8fdf8ab977b4d2e0f9b9d205c1422e57dc98e17be8231965ece973111\" id:\"0f78efa8fdf8ab977b4d2e0f9b9d205c1422e57dc98e17be8231965ece973111\" pid:4629 exited_at:{seconds:1757758593 nanos:112500941}"
Sep 13 10:16:33.657140 kubelet[2742]: I0913 10:16:33.657075 2742 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T10:16:33Z","lastTransitionTime":"2025-09-13T10:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 13 10:16:33.669683 kubelet[2742]: E0913 10:16:33.669644 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:34.040650 kubelet[2742]: E0913 10:16:34.040361 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:34.042994 containerd[1559]: time="2025-09-13T10:16:34.042940863Z" level=info msg="CreateContainer within sandbox \"134836d31c08f744421cf3b9a0f9d625225fc73335b3754430f706ee3c8c7f0d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 10:16:34.054145 containerd[1559]: time="2025-09-13T10:16:34.053945376Z" level=info msg="Container 5f4a82bb1c4fb8595c8602a1433fe96f1efe7f4154c5dbc530cc42c1509a76ab: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:16:34.062374 containerd[1559]: time="2025-09-13T10:16:34.062316214Z" level=info msg="CreateContainer within sandbox \"134836d31c08f744421cf3b9a0f9d625225fc73335b3754430f706ee3c8c7f0d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5f4a82bb1c4fb8595c8602a1433fe96f1efe7f4154c5dbc530cc42c1509a76ab\""
Sep 13 10:16:34.062959 containerd[1559]: time="2025-09-13T10:16:34.062895224Z" level=info msg="StartContainer for \"5f4a82bb1c4fb8595c8602a1433fe96f1efe7f4154c5dbc530cc42c1509a76ab\""
Sep 13 10:16:34.063907 containerd[1559]: time="2025-09-13T10:16:34.063880507Z" level=info msg="connecting to shim 5f4a82bb1c4fb8595c8602a1433fe96f1efe7f4154c5dbc530cc42c1509a76ab" address="unix:///run/containerd/s/bfa619cc6d58102d7b26378d8d4e52eda8ebd41b83e8867fe86b7f053265586f" protocol=ttrpc version=3
Sep 13 10:16:34.084892 systemd[1]: Started cri-containerd-5f4a82bb1c4fb8595c8602a1433fe96f1efe7f4154c5dbc530cc42c1509a76ab.scope - libcontainer container 5f4a82bb1c4fb8595c8602a1433fe96f1efe7f4154c5dbc530cc42c1509a76ab.
Sep 13 10:16:34.127164 systemd[1]: cri-containerd-5f4a82bb1c4fb8595c8602a1433fe96f1efe7f4154c5dbc530cc42c1509a76ab.scope: Deactivated successfully.
Sep 13 10:16:34.127898 containerd[1559]: time="2025-09-13T10:16:34.127858552Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f4a82bb1c4fb8595c8602a1433fe96f1efe7f4154c5dbc530cc42c1509a76ab\" id:\"5f4a82bb1c4fb8595c8602a1433fe96f1efe7f4154c5dbc530cc42c1509a76ab\" pid:4677 exited_at:{seconds:1757758594 nanos:127478360}"
Sep 13 10:16:34.132978 containerd[1559]: time="2025-09-13T10:16:34.132940870Z" level=info msg="received exit event container_id:\"5f4a82bb1c4fb8595c8602a1433fe96f1efe7f4154c5dbc530cc42c1509a76ab\" id:\"5f4a82bb1c4fb8595c8602a1433fe96f1efe7f4154c5dbc530cc42c1509a76ab\" pid:4677 exited_at:{seconds:1757758594 nanos:127478360}"
Sep 13 10:16:34.134053 containerd[1559]: time="2025-09-13T10:16:34.134020492Z" level=info msg="StartContainer for \"5f4a82bb1c4fb8595c8602a1433fe96f1efe7f4154c5dbc530cc42c1509a76ab\" returns successfully"
Sep 13 10:16:34.668840 kubelet[2742]: E0913 10:16:34.668798 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:34.906505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f4a82bb1c4fb8595c8602a1433fe96f1efe7f4154c5dbc530cc42c1509a76ab-rootfs.mount: Deactivated successfully.
Sep 13 10:16:35.044645 kubelet[2742]: E0913 10:16:35.044491 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:35.046452 containerd[1559]: time="2025-09-13T10:16:35.046411596Z" level=info msg="CreateContainer within sandbox \"134836d31c08f744421cf3b9a0f9d625225fc73335b3754430f706ee3c8c7f0d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 10:16:35.057503 containerd[1559]: time="2025-09-13T10:16:35.057455999Z" level=info msg="Container 78c34767497747b0a91c8fb74d246994d33774a5733feb1c996af0d7ee5e5526: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:16:35.084507 containerd[1559]: time="2025-09-13T10:16:35.084432440Z" level=info msg="CreateContainer within sandbox \"134836d31c08f744421cf3b9a0f9d625225fc73335b3754430f706ee3c8c7f0d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"78c34767497747b0a91c8fb74d246994d33774a5733feb1c996af0d7ee5e5526\""
Sep 13 10:16:35.085117 containerd[1559]: time="2025-09-13T10:16:35.085084138Z" level=info msg="StartContainer for \"78c34767497747b0a91c8fb74d246994d33774a5733feb1c996af0d7ee5e5526\""
Sep 13 10:16:35.086552 containerd[1559]: time="2025-09-13T10:16:35.086515858Z" level=info msg="connecting to shim 78c34767497747b0a91c8fb74d246994d33774a5733feb1c996af0d7ee5e5526" address="unix:///run/containerd/s/bfa619cc6d58102d7b26378d8d4e52eda8ebd41b83e8867fe86b7f053265586f" protocol=ttrpc version=3
Sep 13 10:16:35.113974 systemd[1]: Started cri-containerd-78c34767497747b0a91c8fb74d246994d33774a5733feb1c996af0d7ee5e5526.scope - libcontainer container 78c34767497747b0a91c8fb74d246994d33774a5733feb1c996af0d7ee5e5526.
Sep 13 10:16:35.158055 containerd[1559]: time="2025-09-13T10:16:35.157924364Z" level=info msg="StartContainer for \"78c34767497747b0a91c8fb74d246994d33774a5733feb1c996af0d7ee5e5526\" returns successfully"
Sep 13 10:16:35.158737 systemd[1]: cri-containerd-78c34767497747b0a91c8fb74d246994d33774a5733feb1c996af0d7ee5e5526.scope: Deactivated successfully.
Sep 13 10:16:35.160972 containerd[1559]: time="2025-09-13T10:16:35.160939142Z" level=info msg="received exit event container_id:\"78c34767497747b0a91c8fb74d246994d33774a5733feb1c996af0d7ee5e5526\" id:\"78c34767497747b0a91c8fb74d246994d33774a5733feb1c996af0d7ee5e5526\" pid:4722 exited_at:{seconds:1757758595 nanos:160730125}"
Sep 13 10:16:35.161474 containerd[1559]: time="2025-09-13T10:16:35.161419655Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78c34767497747b0a91c8fb74d246994d33774a5733feb1c996af0d7ee5e5526\" id:\"78c34767497747b0a91c8fb74d246994d33774a5733feb1c996af0d7ee5e5526\" pid:4722 exited_at:{seconds:1757758595 nanos:160730125}"
Sep 13 10:16:35.183238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78c34767497747b0a91c8fb74d246994d33774a5733feb1c996af0d7ee5e5526-rootfs.mount: Deactivated successfully.
Sep 13 10:16:36.049784 kubelet[2742]: E0913 10:16:36.049720 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:36.052182 containerd[1559]: time="2025-09-13T10:16:36.052140489Z" level=info msg="CreateContainer within sandbox \"134836d31c08f744421cf3b9a0f9d625225fc73335b3754430f706ee3c8c7f0d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 10:16:36.063774 containerd[1559]: time="2025-09-13T10:16:36.063707859Z" level=info msg="Container a2d5ff2c947cd71fb0e87284004c0009e79968fb1a3a1a61a7dd8afd7e1e8725: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:16:36.073971 containerd[1559]: time="2025-09-13T10:16:36.073916859Z" level=info msg="CreateContainer within sandbox \"134836d31c08f744421cf3b9a0f9d625225fc73335b3754430f706ee3c8c7f0d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a2d5ff2c947cd71fb0e87284004c0009e79968fb1a3a1a61a7dd8afd7e1e8725\""
Sep 13 10:16:36.074455 containerd[1559]: time="2025-09-13T10:16:36.074419674Z" level=info msg="StartContainer for \"a2d5ff2c947cd71fb0e87284004c0009e79968fb1a3a1a61a7dd8afd7e1e8725\""
Sep 13 10:16:36.075354 containerd[1559]: time="2025-09-13T10:16:36.075320565Z" level=info msg="connecting to shim a2d5ff2c947cd71fb0e87284004c0009e79968fb1a3a1a61a7dd8afd7e1e8725" address="unix:///run/containerd/s/bfa619cc6d58102d7b26378d8d4e52eda8ebd41b83e8867fe86b7f053265586f" protocol=ttrpc version=3
Sep 13 10:16:36.098908 systemd[1]: Started cri-containerd-a2d5ff2c947cd71fb0e87284004c0009e79968fb1a3a1a61a7dd8afd7e1e8725.scope - libcontainer container a2d5ff2c947cd71fb0e87284004c0009e79968fb1a3a1a61a7dd8afd7e1e8725.
Sep 13 10:16:36.126238 systemd[1]: cri-containerd-a2d5ff2c947cd71fb0e87284004c0009e79968fb1a3a1a61a7dd8afd7e1e8725.scope: Deactivated successfully.
Sep 13 10:16:36.126679 containerd[1559]: time="2025-09-13T10:16:36.126643744Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a2d5ff2c947cd71fb0e87284004c0009e79968fb1a3a1a61a7dd8afd7e1e8725\" id:\"a2d5ff2c947cd71fb0e87284004c0009e79968fb1a3a1a61a7dd8afd7e1e8725\" pid:4761 exited_at:{seconds:1757758596 nanos:126432192}"
Sep 13 10:16:36.127971 containerd[1559]: time="2025-09-13T10:16:36.127936420Z" level=info msg="received exit event container_id:\"a2d5ff2c947cd71fb0e87284004c0009e79968fb1a3a1a61a7dd8afd7e1e8725\" id:\"a2d5ff2c947cd71fb0e87284004c0009e79968fb1a3a1a61a7dd8afd7e1e8725\" pid:4761 exited_at:{seconds:1757758596 nanos:126432192}"
Sep 13 10:16:36.135141 containerd[1559]: time="2025-09-13T10:16:36.135096748Z" level=info msg="StartContainer for \"a2d5ff2c947cd71fb0e87284004c0009e79968fb1a3a1a61a7dd8afd7e1e8725\" returns successfully"
Sep 13 10:16:36.155373 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2d5ff2c947cd71fb0e87284004c0009e79968fb1a3a1a61a7dd8afd7e1e8725-rootfs.mount: Deactivated successfully.
Sep 13 10:16:36.727342 kubelet[2742]: E0913 10:16:36.727286 2742 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 10:16:37.059647 kubelet[2742]: E0913 10:16:37.059493 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:37.062219 containerd[1559]: time="2025-09-13T10:16:37.062168820Z" level=info msg="CreateContainer within sandbox \"134836d31c08f744421cf3b9a0f9d625225fc73335b3754430f706ee3c8c7f0d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 10:16:37.086148 containerd[1559]: time="2025-09-13T10:16:37.086080583Z" level=info msg="Container fe43bb7cbebb15689a612bf195dae1a6b54b3744f28e5e2228fb7e11c88cbb18: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:16:37.088419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2105539119.mount: Deactivated successfully.
Sep 13 10:16:37.101681 containerd[1559]: time="2025-09-13T10:16:37.101623846Z" level=info msg="CreateContainer within sandbox \"134836d31c08f744421cf3b9a0f9d625225fc73335b3754430f706ee3c8c7f0d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fe43bb7cbebb15689a612bf195dae1a6b54b3744f28e5e2228fb7e11c88cbb18\""
Sep 13 10:16:37.102286 containerd[1559]: time="2025-09-13T10:16:37.102239366Z" level=info msg="StartContainer for \"fe43bb7cbebb15689a612bf195dae1a6b54b3744f28e5e2228fb7e11c88cbb18\""
Sep 13 10:16:37.103534 containerd[1559]: time="2025-09-13T10:16:37.103503485Z" level=info msg="connecting to shim fe43bb7cbebb15689a612bf195dae1a6b54b3744f28e5e2228fb7e11c88cbb18" address="unix:///run/containerd/s/bfa619cc6d58102d7b26378d8d4e52eda8ebd41b83e8867fe86b7f053265586f" protocol=ttrpc version=3
Sep 13 10:16:37.136922 systemd[1]: Started cri-containerd-fe43bb7cbebb15689a612bf195dae1a6b54b3744f28e5e2228fb7e11c88cbb18.scope - libcontainer container fe43bb7cbebb15689a612bf195dae1a6b54b3744f28e5e2228fb7e11c88cbb18.
Sep 13 10:16:37.176601 containerd[1559]: time="2025-09-13T10:16:37.176554874Z" level=info msg="StartContainer for \"fe43bb7cbebb15689a612bf195dae1a6b54b3744f28e5e2228fb7e11c88cbb18\" returns successfully"
Sep 13 10:16:37.252225 containerd[1559]: time="2025-09-13T10:16:37.252177046Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe43bb7cbebb15689a612bf195dae1a6b54b3744f28e5e2228fb7e11c88cbb18\" id:\"62ccfff3cd92fee1fda3c38669e7a5f5ed4a243b6e486f5983350d1417e05144\" pid:4829 exited_at:{seconds:1757758597 nanos:250909719}"
Sep 13 10:16:37.631808 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Sep 13 10:16:37.668835 kubelet[2742]: E0913 10:16:37.668737 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-9wkb7" podUID="9d69f580-d071-4777-8de0-d70e2ba18c6d"
Sep 13 10:16:38.063974 kubelet[2742]: E0913 10:16:38.063852 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:39.065907 kubelet[2742]: E0913 10:16:39.065855 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:39.246010 containerd[1559]: time="2025-09-13T10:16:39.245940649Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe43bb7cbebb15689a612bf195dae1a6b54b3744f28e5e2228fb7e11c88cbb18\" id:\"94c7b7b203fbf9d427e324f38884c924d45ec0f3d06d923545907f7fe7999228\" pid:4972 exit_status:1 exited_at:{seconds:1757758599 nanos:245558273}"
Sep 13 10:16:39.669409 kubelet[2742]: E0913 10:16:39.669346 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-gh4hj" podUID="ed5bf145-bbf2-49e9-a27d-5966c0d01e2e"
Sep 13 10:16:39.669594 kubelet[2742]: E0913 10:16:39.669498 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-9wkb7" podUID="9d69f580-d071-4777-8de0-d70e2ba18c6d"
Sep 13 10:16:40.068329 kubelet[2742]: E0913 10:16:40.068043 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:40.786621 systemd-networkd[1491]: lxc_health: Link UP
Sep 13 10:16:40.791326 systemd-networkd[1491]: lxc_health: Gained carrier
Sep 13 10:16:40.970139 kubelet[2742]: I0913 10:16:40.969987 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-db9q9" podStartSLOduration=8.969965323 podStartE2EDuration="8.969965323s" podCreationTimestamp="2025-09-13 10:16:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:16:38.079350437 +0000 UTC m=+96.507232385" watchObservedRunningTime="2025-09-13 10:16:40.969965323 +0000 UTC m=+99.397847271"
Sep 13 10:16:41.070849 kubelet[2742]: E0913 10:16:41.070643 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:41.362033 containerd[1559]: time="2025-09-13T10:16:41.361807005Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe43bb7cbebb15689a612bf195dae1a6b54b3744f28e5e2228fb7e11c88cbb18\" id:\"844092f2ea237e5d274d954e76aa4aa2e6c892652e6f1e3da980d31c505c7b92\" pid:5359 exited_at:{seconds:1757758601 nanos:361290686}"
Sep 13 10:16:41.669804 kubelet[2742]: E0913 10:16:41.669144 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-gh4hj" podUID="ed5bf145-bbf2-49e9-a27d-5966c0d01e2e"
Sep 13 10:16:41.670412 kubelet[2742]: E0913 10:16:41.670321 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-9wkb7" podUID="9d69f580-d071-4777-8de0-d70e2ba18c6d"
Sep 13 10:16:41.670412 kubelet[2742]: E0913 10:16:41.670394 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:42.072967 kubelet[2742]: E0913 10:16:42.072804 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:42.844085 systemd-networkd[1491]: lxc_health: Gained IPv6LL
Sep 13 10:16:43.075172 kubelet[2742]: E0913 10:16:43.075125 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:43.471014 containerd[1559]: time="2025-09-13T10:16:43.470952685Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe43bb7cbebb15689a612bf195dae1a6b54b3744f28e5e2228fb7e11c88cbb18\" id:\"64a5995e618109adcce6057ec5231f746ec67b895856a949d462dca5553bdf27\" pid:5397 exited_at:{seconds:1757758603 nanos:470543369}"
Sep 13 10:16:43.668781 kubelet[2742]: E0913 10:16:43.668715 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:43.669053 kubelet[2742]: E0913 10:16:43.669032 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:43.669328 kubelet[2742]: E0913 10:16:43.669298 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:45.574616 containerd[1559]: time="2025-09-13T10:16:45.574565268Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe43bb7cbebb15689a612bf195dae1a6b54b3744f28e5e2228fb7e11c88cbb18\" id:\"853dde2181830cb7311f867bb091d6a088cc9b77696afceeb93d15eec61a6847\" pid:5428 exited_at:{seconds:1757758605 nanos:574055873}"
Sep 13 10:16:47.664896 containerd[1559]: time="2025-09-13T10:16:47.664808235Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe43bb7cbebb15689a612bf195dae1a6b54b3744f28e5e2228fb7e11c88cbb18\" id:\"b2770f01a96702866b88422a34df1b51900d73579f21132c03be4f14c7a8726e\" pid:5452 exited_at:{seconds:1757758607 nanos:664413718}"
Sep 13 10:16:47.675517 sshd[4559]: Connection closed by 10.0.0.1 port 42712
Sep 13 10:16:47.675907 sshd-session[4556]: pam_unix(sshd:session): session closed for user core
Sep 13 10:16:47.681076 systemd[1]: sshd@29-10.0.0.20:22-10.0.0.1:42712.service: Deactivated successfully.
Sep 13 10:16:47.683333 systemd[1]: session-30.scope: Deactivated successfully.
Sep 13 10:16:47.684299 systemd-logind[1531]: Session 30 logged out. Waiting for processes to exit.
Sep 13 10:16:47.685858 systemd-logind[1531]: Removed session 30.