Sep 12 23:06:08.132509 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 20:38:35 -00 2025
Sep 12 23:06:08.132550 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8e60d6befc710e967d67e9a1d87ced7416895090c99a765b3a00e66a62f49e40
Sep 12 23:06:08.132563 kernel: BIOS-provided physical RAM map:
Sep 12 23:06:08.132570 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 12 23:06:08.132576 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 12 23:06:08.132583 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 12 23:06:08.132591 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 12 23:06:08.132597 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Sep 12 23:06:08.132608 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 12 23:06:08.132614 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 12 23:06:08.132621 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 12 23:06:08.132630 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 12 23:06:08.132637 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 12 23:06:08.132644 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 12 23:06:08.132652 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 12 23:06:08.132660 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 12 23:06:08.132673 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 12 23:06:08.132680 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 23:06:08.132687 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 23:06:08.132694 kernel: NX (Execute Disable) protection: active
Sep 12 23:06:08.132702 kernel: APIC: Static calls initialized
Sep 12 23:06:08.132709 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable
Sep 12 23:06:08.132716 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable
Sep 12 23:06:08.132723 kernel: extended physical RAM map:
Sep 12 23:06:08.132731 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 12 23:06:08.132738 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 12 23:06:08.132745 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 12 23:06:08.132756 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 12 23:06:08.132763 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable
Sep 12 23:06:08.132770 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable
Sep 12 23:06:08.132777 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable
Sep 12 23:06:08.132784 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable
Sep 12 23:06:08.132791 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable
Sep 12 23:06:08.132798 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 12 23:06:08.132806 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 12 23:06:08.132813 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 12 23:06:08.132820 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 12 23:06:08.132834 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 12 23:06:08.132844 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 12 23:06:08.132851 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 12 23:06:08.132862 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 12 23:06:08.132869 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 12 23:06:08.132877 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 23:06:08.132884 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 23:06:08.132894 kernel: efi: EFI v2.7 by EDK II
Sep 12 23:06:08.132902 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Sep 12 23:06:08.132910 kernel: random: crng init done
Sep 12 23:06:08.132917 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Sep 12 23:06:08.132925 kernel: secureboot: Secure boot enabled
Sep 12 23:06:08.132932 kernel: SMBIOS 2.8 present.
Sep 12 23:06:08.132939 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 12 23:06:08.132947 kernel: DMI: Memory slots populated: 1/1
Sep 12 23:06:08.132954 kernel: Hypervisor detected: KVM
Sep 12 23:06:08.132962 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 23:06:08.132969 kernel: kvm-clock: using sched offset of 8318340367 cycles
Sep 12 23:06:08.132980 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 23:06:08.132988 kernel: tsc: Detected 2794.748 MHz processor
Sep 12 23:06:08.132995 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 23:06:08.133003 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 23:06:08.133011 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Sep 12 23:06:08.133018 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 12 23:06:08.133029 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 23:06:08.133039 kernel: Using GB pages for direct mapping
Sep 12 23:06:08.133049 kernel: ACPI: Early table checksum verification disabled
Sep 12 23:06:08.133059 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Sep 12 23:06:08.133067 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 12 23:06:08.133075 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:06:08.133083 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:06:08.133090 kernel: ACPI: FACS 0x000000009BBDD000 000040
Sep 12 23:06:08.133098 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:06:08.133106 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:06:08.133113 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:06:08.133121 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:06:08.133131 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 12 23:06:08.133139 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Sep 12 23:06:08.133146 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Sep 12 23:06:08.133154 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Sep 12 23:06:08.133162 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Sep 12 23:06:08.133169 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Sep 12 23:06:08.133177 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Sep 12 23:06:08.133184 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Sep 12 23:06:08.133194 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Sep 12 23:06:08.133202 kernel: No NUMA configuration found
Sep 12 23:06:08.133209 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Sep 12 23:06:08.133217 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Sep 12 23:06:08.133229 kernel: Zone ranges:
Sep 12 23:06:08.133247 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 23:06:08.133256 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Sep 12 23:06:08.133264 kernel: Normal empty
Sep 12 23:06:08.133288 kernel: Device empty
Sep 12 23:06:08.133297 kernel: Movable zone start for each node
Sep 12 23:06:08.133308 kernel: Early memory node ranges
Sep 12 23:06:08.133316 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Sep 12 23:06:08.133323 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Sep 12 23:06:08.133331 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Sep 12 23:06:08.133339 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Sep 12 23:06:08.133346 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Sep 12 23:06:08.133354 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Sep 12 23:06:08.133362 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 23:06:08.133371 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Sep 12 23:06:08.133382 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 12 23:06:08.133389 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 12 23:06:08.133397 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 12 23:06:08.133409 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Sep 12 23:06:08.133418 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 23:06:08.133425 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 23:06:08.133433 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 23:06:08.133441 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 23:06:08.133448 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 23:06:08.133462 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 23:06:08.133469 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 23:06:08.133477 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 23:06:08.133485 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 23:06:08.133492 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 23:06:08.133500 kernel: TSC deadline timer available
Sep 12 23:06:08.133507 kernel: CPU topo: Max. logical packages: 1
Sep 12 23:06:08.133515 kernel: CPU topo: Max. logical dies: 1
Sep 12 23:06:08.133523 kernel: CPU topo: Max. dies per package: 1
Sep 12 23:06:08.133553 kernel: CPU topo: Max. threads per core: 1
Sep 12 23:06:08.133561 kernel: CPU topo: Num. cores per package: 4
Sep 12 23:06:08.133568 kernel: CPU topo: Num. threads per package: 4
Sep 12 23:06:08.133578 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 12 23:06:08.133589 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 23:06:08.133597 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 12 23:06:08.133605 kernel: kvm-guest: setup PV sched yield
Sep 12 23:06:08.133613 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 12 23:06:08.133623 kernel: Booting paravirtualized kernel on KVM
Sep 12 23:06:08.133632 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 23:06:08.133640 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 12 23:06:08.133648 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 12 23:06:08.133656 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 12 23:06:08.133664 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 12 23:06:08.133672 kernel: kvm-guest: PV spinlocks enabled
Sep 12 23:06:08.133680 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 23:06:08.133689 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8e60d6befc710e967d67e9a1d87ced7416895090c99a765b3a00e66a62f49e40
Sep 12 23:06:08.133699 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 23:06:08.133707 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 23:06:08.133715 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 23:06:08.133723 kernel: Fallback order for Node 0: 0
Sep 12 23:06:08.133731 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Sep 12 23:06:08.133739 kernel: Policy zone: DMA32
Sep 12 23:06:08.133747 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 23:06:08.133755 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 23:06:08.133766 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 12 23:06:08.133774 kernel: ftrace: allocated 157 pages with 5 groups
Sep 12 23:06:08.133782 kernel: Dynamic Preempt: voluntary
Sep 12 23:06:08.133790 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 23:06:08.133799 kernel: rcu: RCU event tracing is enabled.
Sep 12 23:06:08.133807 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 23:06:08.133815 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 23:06:08.133823 kernel: Rude variant of Tasks RCU enabled.
Sep 12 23:06:08.133839 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 23:06:08.133850 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 23:06:08.133858 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 23:06:08.133866 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 23:06:08.133874 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 23:06:08.133885 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 23:06:08.133894 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 12 23:06:08.133902 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 23:06:08.133909 kernel: Console: colour dummy device 80x25
Sep 12 23:06:08.133917 kernel: printk: legacy console [ttyS0] enabled
Sep 12 23:06:08.133927 kernel: ACPI: Core revision 20240827
Sep 12 23:06:08.133935 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 23:06:08.133943 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 23:06:08.133951 kernel: x2apic enabled
Sep 12 23:06:08.133959 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 23:06:08.133967 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 12 23:06:08.133975 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 12 23:06:08.133983 kernel: kvm-guest: setup PV IPIs
Sep 12 23:06:08.133991 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 23:06:08.134002 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 12 23:06:08.134010 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 12 23:06:08.134018 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 23:06:08.134026 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 12 23:06:08.134034 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 12 23:06:08.134044 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 23:06:08.134052 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 23:06:08.134060 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 23:06:08.134068 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 12 23:06:08.134078 kernel: active return thunk: retbleed_return_thunk
Sep 12 23:06:08.134086 kernel: RETBleed: Mitigation: untrained return thunk
Sep 12 23:06:08.134094 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 23:06:08.134102 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 23:06:08.134110 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 12 23:06:08.134119 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 12 23:06:08.134127 kernel: active return thunk: srso_return_thunk
Sep 12 23:06:08.134135 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 12 23:06:08.134146 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 23:06:08.134154 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 23:06:08.134162 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 23:06:08.134169 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 23:06:08.134178 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 12 23:06:08.134186 kernel: Freeing SMP alternatives memory: 32K
Sep 12 23:06:08.134194 kernel: pid_max: default: 32768 minimum: 301
Sep 12 23:06:08.134202 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 23:06:08.134209 kernel: landlock: Up and running.
Sep 12 23:06:08.134220 kernel: SELinux: Initializing.
Sep 12 23:06:08.134227 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:06:08.134236 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:06:08.134244 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 12 23:06:08.134252 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 12 23:06:08.134259 kernel: ... version: 0
Sep 12 23:06:08.134269 kernel: ... bit width: 48
Sep 12 23:06:08.134277 kernel: ... generic registers: 6
Sep 12 23:06:08.134285 kernel: ... value mask: 0000ffffffffffff
Sep 12 23:06:08.134296 kernel: ... max period: 00007fffffffffff
Sep 12 23:06:08.134304 kernel: ... fixed-purpose events: 0
Sep 12 23:06:08.134311 kernel: ... event mask: 000000000000003f
Sep 12 23:06:08.134319 kernel: signal: max sigframe size: 1776
Sep 12 23:06:08.134327 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 23:06:08.134335 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 23:06:08.134343 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 23:06:08.134351 kernel: smp: Bringing up secondary CPUs ...
Sep 12 23:06:08.134359 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 23:06:08.134369 kernel: .... node #0, CPUs: #1 #2 #3
Sep 12 23:06:08.134377 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 23:06:08.134384 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 12 23:06:08.134393 kernel: Memory: 2409224K/2552216K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54084K init, 2880K bss, 137064K reserved, 0K cma-reserved)
Sep 12 23:06:08.134401 kernel: devtmpfs: initialized
Sep 12 23:06:08.134409 kernel: x86/mm: Memory block size: 128MB
Sep 12 23:06:08.134417 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Sep 12 23:06:08.134425 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Sep 12 23:06:08.134444 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 23:06:08.134456 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 23:06:08.134476 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 23:06:08.134484 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 23:06:08.134494 kernel: audit: initializing netlink subsys (disabled)
Sep 12 23:06:08.134503 kernel: audit: type=2000 audit(1757718363.024:1): state=initialized audit_enabled=0 res=1
Sep 12 23:06:08.134513 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 23:06:08.134521 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 23:06:08.134529 kernel: cpuidle: using governor menu
Sep 12 23:06:08.134549 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 23:06:08.134560 kernel: dca service started, version 1.12.1
Sep 12 23:06:08.134573 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 12 23:06:08.134581 kernel: PCI: Using configuration type 1 for base access
Sep 12 23:06:08.134590 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 23:06:08.134598 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 23:06:08.134606 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 23:06:08.134614 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 23:06:08.134622 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 23:06:08.134630 kernel: ACPI: Added _OSI(Module Device)
Sep 12 23:06:08.134641 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 23:06:08.134649 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 23:06:08.134657 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 23:06:08.134665 kernel: ACPI: Interpreter enabled
Sep 12 23:06:08.134672 kernel: ACPI: PM: (supports S0 S5)
Sep 12 23:06:08.134680 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 23:06:08.134689 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 23:06:08.134697 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 23:06:08.134705 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 12 23:06:08.134715 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 23:06:08.135010 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 23:06:08.135142 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 12 23:06:08.135266 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 12 23:06:08.135277 kernel: PCI host bridge to bus 0000:00
Sep 12 23:06:08.135416 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 23:06:08.135574 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 23:06:08.135694 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 23:06:08.135806 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 12 23:06:08.135929 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 12 23:06:08.136041 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 12 23:06:08.136153 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 23:06:08.136323 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 12 23:06:08.136472 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 12 23:06:08.136657 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 12 23:06:08.136785 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 12 23:06:08.136919 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 12 23:06:08.137041 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 23:06:08.137183 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 12 23:06:08.137307 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 12 23:06:08.137437 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 12 23:06:08.137605 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 12 23:06:08.137755 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 12 23:06:08.137891 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 12 23:06:08.138015 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 12 23:06:08.138138 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 12 23:06:08.138306 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 12 23:06:08.138441 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 12 23:06:08.138584 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 12 23:06:08.138732 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 12 23:06:08.138869 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 12 23:06:08.139011 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 12 23:06:08.139134 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 12 23:06:08.139275 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 12 23:06:08.139398 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 12 23:06:08.139527 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 12 23:06:08.139703 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 12 23:06:08.139838 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 12 23:06:08.139849 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 23:06:08.139858 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 23:06:08.139871 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 23:06:08.139879 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 23:06:08.139888 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 12 23:06:08.139896 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 12 23:06:08.139904 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 12 23:06:08.139912 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 12 23:06:08.139920 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 12 23:06:08.139928 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 12 23:06:08.139936 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 12 23:06:08.139947 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 12 23:06:08.139955 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 12 23:06:08.139964 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 12 23:06:08.139972 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 12 23:06:08.139980 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 12 23:06:08.139988 kernel: iommu: Default domain type: Translated
Sep 12 23:06:08.139996 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 23:06:08.140004 kernel: efivars: Registered efivars operations
Sep 12 23:06:08.140012 kernel: PCI: Using ACPI for IRQ routing
Sep 12 23:06:08.140023 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 23:06:08.140031 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Sep 12 23:06:08.140040 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff]
Sep 12 23:06:08.140048 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff]
Sep 12 23:06:08.140055 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Sep 12 23:06:08.140063 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Sep 12 23:06:08.140220 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 12 23:06:08.140344 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 12 23:06:08.140467 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 23:06:08.140482 kernel: vgaarb: loaded
Sep 12 23:06:08.140491 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 23:06:08.140499 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 23:06:08.140507 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 23:06:08.140515 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 23:06:08.140523 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 23:06:08.140551 kernel: pnp: PnP ACPI init
Sep 12 23:06:08.140722 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 12 23:06:08.140741 kernel: pnp: PnP ACPI: found 6 devices
Sep 12 23:06:08.140749 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 23:06:08.140758 kernel: NET: Registered PF_INET protocol family
Sep 12 23:06:08.140766 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 23:06:08.140779 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 23:06:08.140795 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 23:06:08.140810 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 23:06:08.140819 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 23:06:08.140835 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 23:06:08.140846 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:06:08.140855 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:06:08.140864 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 23:06:08.140872 kernel: NET: Registered PF_XDP protocol family
Sep 12 23:06:08.141004 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 12 23:06:08.141127 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 12 23:06:08.141244 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 23:06:08.141356 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 23:06:08.141472 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 23:06:08.141672 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 12 23:06:08.142063 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 12 23:06:08.142179 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 12 23:06:08.142190 kernel: PCI: CLS 0 bytes, default 64
Sep 12 23:06:08.142198 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 12 23:06:08.142206 kernel: Initialise system trusted keyrings
Sep 12 23:06:08.142215 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 23:06:08.142227 kernel: Key type asymmetric registered
Sep 12 23:06:08.142235 kernel: Asymmetric key parser 'x509' registered
Sep 12 23:06:08.142259 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 23:06:08.142270 kernel: io scheduler mq-deadline registered
Sep 12 23:06:08.142279 kernel: io scheduler kyber registered
Sep 12 23:06:08.142287 kernel: io scheduler bfq registered
Sep 12 23:06:08.142296 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 23:06:08.142305 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 12 23:06:08.142314 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 12 23:06:08.142322 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 12 23:06:08.142333 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 23:06:08.142341 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 23:06:08.142350 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 23:06:08.142360 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 23:06:08.142369 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 23:06:08.142503 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 12 23:06:08.142516 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 23:06:08.142654 kernel: rtc_cmos 00:04: registered as rtc0
Sep 12 23:06:08.142776 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T23:06:07 UTC (1757718367)
Sep 12 23:06:08.142903 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 12 23:06:08.142914 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 12 23:06:08.142923 kernel: efifb: probing for efifb
Sep 12 23:06:08.142932 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 12 23:06:08.142940 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 12 23:06:08.142949 kernel: efifb: scrolling: redraw
Sep 12 23:06:08.142957 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 12 23:06:08.142969 kernel: Console: switching to colour frame buffer device 160x50
Sep 12 23:06:08.142978 kernel: fb0: EFI VGA frame buffer device
Sep 12 23:06:08.142988 kernel: pstore: Using crash dump compression: deflate
Sep 12 23:06:08.142997 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 12 23:06:08.143005 kernel: NET: Registered PF_INET6 protocol family
Sep 12 23:06:08.143014 kernel: Segment Routing with IPv6
Sep 12 23:06:08.143025 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 23:06:08.143033 kernel: NET: Registered PF_PACKET protocol family
Sep 12 23:06:08.143042 kernel: Key type dns_resolver registered
Sep 12 23:06:08.143050 kernel: IPI shorthand broadcast: enabled
Sep 12 23:06:08.143059 kernel: sched_clock: Marking stable (4830005645, 166461182)->(5038362338, -41895511)
Sep 12 23:06:08.143067 kernel: registered taskstats version 1
Sep 12 23:06:08.143076 kernel: Loading compiled-in X.509 certificates
Sep 12 23:06:08.143084 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: c3297a5801573420030c321362a802da1fd49c4e'
Sep 12 23:06:08.143093 kernel: Demotion targets for Node 0: null
Sep 12 23:06:08.143103 kernel: Key type .fscrypt registered
Sep 12 23:06:08.143112 kernel: Key type fscrypt-provisioning registered
Sep 12 23:06:08.143121 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 23:06:08.143129 kernel: ima: Allocated hash algorithm: sha1
Sep 12 23:06:08.143138 kernel: ima: No architecture policies found
Sep 12 23:06:08.143146 kernel: clk: Disabling unused clocks
Sep 12 23:06:08.143154 kernel: Warning: unable to open an initial console.
Sep 12 23:06:08.143163 kernel: Freeing unused kernel image (initmem) memory: 54084K Sep 12 23:06:08.143171 kernel: Write protecting the kernel read-only data: 24576k Sep 12 23:06:08.143182 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K Sep 12 23:06:08.143191 kernel: Run /init as init process Sep 12 23:06:08.143200 kernel: with arguments: Sep 12 23:06:08.143208 kernel: /init Sep 12 23:06:08.143216 kernel: with environment: Sep 12 23:06:08.143225 kernel: HOME=/ Sep 12 23:06:08.143233 kernel: TERM=linux Sep 12 23:06:08.143241 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 23:06:08.143255 systemd[1]: Successfully made /usr/ read-only. Sep 12 23:06:08.143270 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 23:06:08.143280 systemd[1]: Detected virtualization kvm. Sep 12 23:06:08.143289 systemd[1]: Detected architecture x86-64. Sep 12 23:06:08.143298 systemd[1]: Running in initrd. Sep 12 23:06:08.143306 systemd[1]: No hostname configured, using default hostname. Sep 12 23:06:08.143316 systemd[1]: Hostname set to . Sep 12 23:06:08.143327 systemd[1]: Initializing machine ID from VM UUID. Sep 12 23:06:08.143336 systemd[1]: Queued start job for default target initrd.target. Sep 12 23:06:08.143345 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 23:06:08.143355 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 23:06:08.143364 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Sep 12 23:06:08.143374 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 23:06:08.143383 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 23:06:08.143395 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 23:06:08.143408 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 23:06:08.143418 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 23:06:08.143427 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 23:06:08.143436 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 23:06:08.143445 systemd[1]: Reached target paths.target - Path Units. Sep 12 23:06:08.143455 systemd[1]: Reached target slices.target - Slice Units. Sep 12 23:06:08.143465 systemd[1]: Reached target swap.target - Swaps. Sep 12 23:06:08.143476 systemd[1]: Reached target timers.target - Timer Units. Sep 12 23:06:08.143488 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 23:06:08.143497 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 23:06:08.143506 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 23:06:08.143515 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 12 23:06:08.143524 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 23:06:08.143547 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 23:06:08.143569 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 23:06:08.143578 systemd[1]: Reached target sockets.target - Socket Units. 
Sep 12 23:06:08.143590 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 23:06:08.143599 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 23:06:08.143608 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 23:06:08.143617 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 12 23:06:08.143626 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 23:06:08.143635 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 23:06:08.143644 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 23:06:08.143653 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:06:08.143662 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 23:06:08.143674 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 23:06:08.143683 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 23:06:08.143692 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 23:06:08.143818 systemd-journald[220]: Collecting audit messages is disabled. Sep 12 23:06:08.143852 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:06:08.143861 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 23:06:08.143870 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 23:06:08.143880 systemd-journald[220]: Journal started Sep 12 23:06:08.143905 systemd-journald[220]: Runtime Journal (/run/log/journal/252cb7e5f4d0417da0f4cffaec41ac38) is 6M, max 48.2M, 42.2M free. 
Sep 12 23:06:08.131738 systemd-modules-load[221]: Inserted module 'overlay' Sep 12 23:06:08.156562 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 23:06:08.156619 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 23:06:08.162105 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 23:06:08.218579 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 23:06:08.220924 systemd-modules-load[221]: Inserted module 'br_netfilter' Sep 12 23:06:08.221563 kernel: Bridge firewalling registered Sep 12 23:06:08.223255 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 23:06:08.224628 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 23:06:08.233820 systemd-tmpfiles[242]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 12 23:06:08.239070 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 23:06:08.240928 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 23:06:08.243837 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:06:08.246743 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 23:06:08.252192 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 23:06:08.254211 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 12 23:06:08.290555 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8e60d6befc710e967d67e9a1d87ced7416895090c99a765b3a00e66a62f49e40 Sep 12 23:06:08.316610 systemd-resolved[262]: Positive Trust Anchors: Sep 12 23:06:08.316642 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 23:06:08.316679 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 23:06:08.320333 systemd-resolved[262]: Defaulting to hostname 'linux'. Sep 12 23:06:08.322049 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 23:06:08.328712 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:06:08.424576 kernel: SCSI subsystem initialized Sep 12 23:06:08.436590 kernel: Loading iSCSI transport class v2.0-870. Sep 12 23:06:08.450577 kernel: iscsi: registered transport (tcp) Sep 12 23:06:08.478596 kernel: iscsi: registered transport (qla4xxx) Sep 12 23:06:08.478689 kernel: QLogic iSCSI HBA Driver Sep 12 23:06:08.505861 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 12 23:06:08.534836 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 23:06:08.535343 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 23:06:08.607144 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 23:06:08.633879 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 23:06:08.731590 kernel: raid6: avx2x4 gen() 28570 MB/s Sep 12 23:06:08.748583 kernel: raid6: avx2x2 gen() 28729 MB/s Sep 12 23:06:08.765804 kernel: raid6: avx2x1 gen() 24867 MB/s Sep 12 23:06:08.765872 kernel: raid6: using algorithm avx2x2 gen() 28729 MB/s Sep 12 23:06:08.789771 kernel: raid6: .... xor() 15681 MB/s, rmw enabled Sep 12 23:06:08.789865 kernel: raid6: using avx2x2 recovery algorithm Sep 12 23:06:08.815661 kernel: xor: automatically using best checksumming function avx Sep 12 23:06:09.107584 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 23:06:09.118446 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 23:06:09.120684 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:06:09.160137 systemd-udevd[472]: Using default interface naming scheme 'v255'. Sep 12 23:06:09.166228 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 23:06:09.171698 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 23:06:09.206077 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation Sep 12 23:06:09.243877 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 23:06:09.245726 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 23:06:09.351335 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:06:09.355373 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 12 23:06:09.422586 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 12 23:06:09.428993 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 12 23:06:09.434820 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 23:06:09.434867 kernel: GPT:9289727 != 19775487 Sep 12 23:06:09.434882 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 23:06:09.434896 kernel: GPT:9289727 != 19775487 Sep 12 23:06:09.434909 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 23:06:09.434922 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 23:06:09.434937 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 23:06:09.461572 kernel: libata version 3.00 loaded. Sep 12 23:06:09.466892 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 23:06:09.472643 kernel: AES CTR mode by8 optimization enabled Sep 12 23:06:09.467053 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:06:09.472514 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:06:09.478462 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:06:09.482298 kernel: ahci 0000:00:1f.2: version 3.0 Sep 12 23:06:09.482757 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 12 23:06:09.484102 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 23:06:09.487058 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 12 23:06:09.493549 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 12 23:06:09.493752 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 12 23:06:09.493934 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 12 23:06:09.516164 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Sep 12 23:06:09.633884 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 23:06:09.641527 kernel: scsi host0: ahci Sep 12 23:06:09.647883 kernel: scsi host1: ahci Sep 12 23:06:09.648225 kernel: scsi host2: ahci Sep 12 23:06:09.648466 kernel: scsi host3: ahci Sep 12 23:06:09.649657 kernel: scsi host4: ahci Sep 12 23:06:09.649870 kernel: scsi host5: ahci Sep 12 23:06:09.650106 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 12 23:06:09.634017 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:06:09.659994 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 12 23:06:09.660029 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 12 23:06:09.660060 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 12 23:06:09.660074 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 12 23:06:09.660088 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 12 23:06:09.678824 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 12 23:06:09.692159 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 23:06:09.703595 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 12 23:06:09.707580 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 23:06:09.712477 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 23:06:09.715397 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:06:09.764099 disk-uuid[620]: Primary Header is updated. 
Sep 12 23:06:09.764099 disk-uuid[620]: Secondary Entries is updated. Sep 12 23:06:09.764099 disk-uuid[620]: Secondary Header is updated. Sep 12 23:06:09.769583 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 23:06:09.775596 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 23:06:09.795876 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:06:09.959317 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 12 23:06:09.959412 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 12 23:06:09.959444 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 12 23:06:09.960574 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 12 23:06:09.961586 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 12 23:06:09.962576 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 12 23:06:09.962602 kernel: ata3.00: LPM support broken, forcing max_power Sep 12 23:06:09.963937 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 12 23:06:09.963962 kernel: ata3.00: applying bridge limits Sep 12 23:06:09.964580 kernel: ata3.00: LPM support broken, forcing max_power Sep 12 23:06:09.965893 kernel: ata3.00: configured for UDMA/100 Sep 12 23:06:09.966575 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 12 23:06:10.031603 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 12 23:06:10.032069 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 23:06:10.059584 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 12 23:06:10.471331 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 23:06:10.474634 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 23:06:10.477577 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 23:06:10.480201 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Sep 12 23:06:10.483829 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 23:06:10.514440 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 23:06:10.784184 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 23:06:10.786048 disk-uuid[624]: The operation has completed successfully. Sep 12 23:06:10.845335 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 23:06:10.845521 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 23:06:10.925063 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 23:06:10.945882 sh[662]: Success Sep 12 23:06:10.984899 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 23:06:10.985005 kernel: device-mapper: uevent: version 1.0.3 Sep 12 23:06:10.988443 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 12 23:06:11.059777 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 12 23:06:11.141140 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 23:06:11.167641 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 23:06:11.169267 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 23:06:11.187752 kernel: BTRFS: device fsid 5d2ab445-1154-4e47-9d7e-ff4b81d84474 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (674) Sep 12 23:06:11.191226 kernel: BTRFS info (device dm-0): first mount of filesystem 5d2ab445-1154-4e47-9d7e-ff4b81d84474 Sep 12 23:06:11.191271 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 23:06:11.200919 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 23:06:11.201008 kernel: BTRFS info (device dm-0): enabling free space tree Sep 12 23:06:11.202998 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Sep 12 23:06:11.203813 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 12 23:06:11.206578 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 23:06:11.207800 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 23:06:11.212722 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 23:06:11.259893 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (707) Sep 12 23:06:11.265408 kernel: BTRFS info (device vda6): first mount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 23:06:11.265505 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 23:06:11.276563 kernel: BTRFS info (device vda6): turning on async discard Sep 12 23:06:11.276659 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 23:06:11.290649 kernel: BTRFS info (device vda6): last unmount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 23:06:11.300216 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 23:06:11.311334 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 12 23:06:11.428310 ignition[758]: Ignition 2.22.0 Sep 12 23:06:11.428323 ignition[758]: Stage: fetch-offline Sep 12 23:06:11.428392 ignition[758]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:06:11.428402 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 23:06:11.428505 ignition[758]: parsed url from cmdline: "" Sep 12 23:06:11.428511 ignition[758]: no config URL provided Sep 12 23:06:11.428517 ignition[758]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 23:06:11.428526 ignition[758]: no config at "/usr/lib/ignition/user.ign" Sep 12 23:06:11.428573 ignition[758]: op(1): [started] loading QEMU firmware config module Sep 12 23:06:11.428582 ignition[758]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 12 23:06:11.441775 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 23:06:11.445697 ignition[758]: op(1): [finished] loading QEMU firmware config module Sep 12 23:06:11.452135 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 23:06:11.489012 ignition[758]: parsing config with SHA512: 68797c764bcc477e26e25af0dd17866b6ccb273c848a9d8a79d82f437439c7003745f4aa4aeddcd5ebf3d1653c1458d0a4b9a87740b36f348e3ef6532d4d531d Sep 12 23:06:11.494772 unknown[758]: fetched base config from "system" Sep 12 23:06:11.495362 ignition[758]: fetch-offline: fetch-offline passed Sep 12 23:06:11.494791 unknown[758]: fetched user config from "qemu" Sep 12 23:06:11.495436 ignition[758]: Ignition finished successfully Sep 12 23:06:11.499074 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 23:06:11.533048 systemd-networkd[852]: lo: Link UP Sep 12 23:06:11.533062 systemd-networkd[852]: lo: Gained carrier Sep 12 23:06:11.536784 systemd-networkd[852]: Enumeration completed Sep 12 23:06:11.537067 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 12 23:06:11.540191 systemd[1]: Reached target network.target - Network. Sep 12 23:06:11.540301 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 23:06:11.541793 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 23:06:11.546307 systemd-networkd[852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:06:11.546316 systemd-networkd[852]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 23:06:11.550894 systemd-networkd[852]: eth0: Link UP Sep 12 23:06:11.552750 systemd-networkd[852]: eth0: Gained carrier Sep 12 23:06:11.552774 systemd-networkd[852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:06:11.570614 systemd-networkd[852]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 23:06:11.613125 ignition[856]: Ignition 2.22.0 Sep 12 23:06:11.613140 ignition[856]: Stage: kargs Sep 12 23:06:11.613287 ignition[856]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:06:11.613299 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 23:06:11.614089 ignition[856]: kargs: kargs passed Sep 12 23:06:11.614138 ignition[856]: Ignition finished successfully Sep 12 23:06:11.619207 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 23:06:11.622281 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 12 23:06:11.670621 ignition[865]: Ignition 2.22.0 Sep 12 23:06:11.670636 ignition[865]: Stage: disks Sep 12 23:06:11.670835 ignition[865]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:06:11.670849 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 23:06:11.672043 ignition[865]: disks: disks passed Sep 12 23:06:11.675472 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 23:06:11.672096 ignition[865]: Ignition finished successfully Sep 12 23:06:11.676990 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 23:06:11.678787 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 23:06:11.679003 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 23:06:11.679358 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 23:06:11.679919 systemd[1]: Reached target basic.target - Basic System. Sep 12 23:06:11.687676 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 23:06:11.732918 systemd-resolved[262]: Detected conflict on linux IN A 10.0.0.139 Sep 12 23:06:11.732947 systemd-resolved[262]: Hostname conflict, changing published hostname from 'linux' to 'linux9'. Sep 12 23:06:11.745013 systemd-fsck[875]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 12 23:06:11.770777 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 23:06:11.793837 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 23:06:12.181591 kernel: EXT4-fs (vda9): mounted filesystem d027afc5-396a-49bf-a5be-60ddd42cb089 r/w with ordered data mode. Quota mode: none. Sep 12 23:06:12.182415 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 23:06:12.183263 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 23:06:12.191850 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 12 23:06:12.196082 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 23:06:12.197721 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 23:06:12.197805 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 23:06:12.197845 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 23:06:12.220915 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 23:06:12.227185 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 23:06:12.232936 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (883) Sep 12 23:06:12.237810 kernel: BTRFS info (device vda6): first mount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 23:06:12.237880 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 23:06:12.250415 kernel: BTRFS info (device vda6): turning on async discard Sep 12 23:06:12.250512 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 23:06:12.262152 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 23:06:12.399444 initrd-setup-root[907]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 23:06:12.430550 initrd-setup-root[914]: cut: /sysroot/etc/group: No such file or directory Sep 12 23:06:12.460736 initrd-setup-root[921]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 23:06:12.478093 initrd-setup-root[928]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 23:06:12.786052 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 23:06:12.790723 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 23:06:12.798999 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Sep 12 23:06:12.812831 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 23:06:12.816734 kernel: BTRFS info (device vda6): last unmount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 23:06:12.866879 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 23:06:12.894071 ignition[996]: INFO : Ignition 2.22.0 Sep 12 23:06:12.894071 ignition[996]: INFO : Stage: mount Sep 12 23:06:12.898865 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 23:06:12.898865 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 23:06:12.898865 ignition[996]: INFO : mount: mount passed Sep 12 23:06:12.898865 ignition[996]: INFO : Ignition finished successfully Sep 12 23:06:12.906755 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 23:06:12.916629 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 23:06:12.983983 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 23:06:13.031882 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1009) Sep 12 23:06:13.035948 kernel: BTRFS info (device vda6): first mount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 23:06:13.036001 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 23:06:13.055049 kernel: BTRFS info (device vda6): turning on async discard Sep 12 23:06:13.055148 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 23:06:13.058633 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 23:06:13.151915 ignition[1026]: INFO : Ignition 2.22.0
Sep 12 23:06:13.151915 ignition[1026]: INFO : Stage: files
Sep 12 23:06:13.165830 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:06:13.165830 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:06:13.165830 ignition[1026]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 23:06:13.165830 ignition[1026]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 23:06:13.165830 ignition[1026]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 23:06:13.194670 ignition[1026]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 23:06:13.194670 ignition[1026]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 23:06:13.194670 ignition[1026]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 23:06:13.191596 unknown[1026]: wrote ssh authorized keys file for user: core
Sep 12 23:06:13.226276 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 12 23:06:13.226276 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 12 23:06:13.285983 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 23:06:13.387223 systemd-networkd[852]: eth0: Gained IPv6LL
Sep 12 23:06:13.520810 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 12 23:06:13.520810 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 23:06:13.520810 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 12 23:06:13.660097 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 12 23:06:14.200907 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 23:06:14.200907 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 23:06:14.212946 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 23:06:14.212946 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 23:06:14.212946 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 23:06:14.212946 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 23:06:14.212946 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 23:06:14.212946 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 23:06:14.212946 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 23:06:14.242948 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 23:06:14.245939 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 23:06:14.245939 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 23:06:14.253644 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 23:06:14.253644 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 23:06:14.259339 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 12 23:06:14.518439 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 12 23:06:16.155474 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 23:06:16.155474 ignition[1026]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 12 23:06:16.160620 ignition[1026]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 23:06:16.696416 ignition[1026]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 23:06:16.696416 ignition[1026]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 12 23:06:16.696416 ignition[1026]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 12 23:06:16.696416 ignition[1026]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 23:06:16.696416 ignition[1026]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 23:06:16.696416 ignition[1026]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 12 23:06:16.696416 ignition[1026]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 12 23:06:16.751404 ignition[1026]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 23:06:16.770785 ignition[1026]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 23:06:16.773282 ignition[1026]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 12 23:06:16.773282 ignition[1026]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 23:06:16.777139 ignition[1026]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 23:06:16.777139 ignition[1026]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 23:06:16.777139 ignition[1026]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 23:06:16.777139 ignition[1026]: INFO : files: files passed
Sep 12 23:06:16.777139 ignition[1026]: INFO : Ignition finished successfully
Sep 12 23:06:16.796156 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 23:06:16.800311 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 23:06:16.803378 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 23:06:16.829276 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 23:06:16.829414 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 23:06:16.833617 initrd-setup-root-after-ignition[1055]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 12 23:06:16.835509 initrd-setup-root-after-ignition[1057]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:06:16.835509 initrd-setup-root-after-ignition[1057]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:06:16.840004 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:06:16.843790 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 23:06:16.857846 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 23:06:16.860522 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 23:06:16.951109 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 23:06:16.951260 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 23:06:16.952586 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 23:06:16.957125 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 23:06:16.957376 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 23:06:16.958839 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 23:06:16.990972 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 23:06:16.993084 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 23:06:17.029345 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 23:06:17.030939 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:06:17.033868 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 23:06:17.037621 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 23:06:17.037869 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 23:06:17.042863 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 23:06:17.044420 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 23:06:17.046971 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 23:06:17.049867 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 23:06:17.052226 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 23:06:17.055759 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 23:06:17.058861 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 23:06:17.061614 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 23:06:17.064999 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 23:06:17.065227 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 23:06:17.068614 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 23:06:17.069904 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 23:06:17.070149 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 23:06:17.074905 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:06:17.076132 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:06:17.077348 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 23:06:17.079702 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:06:17.080038 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 23:06:17.080247 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 23:06:17.084471 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 23:06:17.084693 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 23:06:17.087404 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 23:06:17.091677 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 23:06:17.096735 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:06:17.097011 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 23:06:17.101316 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 23:06:17.102306 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 23:06:17.102465 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 23:06:17.104749 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 23:06:17.104942 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 23:06:17.106700 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 23:06:17.106963 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 23:06:17.108985 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 23:06:17.109118 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 23:06:17.114238 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 23:06:17.120742 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 23:06:17.123597 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 23:06:17.123917 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:06:17.128903 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 23:06:17.129118 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 23:06:17.137994 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 23:06:17.138169 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 23:06:17.160107 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 23:06:17.165879 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 23:06:17.166061 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 23:06:17.173292 ignition[1081]: INFO : Ignition 2.22.0
Sep 12 23:06:17.173292 ignition[1081]: INFO : Stage: umount
Sep 12 23:06:17.175511 ignition[1081]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:06:17.175511 ignition[1081]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:06:17.178605 ignition[1081]: INFO : umount: umount passed
Sep 12 23:06:17.178605 ignition[1081]: INFO : Ignition finished successfully
Sep 12 23:06:17.181421 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 23:06:17.181802 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 23:06:17.185797 systemd[1]: Stopped target network.target - Network.
Sep 12 23:06:17.187530 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 23:06:17.187684 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 23:06:17.188871 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 23:06:17.189054 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 23:06:17.192342 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 23:06:17.192410 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 23:06:17.194500 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 23:06:17.194600 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 23:06:17.195853 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 23:06:17.195933 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 23:06:17.196468 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 23:06:17.200398 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 23:06:17.213091 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 23:06:17.213268 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 23:06:17.219097 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 23:06:17.219588 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 23:06:17.219654 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 23:06:17.224901 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 23:06:17.225248 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 23:06:17.225424 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 23:06:17.231461 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 23:06:17.232445 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 12 23:06:17.235098 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 23:06:17.235196 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:06:17.239518 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 23:06:17.241721 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 23:06:17.241796 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 23:06:17.244350 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 23:06:17.244407 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:06:17.248002 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 23:06:17.248055 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:06:17.249231 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 23:06:17.252949 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 23:06:17.270061 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 23:06:17.275796 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 23:06:17.277584 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 23:06:17.277645 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:06:17.279960 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 23:06:17.280021 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:06:17.282122 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 23:06:17.282220 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 23:06:17.285438 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 23:06:17.285495 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 23:06:17.288311 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 23:06:17.288382 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:06:17.292591 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 23:06:17.293463 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 12 23:06:17.293528 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 23:06:17.298399 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 23:06:17.298477 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:06:17.302183 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 23:06:17.302255 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 23:06:17.307020 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 23:06:17.307098 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:06:17.309017 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 23:06:17.309098 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:06:17.316683 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 23:06:17.319428 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 23:06:17.331048 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 23:06:17.331227 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 23:06:17.332762 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 23:06:17.337348 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 23:06:17.368498 systemd[1]: Switching root.
Sep 12 23:06:17.415895 systemd-journald[220]: Journal stopped
Sep 12 23:06:18.932524 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 12 23:06:18.932622 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 23:06:18.932643 kernel: SELinux: policy capability open_perms=1
Sep 12 23:06:18.932667 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 23:06:18.932683 kernel: SELinux: policy capability always_check_network=0
Sep 12 23:06:18.932698 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 23:06:18.932714 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 23:06:18.932740 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 23:06:18.932756 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 23:06:18.932772 kernel: SELinux: policy capability userspace_initial_context=0
Sep 12 23:06:18.932795 kernel: audit: type=1403 audit(1757718377.809:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 23:06:18.932819 systemd[1]: Successfully loaded SELinux policy in 67.712ms.
Sep 12 23:06:18.932849 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.511ms.
Sep 12 23:06:18.932868 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 23:06:18.932885 systemd[1]: Detected virtualization kvm.
Sep 12 23:06:18.932905 systemd[1]: Detected architecture x86-64.
Sep 12 23:06:18.932922 systemd[1]: Detected first boot.
Sep 12 23:06:18.932947 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 23:06:18.932964 zram_generator::config[1126]: No configuration found.
Sep 12 23:06:18.932981 kernel: Guest personality initialized and is inactive
Sep 12 23:06:18.932997 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 12 23:06:18.933013 kernel: Initialized host personality
Sep 12 23:06:18.933034 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 23:06:18.933049 systemd[1]: Populated /etc with preset unit settings.
Sep 12 23:06:18.933074 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 23:06:18.933090 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 23:06:18.933107 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 23:06:18.933123 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 23:06:18.933140 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 23:06:18.933155 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 23:06:18.933171 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 23:06:18.933187 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 23:06:18.933203 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 23:06:18.933223 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 23:06:18.933239 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 23:06:18.933255 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 23:06:18.933271 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:06:18.933287 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:06:18.933310 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 23:06:18.933326 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 23:06:18.933359 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 23:06:18.933384 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 23:06:18.933401 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 23:06:18.933418 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:06:18.933433 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:06:18.933450 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 23:06:18.933466 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 23:06:18.933483 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 23:06:18.933500 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 23:06:18.933560 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:06:18.933581 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 23:06:18.933598 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 23:06:18.933614 systemd[1]: Reached target swap.target - Swaps.
Sep 12 23:06:18.933631 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 23:06:18.933649 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 23:06:18.933666 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 23:06:18.933683 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:06:18.933700 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:06:18.933720 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:06:18.933737 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 23:06:18.933761 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 23:06:18.933777 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 23:06:18.933794 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 23:06:18.933812 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 23:06:18.933828 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 23:06:18.933843 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 23:06:18.933859 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 23:06:18.933880 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 23:06:18.933897 systemd[1]: Reached target machines.target - Containers.
Sep 12 23:06:18.933915 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 23:06:18.933931 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 23:06:18.933947 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 23:06:18.933964 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 23:06:18.933980 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 23:06:18.933997 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 23:06:18.934019 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 23:06:18.934036 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 23:06:18.934052 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 23:06:18.934069 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 23:06:18.934086 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 23:06:18.934103 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 23:06:18.934128 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 23:06:18.934145 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 23:06:18.934163 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 23:06:18.934185 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 23:06:18.934202 kernel: fuse: init (API version 7.41)
Sep 12 23:06:18.934219 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 23:06:18.934237 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 23:06:18.934254 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 23:06:18.934276 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 23:06:18.934294 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 23:06:18.934310 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 23:06:18.934328 systemd[1]: Stopped verity-setup.service.
Sep 12 23:06:18.934344 kernel: ACPI: bus type drm_connector registered
Sep 12 23:06:18.934393 systemd-journald[1197]: Collecting audit messages is disabled.
Sep 12 23:06:18.934427 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 23:06:18.934450 kernel: loop: module loaded
Sep 12 23:06:18.934470 systemd-journald[1197]: Journal started
Sep 12 23:06:18.934501 systemd-journald[1197]: Runtime Journal (/run/log/journal/252cb7e5f4d0417da0f4cffaec41ac38) is 6M, max 48.2M, 42.2M free.
Sep 12 23:06:18.652935 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 23:06:18.670057 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 23:06:18.670618 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 23:06:18.938575 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 23:06:18.941446 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 23:06:18.942918 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 23:06:18.944382 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 23:06:18.945707 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 23:06:18.948345 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 23:06:18.949693 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 23:06:18.951098 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 23:06:18.952873 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:06:18.954648 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 23:06:18.954883 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 23:06:18.956554 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 23:06:18.956805 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 23:06:18.958926 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 23:06:18.959158 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 23:06:18.960584 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 23:06:18.960847 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 23:06:18.962473 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 23:06:18.962814 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 23:06:18.964261 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 23:06:18.964762 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 23:06:18.966289 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:06:18.967812 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 23:06:18.969433 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 23:06:18.971279 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 23:06:18.992519 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 23:06:18.996069 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 23:06:18.999685 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 23:06:19.001238 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 23:06:19.001286 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 23:06:19.003995 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 23:06:19.009689 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 23:06:19.011433 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 23:06:19.014125 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 23:06:19.018686 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 23:06:19.020560 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 23:06:19.023716 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 23:06:19.025094 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 23:06:19.028770 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 23:06:19.036613 systemd-journald[1197]: Time spent on flushing to /var/log/journal/252cb7e5f4d0417da0f4cffaec41ac38 is 40.763ms for 1046 entries.
Sep 12 23:06:19.036613 systemd-journald[1197]: System Journal (/var/log/journal/252cb7e5f4d0417da0f4cffaec41ac38) is 8M, max 195.6M, 187.6M free.
Sep 12 23:06:19.083694 systemd-journald[1197]: Received client request to flush runtime journal.
Sep 12 23:06:19.083741 kernel: loop0: detected capacity change from 0 to 128016
Sep 12 23:06:19.033633 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 23:06:19.044357 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 23:06:19.049033 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:06:19.050652 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 23:06:19.052190 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 23:06:19.073027 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:06:19.087932 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 23:06:19.091674 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 23:06:19.094376 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 23:06:19.096942 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Sep 12 23:06:19.096965 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Sep 12 23:06:19.099666 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 12 23:06:19.103673 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 23:06:19.107447 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 23:06:19.123388 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 23:06:19.137317 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 12 23:06:19.148612 kernel: loop1: detected capacity change from 0 to 110984
Sep 12 23:06:19.161215 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 23:06:19.167787 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 23:06:19.183986 kernel: loop2: detected capacity change from 0 to 224512
Sep 12 23:06:19.194428 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Sep 12 23:06:19.195462 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Sep 12 23:06:19.203130 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:06:19.241582 kernel: loop3: detected capacity change from 0 to 128016
Sep 12 23:06:19.253592 kernel: loop4: detected capacity change from 0 to 110984
Sep 12 23:06:19.268909 kernel: loop5: detected capacity change from 0 to 224512
Sep 12 23:06:19.347611 (sd-merge)[1272]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 12 23:06:19.348395 (sd-merge)[1272]: Merged extensions into '/usr'.
Sep 12 23:06:19.354389 systemd[1]: Reload requested from client PID 1245 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 23:06:19.354407 systemd[1]: Reloading...
Sep 12 23:06:19.490594 zram_generator::config[1298]: No configuration found.
Sep 12 23:06:19.763632 ldconfig[1240]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 23:06:19.822079 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 23:06:19.822649 systemd[1]: Reloading finished in 467 ms.
Sep 12 23:06:19.858975 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 23:06:19.877317 systemd[1]: Starting ensure-sysext.service...
Sep 12 23:06:19.880061 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 23:06:19.976734 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 23:06:19.982266 systemd[1]: Reload requested from client PID 1334 ('systemctl') (unit ensure-sysext.service)...
Sep 12 23:06:19.982427 systemd[1]: Reloading...
Sep 12 23:06:19.984849 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 12 23:06:19.985336 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 12 23:06:19.985758 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 23:06:19.986044 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 23:06:19.987068 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 23:06:19.987379 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Sep 12 23:06:19.987507 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Sep 12 23:06:19.992368 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 23:06:19.992380 systemd-tmpfiles[1335]: Skipping /boot
Sep 12 23:06:20.003760 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 23:06:20.003773 systemd-tmpfiles[1335]: Skipping /boot
Sep 12 23:06:20.062735 zram_generator::config[1363]: No configuration found.
Sep 12 23:06:20.328493 systemd[1]: Reloading finished in 345 ms.
Sep 12 23:06:20.350832 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 23:06:20.352892 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 23:06:20.399851 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 23:06:20.402853 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 23:06:20.405685 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 23:06:20.419891 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 23:06:20.423770 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 23:06:20.428048 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 23:06:20.433261 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 23:06:20.433553 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 23:06:20.436745 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 23:06:20.445237 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 23:06:20.449175 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 23:06:20.450483 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 23:06:20.450694 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 23:06:20.450789 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 23:06:20.452031 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 23:06:20.452307 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 23:06:20.466884 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 23:06:20.469055 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 23:06:20.473901 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 23:06:20.474278 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 23:06:20.478960 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 23:06:20.482769 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 23:06:20.496419 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 23:06:20.499437 systemd-udevd[1407]: Using default interface naming scheme 'v255'.
Sep 12 23:06:20.504997 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 23:06:20.505219 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 23:06:20.506662 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 23:06:20.509928 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 23:06:20.515382 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 23:06:20.518192 augenrules[1440]: No rules
Sep 12 23:06:20.523861 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 23:06:20.525167 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 23:06:20.525231 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 23:06:20.527221 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 23:06:20.528441 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 23:06:20.529740 systemd[1]: Finished ensure-sysext.service.
Sep 12 23:06:20.531260 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 23:06:20.532649 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 23:06:20.534741 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 23:06:20.534981 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 23:06:20.536460 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 23:06:20.536729 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 23:06:20.538433 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 23:06:20.541015 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 23:06:20.547123 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 23:06:20.547483 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 23:06:20.552444 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 23:06:20.559995 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 23:06:20.562915 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 23:06:20.565622 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 23:06:20.572941 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 23:06:20.574308 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 23:06:20.574398 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 23:06:20.578136 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 23:06:20.579608 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 23:06:20.689408 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 23:06:20.826080 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 23:06:20.829561 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Sep 12 23:06:20.831903 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 23:06:20.837588 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 23:06:20.843585 kernel: ACPI: button: Power Button [PWRF]
Sep 12 23:06:20.865191 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 23:06:20.933882 systemd-networkd[1465]: lo: Link UP
Sep 12 23:06:20.934388 systemd-networkd[1465]: lo: Gained carrier
Sep 12 23:06:20.938499 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 23:06:20.938695 systemd-networkd[1465]: Enumeration completed
Sep 12 23:06:20.939264 systemd-networkd[1465]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:06:20.939347 systemd-networkd[1465]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 23:06:20.940419 systemd-networkd[1465]: eth0: Link UP
Sep 12 23:06:20.940467 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 23:06:20.941317 systemd-networkd[1465]: eth0: Gained carrier
Sep 12 23:06:20.941408 systemd-networkd[1465]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:06:20.942267 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 23:06:20.946807 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 12 23:06:20.951133 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 23:06:20.956636 systemd-networkd[1465]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 23:06:20.959347 systemd-timesyncd[1469]: Network configuration changed, trying to establish connection.
Sep 12 23:06:20.966083 systemd-timesyncd[1469]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 12 23:06:20.966206 systemd-timesyncd[1469]: Initial clock synchronization to Fri 2025-09-12 23:06:21.170300 UTC.
Sep 12 23:06:20.970474 systemd-resolved[1405]: Positive Trust Anchors:
Sep 12 23:06:20.970498 systemd-resolved[1405]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 23:06:20.970572 systemd-resolved[1405]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 23:06:20.975878 systemd-resolved[1405]: Defaulting to hostname 'linux'.
Sep 12 23:06:20.978359 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 23:06:20.979883 systemd[1]: Reached target network.target - Network.
Sep 12 23:06:20.981000 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 23:06:20.982574 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 23:06:20.984006 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 23:06:20.985585 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 23:06:20.987216 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 12 23:06:20.989129 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 23:06:20.990617 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 23:06:20.992154 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 23:06:20.993701 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 23:06:20.993748 systemd[1]: Reached target paths.target - Path Units.
Sep 12 23:06:20.994871 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 23:06:20.997016 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 23:06:21.000992 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 23:06:21.005366 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 12 23:06:21.007311 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 12 23:06:21.008992 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 12 23:06:21.016763 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 23:06:21.018659 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 12 23:06:21.021495 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 12 23:06:21.026356 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 12 23:06:21.030133 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 12 23:06:21.030379 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 12 23:06:21.025169 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 23:06:21.029905 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 23:06:21.031298 systemd[1]: Reached target basic.target - Basic System.
Sep 12 23:06:21.066866 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 23:06:21.066911 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 23:06:21.070452 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 23:06:21.075760 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 23:06:21.078863 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 23:06:21.082989 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 23:06:21.106732 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 23:06:21.108714 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 23:06:21.114233 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 12 23:06:21.115229 jq[1522]: false
Sep 12 23:06:21.120273 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 23:06:21.125661 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 23:06:21.175903 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 23:06:21.179736 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 23:06:21.192063 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 23:06:21.199489 oslogin_cache_refresh[1528]: Refreshing passwd entry cache
Sep 12 23:06:21.200940 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Refreshing passwd entry cache
Sep 12 23:06:21.196722 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 23:06:21.197709 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 23:06:21.199845 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 23:06:21.210863 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 23:06:21.222178 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 23:06:21.230749 jq[1545]: true
Sep 12 23:06:21.224527 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 23:06:21.231131 extend-filesystems[1527]: Found /dev/vda6
Sep 12 23:06:21.224905 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 23:06:21.247919 oslogin_cache_refresh[1528]: Failure getting users, quitting
Sep 12 23:06:21.250152 extend-filesystems[1527]: Found /dev/vda9
Sep 12 23:06:21.251777 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Failure getting users, quitting
Sep 12 23:06:21.251777 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 12 23:06:21.251777 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Refreshing group entry cache
Sep 12 23:06:21.239742 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 23:06:21.247948 oslogin_cache_refresh[1528]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 12 23:06:21.240735 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 23:06:21.248039 oslogin_cache_refresh[1528]: Refreshing group entry cache
Sep 12 23:06:21.260081 extend-filesystems[1527]: Checking size of /dev/vda9
Sep 12 23:06:21.262044 (ntainerd)[1553]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 23:06:21.273863 jq[1548]: true
Sep 12 23:06:21.271008 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 12 23:06:21.276012 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Failure getting groups, quitting
Sep 12 23:06:21.276012 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 12 23:06:21.264407 oslogin_cache_refresh[1528]: Failure getting groups, quitting
Sep 12 23:06:21.276162 update_engine[1544]: I20250912 23:06:21.264242 1544 main.cc:92] Flatcar Update Engine starting
Sep 12 23:06:21.271648 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 12 23:06:21.264426 oslogin_cache_refresh[1528]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 12 23:06:21.282857 extend-filesystems[1527]: Resized partition /dev/vda9
Sep 12 23:06:21.362898 kernel: kvm_amd: TSC scaling supported
Sep 12 23:06:21.362956 kernel: kvm_amd: Nested Virtualization enabled
Sep 12 23:06:21.362972 kernel: kvm_amd: Nested Paging enabled
Sep 12 23:06:21.362986 kernel: kvm_amd: LBR virtualization supported
Sep 12 23:06:21.363000 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 12 23:06:21.363019 kernel: kvm_amd: Virtual GIF supported
Sep 12 23:06:21.361867 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 23:06:21.363187 extend-filesystems[1566]: resize2fs 1.47.3 (8-Jul-2025)
Sep 12 23:06:21.362226 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 23:06:21.373869 tar[1547]: linux-amd64/LICENSE
Sep 12 23:06:21.374086 tar[1547]: linux-amd64/helm
Sep 12 23:06:21.382719 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:06:21.530507 dbus-daemon[1518]: [system] SELinux support is enabled
Sep 12 23:06:21.534247 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 23:06:21.553779 update_engine[1544]: I20250912 23:06:21.541854 1544 update_check_scheduler.cc:74] Next update check in 2m49s
Sep 12 23:06:21.554379 systemd-logind[1543]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 12 23:06:21.554420 systemd-logind[1543]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 12 23:06:21.554731 systemd-logind[1543]: New seat seat0.
Sep 12 23:06:21.563726 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 23:06:21.565128 dbus-daemon[1518]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 12 23:06:21.564278 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 23:06:21.564302 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 23:06:21.564634 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 23:06:21.564652 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 23:06:21.565058 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 23:06:21.568756 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 23:06:21.601782 kernel: EDAC MC: Ver: 3.0.0
Sep 12 23:06:21.601844 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 12 23:06:21.972430 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:06:21.977999 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 12 23:06:21.994628 locksmithd[1591]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 23:06:22.011163 extend-filesystems[1566]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 12 23:06:22.011163 extend-filesystems[1566]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 12 23:06:22.011163 extend-filesystems[1566]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 12 23:06:22.016272 extend-filesystems[1527]: Resized filesystem in /dev/vda9
Sep 12 23:06:22.017405 bash[1587]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 23:06:22.017580 tar[1547]: linux-amd64/README.md
Sep 12 23:06:22.019858 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 23:06:22.020900 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 23:06:22.023454 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 23:06:22.027056 sshd_keygen[1589]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 23:06:22.031310 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 12 23:06:22.130750 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 12 23:06:22.137357 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 12 23:06:22.140489 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 12 23:06:22.169640 containerd[1553]: time="2025-09-12T23:06:22Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 12 23:06:22.171103 containerd[1553]: time="2025-09-12T23:06:22.170509175Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 12 23:06:22.171289 systemd[1]: issuegen.service: Deactivated successfully.
Sep 12 23:06:22.171683 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 12 23:06:22.184152 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 12 23:06:22.248272 containerd[1553]: time="2025-09-12T23:06:22.248065228Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="25.574µs"
Sep 12 23:06:22.248272 containerd[1553]: time="2025-09-12T23:06:22.248139775Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 12 23:06:22.248272 containerd[1553]: time="2025-09-12T23:06:22.248170678Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 12 23:06:22.248513 containerd[1553]: time="2025-09-12T23:06:22.248456254Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 12 23:06:22.248513 containerd[1553]: time="2025-09-12T23:06:22.248473052Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 12 23:06:22.248513 containerd[1553]: time="2025-09-12T23:06:22.248514207Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 23:06:22.248666 containerd[1553]: time="2025-09-12T23:06:22.248625392Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 23:06:22.248666 containerd[1553]: time="2025-09-12T23:06:22.248643491Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 23:06:22.249056 containerd[1553]: time="2025-09-12T23:06:22.249019254Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 23:06:22.249056 containerd[1553]: time="2025-09-12T23:06:22.249038532Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 23:06:22.249056 containerd[1553]: time="2025-09-12T23:06:22.249051673Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 23:06:22.249162 containerd[1553]: time="2025-09-12T23:06:22.249063329Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 12 23:06:22.249207 containerd[1553]: time="2025-09-12T23:06:22.249184581Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 12 23:06:22.249529 containerd[1553]: time="2025-09-12T23:06:22.249490460Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 23:06:22.249589 containerd[1553]: time="2025-09-12T23:06:22.249538017Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 23:06:22.249589 containerd[1553]: time="2025-09-12T23:06:22.249567740Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 12 23:06:22.249999 containerd[1553]: time="2025-09-12T23:06:22.249948656Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 12 23:06:22.250664 containerd[1553]: time="2025-09-12T23:06:22.250577886Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 12 23:06:22.250886 containerd[1553]: time="2025-09-12T23:06:22.250853958Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 23:06:22.259323 containerd[1553]: time="2025-09-12T23:06:22.259256649Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 12 23:06:22.259428 containerd[1553]: time="2025-09-12T23:06:22.259359681Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 12 23:06:22.259428 containerd[1553]: time="2025-09-12T23:06:22.259387541Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 12 23:06:22.259428 containerd[1553]: time="2025-09-12T23:06:22.259410945Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 12 23:06:22.259505 containerd[1553]: time="2025-09-12T23:06:22.259443220Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 12 23:06:22.259505 containerd[1553]: time="2025-09-12T23:06:22.259461053Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 12 23:06:22.259505 containerd[1553]: time="2025-09-12T23:06:22.259494290Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 12 23:06:22.259666 containerd[1553]: time="2025-09-12T23:06:22.259515042Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 12 23:06:22.259666 containerd[1553]: time="2025-09-12T23:06:22.259535651Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 12 23:06:22.259666 containerd[1553]: time="2025-09-12T23:06:22.259575095Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 12 23:06:22.259666 containerd[1553]: time="2025-09-12T23:06:22.259590695Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 12 23:06:22.259666 containerd[1553]: time="2025-09-12T23:06:22.259610965Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 12 23:06:22.259872 containerd[1553]: time="2025-09-12T23:06:22.259846097Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 12 23:06:22.259911 containerd[1553]: time="2025-09-12T23:06:22.259888656Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 12 23:06:22.259939 containerd[1553]: time="2025-09-12T23:06:22.259908332Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 12 23:06:22.259978 containerd[1553]: time="2025-09-12T23:06:22.259937482Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 12 23:06:22.259978 containerd[1553]: time="2025-09-12T23:06:22.259954168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 12 23:06:22.259978 containerd[1553]: time="2025-09-12T23:06:22.259967872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 12 23:06:22.260069 containerd[1553]: time="2025-09-12T23:06:22.259982530Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 12 23:06:22.260069 
containerd[1553]: time="2025-09-12T23:06:22.260012663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 23:06:22.260069 containerd[1553]: time="2025-09-12T23:06:22.260030731Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 23:06:22.260069 containerd[1553]: time="2025-09-12T23:06:22.260046229Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 12 23:06:22.260176 containerd[1553]: time="2025-09-12T23:06:22.260073392Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 23:06:22.260259 containerd[1553]: time="2025-09-12T23:06:22.260232626Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 23:06:22.260306 containerd[1553]: time="2025-09-12T23:06:22.260263917Z" level=info msg="Start snapshots syncer" Sep 12 23:06:22.260335 containerd[1553]: time="2025-09-12T23:06:22.260307253Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 23:06:22.260784 containerd[1553]: time="2025-09-12T23:06:22.260723394Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 23:06:22.260998 containerd[1553]: time="2025-09-12T23:06:22.260820649Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 23:06:22.260998 containerd[1553]: time="2025-09-12T23:06:22.260939802Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 23:06:22.261109 containerd[1553]: time="2025-09-12T23:06:22.261083323Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 23:06:22.261141 containerd[1553]: time="2025-09-12T23:06:22.261115720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 23:06:22.261141 containerd[1553]: time="2025-09-12T23:06:22.261132836Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 23:06:22.261191 containerd[1553]: time="2025-09-12T23:06:22.261146674Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 23:06:22.261191 containerd[1553]: time="2025-09-12T23:06:22.261161669Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 23:06:22.261191 containerd[1553]: time="2025-09-12T23:06:22.261177207Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 23:06:22.261265 containerd[1553]: time="2025-09-12T23:06:22.261195767Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 23:06:22.261265 containerd[1553]: time="2025-09-12T23:06:22.261228053Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 23:06:22.261265 containerd[1553]: time="2025-09-12T23:06:22.261243047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 23:06:22.261265 containerd[1553]: time="2025-09-12T23:06:22.261257162Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 23:06:22.261373 containerd[1553]: time="2025-09-12T23:06:22.261308970Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 23:06:22.261373 containerd[1553]: time="2025-09-12T23:06:22.261334556Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 23:06:22.261373 containerd[1553]: time="2025-09-12T23:06:22.261347758Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 23:06:22.261373 containerd[1553]: time="2025-09-12T23:06:22.261360654Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 23:06:22.261373 containerd[1553]: time="2025-09-12T23:06:22.261372136Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 23:06:22.261517 containerd[1553]: time="2025-09-12T23:06:22.261386394Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 23:06:22.261517 containerd[1553]: time="2025-09-12T23:06:22.261402321Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 23:06:22.261517 containerd[1553]: time="2025-09-12T23:06:22.261436379Z" level=info msg="runtime interface created" Sep 12 23:06:22.261517 containerd[1553]: time="2025-09-12T23:06:22.261444921Z" level=info msg="created NRI interface" Sep 12 23:06:22.261517 containerd[1553]: time="2025-09-12T23:06:22.261456588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 23:06:22.261517 containerd[1553]: time="2025-09-12T23:06:22.261484652Z" level=info msg="Connect containerd service" Sep 12 23:06:22.261756 containerd[1553]: time="2025-09-12T23:06:22.261524281Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 23:06:22.262748 
containerd[1553]: time="2025-09-12T23:06:22.262716122Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 23:06:22.263147 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 23:06:22.266486 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 23:06:22.271348 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 23:06:22.273190 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 23:06:22.503615 containerd[1553]: time="2025-09-12T23:06:22.503419967Z" level=info msg="Start subscribing containerd event" Sep 12 23:06:22.503741 containerd[1553]: time="2025-09-12T23:06:22.503520745Z" level=info msg="Start recovering state" Sep 12 23:06:22.503845 containerd[1553]: time="2025-09-12T23:06:22.503812878Z" level=info msg="Start event monitor" Sep 12 23:06:22.503883 containerd[1553]: time="2025-09-12T23:06:22.503819986Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 23:06:22.503914 containerd[1553]: time="2025-09-12T23:06:22.503856941Z" level=info msg="Start cni network conf syncer for default" Sep 12 23:06:22.503991 containerd[1553]: time="2025-09-12T23:06:22.503954175Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 23:06:22.504037 containerd[1553]: time="2025-09-12T23:06:22.503966405Z" level=info msg="Start streaming server" Sep 12 23:06:22.504092 containerd[1553]: time="2025-09-12T23:06:22.504056407Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 23:06:22.504092 containerd[1553]: time="2025-09-12T23:06:22.504066763Z" level=info msg="runtime interface starting up..." Sep 12 23:06:22.504092 containerd[1553]: time="2025-09-12T23:06:22.504074128Z" level=info msg="starting plugins..." 
Sep 12 23:06:22.504170 containerd[1553]: time="2025-09-12T23:06:22.504097808Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 23:06:22.505007 containerd[1553]: time="2025-09-12T23:06:22.504282946Z" level=info msg="containerd successfully booted in 0.335356s" Sep 12 23:06:22.504470 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 23:06:22.582986 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 23:06:22.585778 systemd[1]: Started sshd@0-10.0.0.139:22-10.0.0.1:60118.service - OpenSSH per-connection server daemon (10.0.0.1:60118). Sep 12 23:06:22.686993 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 60118 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:06:22.688892 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:06:22.696860 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 23:06:22.699341 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 23:06:22.708067 systemd-logind[1543]: New session 1 of user core. Sep 12 23:06:22.781701 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 23:06:22.787994 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 23:06:22.808962 (systemd)[1650]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 23:06:22.812134 systemd-logind[1543]: New session c1 of user core. Sep 12 23:06:22.870285 systemd-networkd[1465]: eth0: Gained IPv6LL Sep 12 23:06:22.875456 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 23:06:22.877974 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 23:06:22.881763 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
Sep 12 23:06:22.896896 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:06:22.900835 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 23:06:22.941849 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 23:06:22.942243 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 23:06:22.944460 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 23:06:22.946855 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 23:06:23.075685 systemd[1650]: Queued start job for default target default.target. Sep 12 23:06:23.150488 systemd[1650]: Created slice app.slice - User Application Slice. Sep 12 23:06:23.150517 systemd[1650]: Reached target paths.target - Paths. Sep 12 23:06:23.150590 systemd[1650]: Reached target timers.target - Timers. Sep 12 23:06:23.153302 systemd[1650]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 23:06:23.170238 systemd[1650]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 23:06:23.170452 systemd[1650]: Reached target sockets.target - Sockets. Sep 12 23:06:23.170516 systemd[1650]: Reached target basic.target - Basic System. Sep 12 23:06:23.170601 systemd[1650]: Reached target default.target - Main User Target. Sep 12 23:06:23.170655 systemd[1650]: Startup finished in 298ms. Sep 12 23:06:23.171075 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 23:06:23.188975 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 23:06:23.264993 systemd[1]: Started sshd@1-10.0.0.139:22-10.0.0.1:60132.service - OpenSSH per-connection server daemon (10.0.0.1:60132). 
Sep 12 23:06:23.341250 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 60132 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:06:23.343545 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:06:23.349394 systemd-logind[1543]: New session 2 of user core. Sep 12 23:06:23.358907 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 23:06:23.420290 sshd[1682]: Connection closed by 10.0.0.1 port 60132 Sep 12 23:06:23.420850 sshd-session[1679]: pam_unix(sshd:session): session closed for user core Sep 12 23:06:23.441655 systemd[1]: sshd@1-10.0.0.139:22-10.0.0.1:60132.service: Deactivated successfully. Sep 12 23:06:23.446585 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 23:06:23.447495 systemd-logind[1543]: Session 2 logged out. Waiting for processes to exit. Sep 12 23:06:23.452289 systemd[1]: Started sshd@2-10.0.0.139:22-10.0.0.1:60148.service - OpenSSH per-connection server daemon (10.0.0.1:60148). Sep 12 23:06:23.454789 systemd-logind[1543]: Removed session 2. Sep 12 23:06:23.517854 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 60148 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:06:23.519893 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:06:23.525594 systemd-logind[1543]: New session 3 of user core. Sep 12 23:06:23.533777 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 23:06:23.595051 sshd[1691]: Connection closed by 10.0.0.1 port 60148 Sep 12 23:06:23.595605 sshd-session[1688]: pam_unix(sshd:session): session closed for user core Sep 12 23:06:23.602351 systemd[1]: sshd@2-10.0.0.139:22-10.0.0.1:60148.service: Deactivated successfully. Sep 12 23:06:23.604690 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 23:06:23.605571 systemd-logind[1543]: Session 3 logged out. Waiting for processes to exit. 
Sep 12 23:06:23.607188 systemd-logind[1543]: Removed session 3. Sep 12 23:06:24.618042 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:06:24.620207 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 23:06:24.621768 systemd[1]: Startup finished in 4.941s (kernel) + 10.103s (initrd) + 6.878s (userspace) = 21.924s. Sep 12 23:06:24.628245 (kubelet)[1701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:06:25.903439 kubelet[1701]: E0912 23:06:25.903215 1701 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:06:25.907445 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:06:25.907735 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:06:25.908219 systemd[1]: kubelet.service: Consumed 2.511s CPU time, 266.4M memory peak. Sep 12 23:06:33.719102 systemd[1]: Started sshd@3-10.0.0.139:22-10.0.0.1:35914.service - OpenSSH per-connection server daemon (10.0.0.1:35914). Sep 12 23:06:33.784878 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 35914 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:06:33.787010 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:06:33.792139 systemd-logind[1543]: New session 4 of user core. Sep 12 23:06:33.801688 systemd[1]: Started session-4.scope - Session 4 of User core. 
Sep 12 23:06:33.859699 sshd[1717]: Connection closed by 10.0.0.1 port 35914 Sep 12 23:06:33.860185 sshd-session[1714]: pam_unix(sshd:session): session closed for user core Sep 12 23:06:33.874716 systemd[1]: sshd@3-10.0.0.139:22-10.0.0.1:35914.service: Deactivated successfully. Sep 12 23:06:33.877413 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 23:06:33.878459 systemd-logind[1543]: Session 4 logged out. Waiting for processes to exit. Sep 12 23:06:33.882808 systemd[1]: Started sshd@4-10.0.0.139:22-10.0.0.1:35928.service - OpenSSH per-connection server daemon (10.0.0.1:35928). Sep 12 23:06:33.883558 systemd-logind[1543]: Removed session 4. Sep 12 23:06:33.961007 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 35928 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:06:33.963109 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:06:33.968503 systemd-logind[1543]: New session 5 of user core. Sep 12 23:06:33.981776 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 23:06:34.033560 sshd[1726]: Connection closed by 10.0.0.1 port 35928 Sep 12 23:06:34.034038 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Sep 12 23:06:34.046575 systemd[1]: sshd@4-10.0.0.139:22-10.0.0.1:35928.service: Deactivated successfully. Sep 12 23:06:34.048643 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 23:06:34.049359 systemd-logind[1543]: Session 5 logged out. Waiting for processes to exit. Sep 12 23:06:34.052045 systemd[1]: Started sshd@5-10.0.0.139:22-10.0.0.1:35936.service - OpenSSH per-connection server daemon (10.0.0.1:35936). Sep 12 23:06:34.052971 systemd-logind[1543]: Removed session 5. 
Sep 12 23:06:34.109371 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 35936 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:06:34.110922 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:06:34.115927 systemd-logind[1543]: New session 6 of user core. Sep 12 23:06:34.125686 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 23:06:34.180226 sshd[1735]: Connection closed by 10.0.0.1 port 35936 Sep 12 23:06:34.180722 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Sep 12 23:06:34.193230 systemd[1]: sshd@5-10.0.0.139:22-10.0.0.1:35936.service: Deactivated successfully. Sep 12 23:06:34.195190 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 23:06:34.196028 systemd-logind[1543]: Session 6 logged out. Waiting for processes to exit. Sep 12 23:06:34.198934 systemd[1]: Started sshd@6-10.0.0.139:22-10.0.0.1:35948.service - OpenSSH per-connection server daemon (10.0.0.1:35948). Sep 12 23:06:34.199480 systemd-logind[1543]: Removed session 6. Sep 12 23:06:34.259699 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 35948 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:06:34.261344 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:06:34.266042 systemd-logind[1543]: New session 7 of user core. Sep 12 23:06:34.275712 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 12 23:06:34.334928 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 23:06:34.335247 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:06:34.351710 sudo[1746]: pam_unix(sudo:session): session closed for user root Sep 12 23:06:34.353915 sshd[1745]: Connection closed by 10.0.0.1 port 35948 Sep 12 23:06:34.354438 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Sep 12 23:06:34.364979 systemd[1]: sshd@6-10.0.0.139:22-10.0.0.1:35948.service: Deactivated successfully. Sep 12 23:06:34.367004 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 23:06:34.367928 systemd-logind[1543]: Session 7 logged out. Waiting for processes to exit. Sep 12 23:06:34.371037 systemd[1]: Started sshd@7-10.0.0.139:22-10.0.0.1:35950.service - OpenSSH per-connection server daemon (10.0.0.1:35950). Sep 12 23:06:34.371795 systemd-logind[1543]: Removed session 7. Sep 12 23:06:34.431572 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 35950 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:06:34.433665 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:06:34.439316 systemd-logind[1543]: New session 8 of user core. Sep 12 23:06:34.448782 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 12 23:06:34.505436 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 23:06:34.505817 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:06:34.513053 sudo[1757]: pam_unix(sudo:session): session closed for user root Sep 12 23:06:34.521307 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 23:06:34.521715 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:06:34.532747 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 23:06:34.586024 augenrules[1779]: No rules Sep 12 23:06:34.588203 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 23:06:34.588527 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 23:06:34.589800 sudo[1756]: pam_unix(sudo:session): session closed for user root Sep 12 23:06:34.591552 sshd[1755]: Connection closed by 10.0.0.1 port 35950 Sep 12 23:06:34.591910 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Sep 12 23:06:34.611682 systemd[1]: sshd@7-10.0.0.139:22-10.0.0.1:35950.service: Deactivated successfully. Sep 12 23:06:34.613681 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 23:06:34.614441 systemd-logind[1543]: Session 8 logged out. Waiting for processes to exit. Sep 12 23:06:34.617516 systemd[1]: Started sshd@8-10.0.0.139:22-10.0.0.1:35952.service - OpenSSH per-connection server daemon (10.0.0.1:35952). Sep 12 23:06:34.618627 systemd-logind[1543]: Removed session 8. Sep 12 23:06:34.680303 sshd[1788]: Accepted publickey for core from 10.0.0.1 port 35952 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:06:34.682310 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:06:34.687038 systemd-logind[1543]: New session 9 of user core. 
Sep 12 23:06:34.697668 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 23:06:34.754389 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 23:06:34.754836 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:06:35.800681 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 23:06:35.831293 (dockerd)[1812]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 23:06:36.158275 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 23:06:36.160912 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:06:36.514071 dockerd[1812]: time="2025-09-12T23:06:36.513932085Z" level=info msg="Starting up" Sep 12 23:06:36.519983 dockerd[1812]: time="2025-09-12T23:06:36.519899986Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 23:06:36.534271 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 23:06:36.540501 dockerd[1812]: time="2025-09-12T23:06:36.540348938Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 12 23:06:36.550146 (kubelet)[1837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:06:37.037036 kubelet[1837]: E0912 23:06:37.036668 1837 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:06:37.045525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:06:37.045817 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:06:37.046355 systemd[1]: kubelet.service: Consumed 428ms CPU time, 111M memory peak. Sep 12 23:06:37.253950 dockerd[1812]: time="2025-09-12T23:06:37.253841902Z" level=info msg="Loading containers: start." Sep 12 23:06:37.267567 kernel: Initializing XFRM netlink socket Sep 12 23:06:37.571581 systemd-networkd[1465]: docker0: Link UP Sep 12 23:06:37.576613 dockerd[1812]: time="2025-09-12T23:06:37.576544453Z" level=info msg="Loading containers: done." 
Sep 12 23:06:37.598765 dockerd[1812]: time="2025-09-12T23:06:37.598675149Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 23:06:37.598948 dockerd[1812]: time="2025-09-12T23:06:37.598849218Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 12 23:06:37.599024 dockerd[1812]: time="2025-09-12T23:06:37.598996828Z" level=info msg="Initializing buildkit" Sep 12 23:06:37.634899 dockerd[1812]: time="2025-09-12T23:06:37.634832625Z" level=info msg="Completed buildkit initialization" Sep 12 23:06:37.641607 dockerd[1812]: time="2025-09-12T23:06:37.641556231Z" level=info msg="Daemon has completed initialization" Sep 12 23:06:37.641776 dockerd[1812]: time="2025-09-12T23:06:37.641695983Z" level=info msg="API listen on /run/docker.sock" Sep 12 23:06:37.641928 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 23:06:38.589976 containerd[1553]: time="2025-09-12T23:06:38.589907314Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 12 23:06:40.345261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2101451050.mount: Deactivated successfully. 
Sep 12 23:06:42.451722 containerd[1553]: time="2025-09-12T23:06:42.451626803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:06:42.452731 containerd[1553]: time="2025-09-12T23:06:42.452678687Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916"
Sep 12 23:06:42.454404 containerd[1553]: time="2025-09-12T23:06:42.454333963Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:06:42.458707 containerd[1553]: time="2025-09-12T23:06:42.458632660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:06:42.460459 containerd[1553]: time="2025-09-12T23:06:42.460391259Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 3.870408677s"
Sep 12 23:06:42.460459 containerd[1553]: time="2025-09-12T23:06:42.460455750Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Sep 12 23:06:42.461310 containerd[1553]: time="2025-09-12T23:06:42.461199440Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Sep 12 23:06:43.786239 containerd[1553]: time="2025-09-12T23:06:43.786127664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:06:43.787492 containerd[1553]: time="2025-09-12T23:06:43.787433625Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027"
Sep 12 23:06:43.789179 containerd[1553]: time="2025-09-12T23:06:43.788940103Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:06:43.792102 containerd[1553]: time="2025-09-12T23:06:43.792051024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:06:43.794693 containerd[1553]: time="2025-09-12T23:06:43.794603855Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.333295826s"
Sep 12 23:06:43.794693 containerd[1553]: time="2025-09-12T23:06:43.794690314Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Sep 12 23:06:43.795259 containerd[1553]: time="2025-09-12T23:06:43.795180293Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 12 23:06:45.673850 containerd[1553]: time="2025-09-12T23:06:45.673783651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:06:45.674750 containerd[1553]: time="2025-09-12T23:06:45.674719123Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289"
Sep 12 23:06:45.680391 containerd[1553]: time="2025-09-12T23:06:45.680249879Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:06:45.685471 containerd[1553]: time="2025-09-12T23:06:45.685382919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:06:45.687055 containerd[1553]: time="2025-09-12T23:06:45.687000182Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.891762132s"
Sep 12 23:06:45.687122 containerd[1553]: time="2025-09-12T23:06:45.687059946Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Sep 12 23:06:45.688496 containerd[1553]: time="2025-09-12T23:06:45.688438035Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Sep 12 23:06:47.179203 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 12 23:06:47.181835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:06:47.603183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:06:47.616009 (kubelet)[2125]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 23:06:47.686400 kubelet[2125]: E0912 23:06:47.686276 2125 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 23:06:47.692163 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 23:06:47.692413 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 23:06:47.693285 systemd[1]: kubelet.service: Consumed 421ms CPU time, 110.5M memory peak.
Sep 12 23:06:47.919626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount649213104.mount: Deactivated successfully.
Sep 12 23:06:49.343458 containerd[1553]: time="2025-09-12T23:06:49.343388912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:06:49.371376 containerd[1553]: time="2025-09-12T23:06:49.371241471Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206"
Sep 12 23:06:49.419567 containerd[1553]: time="2025-09-12T23:06:49.419469199Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:06:49.457980 containerd[1553]: time="2025-09-12T23:06:49.457865699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:06:49.458914 containerd[1553]: time="2025-09-12T23:06:49.458833400Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 3.770351242s"
Sep 12 23:06:49.458977 containerd[1553]: time="2025-09-12T23:06:49.458933058Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Sep 12 23:06:49.459707 containerd[1553]: time="2025-09-12T23:06:49.459667760Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 12 23:06:50.703101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3311346010.mount: Deactivated successfully.
Sep 12 23:06:52.454942 containerd[1553]: time="2025-09-12T23:06:52.454863123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:06:52.456018 containerd[1553]: time="2025-09-12T23:06:52.455975350Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 12 23:06:52.457395 containerd[1553]: time="2025-09-12T23:06:52.457344744Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:06:52.463564 containerd[1553]: time="2025-09-12T23:06:52.461153194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:06:52.463564 containerd[1553]: time="2025-09-12T23:06:52.463365872Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.003645094s"
Sep 12 23:06:52.463564 containerd[1553]: time="2025-09-12T23:06:52.463429115Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 12 23:06:52.464783 containerd[1553]: time="2025-09-12T23:06:52.464727717Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 23:06:53.260702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2783722964.mount: Deactivated successfully.
Sep 12 23:06:53.267036 containerd[1553]: time="2025-09-12T23:06:53.266956494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 23:06:53.267710 containerd[1553]: time="2025-09-12T23:06:53.267649913Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 12 23:06:53.269171 containerd[1553]: time="2025-09-12T23:06:53.269138193Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 23:06:53.271632 containerd[1553]: time="2025-09-12T23:06:53.271574582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 23:06:53.272543 containerd[1553]: time="2025-09-12T23:06:53.272490960Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 807.720454ms"
Sep 12 23:06:53.272591 containerd[1553]: time="2025-09-12T23:06:53.272554594Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 12 23:06:53.273198 containerd[1553]: time="2025-09-12T23:06:53.273164924Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 12 23:06:53.780372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2441782900.mount: Deactivated successfully.
Sep 12 23:06:56.181959 containerd[1553]: time="2025-09-12T23:06:56.181867545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:06:56.183282 containerd[1553]: time="2025-09-12T23:06:56.183241623Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Sep 12 23:06:56.184890 containerd[1553]: time="2025-09-12T23:06:56.184813656Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:06:56.189271 containerd[1553]: time="2025-09-12T23:06:56.189180609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:06:56.190517 containerd[1553]: time="2025-09-12T23:06:56.190433763Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.917231184s"
Sep 12 23:06:56.190517 containerd[1553]: time="2025-09-12T23:06:56.190501967Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Sep 12 23:06:57.928981 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 12 23:06:57.930868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:06:58.162062 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:06:58.174900 (kubelet)[2280]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 23:06:58.239231 kubelet[2280]: E0912 23:06:58.239045 2280 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 23:06:58.243970 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 23:06:58.244179 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 23:06:58.244679 systemd[1]: kubelet.service: Consumed 256ms CPU time, 108.4M memory peak.
Sep 12 23:06:58.583649 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:06:58.583913 systemd[1]: kubelet.service: Consumed 256ms CPU time, 108.4M memory peak.
Sep 12 23:06:58.586572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:06:58.613026 systemd[1]: Reload requested from client PID 2295 ('systemctl') (unit session-9.scope)...
Sep 12 23:06:58.613043 systemd[1]: Reloading...
Sep 12 23:06:58.689585 zram_generator::config[2333]: No configuration found.
Sep 12 23:06:59.369362 systemd[1]: Reloading finished in 755 ms.
Sep 12 23:06:59.434449 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 12 23:06:59.434568 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 12 23:06:59.434900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:06:59.434949 systemd[1]: kubelet.service: Consumed 162ms CPU time, 98.3M memory peak.
Sep 12 23:06:59.436652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:06:59.617014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:06:59.622006 (kubelet)[2385]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 23:06:59.672250 kubelet[2385]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 23:06:59.672250 kubelet[2385]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 23:06:59.672250 kubelet[2385]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 23:06:59.672695 kubelet[2385]: I0912 23:06:59.672300 2385 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 23:07:00.233823 kubelet[2385]: I0912 23:07:00.233743 2385 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 12 23:07:00.233823 kubelet[2385]: I0912 23:07:00.233790 2385 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 23:07:00.234205 kubelet[2385]: I0912 23:07:00.234173 2385 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 12 23:07:00.262935 kubelet[2385]: I0912 23:07:00.262877 2385 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 23:07:00.267218 kubelet[2385]: E0912 23:07:00.266915 2385 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:07:00.275976 kubelet[2385]: I0912 23:07:00.275945 2385 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 12 23:07:00.284510 kubelet[2385]: I0912 23:07:00.284453 2385 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 23:07:00.286180 kubelet[2385]: I0912 23:07:00.286121 2385 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 23:07:00.286406 kubelet[2385]: I0912 23:07:00.286169 2385 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 23:07:00.286505 kubelet[2385]: I0912 23:07:00.286420 2385 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 23:07:00.286505 kubelet[2385]: I0912 23:07:00.286430 2385 container_manager_linux.go:304] "Creating device plugin manager"
Sep 12 23:07:00.286686 kubelet[2385]: I0912 23:07:00.286661 2385 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 23:07:00.290072 kubelet[2385]: I0912 23:07:00.289831 2385 kubelet.go:446] "Attempting to sync node with API server"
Sep 12 23:07:00.290072 kubelet[2385]: I0912 23:07:00.289873 2385 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 23:07:00.290072 kubelet[2385]: I0912 23:07:00.289938 2385 kubelet.go:352] "Adding apiserver pod source"
Sep 12 23:07:00.290072 kubelet[2385]: I0912 23:07:00.289970 2385 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 23:07:00.294562 kubelet[2385]: I0912 23:07:00.294499 2385 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 12 23:07:00.295191 kubelet[2385]: I0912 23:07:00.295163 2385 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 23:07:00.297030 kubelet[2385]: W0912 23:07:00.296947 2385 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused
Sep 12 23:07:00.297175 kubelet[2385]: E0912 23:07:00.297147 2385 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:07:00.297264 kubelet[2385]: W0912 23:07:00.296946 2385 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused
Sep 12 23:07:00.297386 kubelet[2385]: E0912 23:07:00.297349 2385 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:07:00.297386 kubelet[2385]: W0912 23:07:00.297188 2385 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 23:07:00.299889 kubelet[2385]: I0912 23:07:00.299849 2385 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 23:07:00.299968 kubelet[2385]: I0912 23:07:00.299914 2385 server.go:1287] "Started kubelet"
Sep 12 23:07:00.300069 kubelet[2385]: I0912 23:07:00.300033 2385 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 23:07:00.301547 kubelet[2385]: I0912 23:07:00.301483 2385 server.go:479] "Adding debug handlers to kubelet server"
Sep 12 23:07:00.302144 kubelet[2385]: I0912 23:07:00.301998 2385 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 23:07:00.302586 kubelet[2385]: I0912 23:07:00.302561 2385 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 23:07:00.305454 kubelet[2385]: I0912 23:07:00.305414 2385 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 23:07:00.306916 kubelet[2385]: I0912 23:07:00.306725 2385 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 23:07:00.307397 kubelet[2385]: E0912 23:07:00.307364 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 23:07:00.307468 kubelet[2385]: I0912 23:07:00.307407 2385 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 23:07:00.307886 kubelet[2385]: I0912 23:07:00.307841 2385 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 23:07:00.307999 kubelet[2385]: I0912 23:07:00.307979 2385 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 23:07:00.308622 kubelet[2385]: E0912 23:07:00.308591 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="200ms"
Sep 12 23:07:00.308787 kubelet[2385]: W0912 23:07:00.308694 2385 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused
Sep 12 23:07:00.308787 kubelet[2385]: E0912 23:07:00.308748 2385 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:07:00.313219 kubelet[2385]: E0912 23:07:00.313161 2385 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 23:07:00.313219 kubelet[2385]: E0912 23:07:00.308941 2385 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.139:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.139:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864ab97292283e4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 23:07:00.2998753 +0000 UTC m=+0.673973795,LastTimestamp:2025-09-12 23:07:00.2998753 +0000 UTC m=+0.673973795,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 12 23:07:00.321250 kubelet[2385]: I0912 23:07:00.320159 2385 factory.go:221] Registration of the systemd container factory successfully
Sep 12 23:07:00.321250 kubelet[2385]: I0912 23:07:00.320293 2385 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 23:07:00.322224 kubelet[2385]: I0912 23:07:00.322201 2385 factory.go:221] Registration of the containerd container factory successfully
Sep 12 23:07:00.339765 kubelet[2385]: I0912 23:07:00.339671 2385 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 12 23:07:00.341727 kubelet[2385]: I0912 23:07:00.341664 2385 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 12 23:07:00.341727 kubelet[2385]: I0912 23:07:00.341718 2385 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 12 23:07:00.341884 kubelet[2385]: I0912 23:07:00.341763 2385 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 23:07:00.341884 kubelet[2385]: I0912 23:07:00.341776 2385 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 12 23:07:00.341884 kubelet[2385]: E0912 23:07:00.341851 2385 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 23:07:00.343340 kubelet[2385]: W0912 23:07:00.343308 2385 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused
Sep 12 23:07:00.343401 kubelet[2385]: E0912 23:07:00.343353 2385 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:07:00.344122 kubelet[2385]: I0912 23:07:00.343624 2385 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 23:07:00.344122 kubelet[2385]: I0912 23:07:00.343648 2385 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 23:07:00.344122 kubelet[2385]: I0912 23:07:00.343679 2385 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 23:07:00.407919 kubelet[2385]: E0912 23:07:00.407787 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 23:07:00.442434 kubelet[2385]: E0912 23:07:00.442336 2385 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 23:07:00.508938 kubelet[2385]: E0912 23:07:00.508724 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 23:07:00.509398 kubelet[2385]: E0912 23:07:00.509337 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="400ms"
Sep 12 23:07:00.608998 kubelet[2385]: E0912 23:07:00.608886 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 23:07:00.643252 kubelet[2385]: E0912 23:07:00.643164 2385 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 23:07:00.709657 kubelet[2385]: E0912 23:07:00.709586 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 23:07:00.810575 kubelet[2385]: E0912 23:07:00.810328 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 23:07:00.910558 kubelet[2385]: E0912 23:07:00.910468 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 23:07:00.910736 kubelet[2385]: E0912 23:07:00.910576 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="800ms"
Sep 12 23:07:01.011298 kubelet[2385]: E0912 23:07:01.011192 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 23:07:01.043821 kubelet[2385]: E0912 23:07:01.043744 2385 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 23:07:01.112514 kubelet[2385]: E0912 23:07:01.112333 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 23:07:01.213310 kubelet[2385]: E0912 23:07:01.213211 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 23:07:01.276874 kubelet[2385]: W0912 23:07:01.276774 2385 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused
Sep 12 23:07:01.277033 kubelet[2385]: E0912 23:07:01.276883 2385 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:07:01.313830 kubelet[2385]: E0912 23:07:01.313743 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 23:07:01.322656 kubelet[2385]: W0912 23:07:01.322602 2385 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused
Sep 12 23:07:01.322758 kubelet[2385]: E0912 23:07:01.322665 2385 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:07:01.330346 kubelet[2385]: W0912 23:07:01.330277 2385 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused
Sep 12 23:07:01.330346 kubelet[2385]: E0912 23:07:01.330340 2385 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:07:01.414651 kubelet[2385]: E0912 23:07:01.414470 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 23:07:01.487487 kubelet[2385]: I0912 23:07:01.487397 2385 policy_none.go:49] "None policy: Start"
Sep 12 23:07:01.487487 kubelet[2385]: I0912 23:07:01.487472 2385 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 23:07:01.487487 kubelet[2385]: I0912 23:07:01.487497 2385 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 23:07:01.515264 kubelet[2385]: E0912 23:07:01.515200 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 23:07:01.519181 kubelet[2385]: E0912 23:07:01.519028 2385 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.139:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.139:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864ab97292283e4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 23:07:00.2998753 +0000 UTC m=+0.673973795,LastTimestamp:2025-09-12 23:07:00.2998753 +0000 UTC m=+0.673973795,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 12 23:07:01.532715 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 12 23:07:01.551667 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 12 23:07:01.555466 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 12 23:07:01.575038 kubelet[2385]: I0912 23:07:01.574841 2385 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 23:07:01.575414 kubelet[2385]: I0912 23:07:01.575149 2385 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 23:07:01.575414 kubelet[2385]: I0912 23:07:01.575175 2385 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 23:07:01.575517 kubelet[2385]: I0912 23:07:01.575465 2385 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 23:07:01.576607 kubelet[2385]: E0912 23:07:01.576580 2385 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring."
err="no imagefs label for configured runtime" Sep 12 23:07:01.576680 kubelet[2385]: E0912 23:07:01.576624 2385 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 23:07:01.677560 kubelet[2385]: I0912 23:07:01.677176 2385 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 23:07:01.677723 kubelet[2385]: E0912 23:07:01.677698 2385 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Sep 12 23:07:01.712083 kubelet[2385]: E0912 23:07:01.712015 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="1.6s" Sep 12 23:07:01.718692 kubelet[2385]: W0912 23:07:01.718627 2385 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Sep 12 23:07:01.718770 kubelet[2385]: E0912 23:07:01.718692 2385 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:07:01.854170 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. 
Sep 12 23:07:01.876961 kubelet[2385]: E0912 23:07:01.876910 2385 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 23:07:01.878940 kubelet[2385]: I0912 23:07:01.878912 2385 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 23:07:01.879433 kubelet[2385]: E0912 23:07:01.879383 2385 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Sep 12 23:07:01.882416 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. Sep 12 23:07:01.884595 kubelet[2385]: E0912 23:07:01.884569 2385 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 23:07:01.886684 systemd[1]: Created slice kubepods-burstable-podeb2be56280e3a58d39a71cfbff904666.slice - libcontainer container kubepods-burstable-podeb2be56280e3a58d39a71cfbff904666.slice. 
Sep 12 23:07:01.888417 kubelet[2385]: E0912 23:07:01.888389 2385 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 23:07:01.919322 kubelet[2385]: I0912 23:07:01.919226 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:07:01.919322 kubelet[2385]: I0912 23:07:01.919307 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb2be56280e3a58d39a71cfbff904666-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"eb2be56280e3a58d39a71cfbff904666\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:07:01.919322 kubelet[2385]: I0912 23:07:01.919335 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb2be56280e3a58d39a71cfbff904666-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"eb2be56280e3a58d39a71cfbff904666\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:07:01.919627 kubelet[2385]: I0912 23:07:01.919361 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:07:01.919627 kubelet[2385]: I0912 23:07:01.919389 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:07:01.919627 kubelet[2385]: I0912 23:07:01.919484 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:07:01.919627 kubelet[2385]: I0912 23:07:01.919568 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:07:01.919627 kubelet[2385]: I0912 23:07:01.919611 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 12 23:07:01.919761 kubelet[2385]: I0912 23:07:01.919642 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb2be56280e3a58d39a71cfbff904666-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"eb2be56280e3a58d39a71cfbff904666\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:07:02.178232 kubelet[2385]: E0912 23:07:02.178108 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:02.179157 containerd[1553]: time="2025-09-12T23:07:02.179095023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}" Sep 12 23:07:02.185639 kubelet[2385]: E0912 23:07:02.185572 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:02.186211 containerd[1553]: time="2025-09-12T23:07:02.186164777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}" Sep 12 23:07:02.189666 kubelet[2385]: E0912 23:07:02.189627 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:02.190237 containerd[1553]: time="2025-09-12T23:07:02.190176853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:eb2be56280e3a58d39a71cfbff904666,Namespace:kube-system,Attempt:0,}" Sep 12 23:07:02.282116 kubelet[2385]: I0912 23:07:02.282052 2385 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 23:07:02.282612 kubelet[2385]: E0912 23:07:02.282518 2385 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Sep 12 23:07:02.445293 kubelet[2385]: E0912 23:07:02.445137 2385 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.0.0.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:07:03.084293 kubelet[2385]: I0912 23:07:03.084231 2385 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 23:07:03.133900 kubelet[2385]: E0912 23:07:03.133781 2385 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Sep 12 23:07:03.154233 containerd[1553]: time="2025-09-12T23:07:03.154145281Z" level=info msg="connecting to shim c41bd23bff67ebf43d675a101a9e3f86e77ac3e4ee69811e205ab24ab1b03f11" address="unix:///run/containerd/s/7ef190fe7b9f663030314df05ae48db46cb6fadf2d32ef2055df1f55cc3d9f84" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:07:03.154866 containerd[1553]: time="2025-09-12T23:07:03.154775968Z" level=info msg="connecting to shim 5532e198081c326524ea523fcaad16b56b790e4108ff43854812346986b27900" address="unix:///run/containerd/s/d37e29069f4a9933ea9688ffec17f0d65941fcb21b46ba45318185554816e614" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:07:03.159109 containerd[1553]: time="2025-09-12T23:07:03.159028731Z" level=info msg="connecting to shim 0c6f3a72e43ec3ad5f400c070f0359cb504519699e8b18a4de41ed32887e8cd6" address="unix:///run/containerd/s/629a807afa6d681d76c3c4f6985d87a3c2d005f04a2d11051f5dc1b94002fdfd" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:07:03.314210 kubelet[2385]: E0912 23:07:03.314153 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="3.2s" Sep 12 23:07:03.318205 systemd[1]: Started cri-containerd-0c6f3a72e43ec3ad5f400c070f0359cb504519699e8b18a4de41ed32887e8cd6.scope - libcontainer 
container 0c6f3a72e43ec3ad5f400c070f0359cb504519699e8b18a4de41ed32887e8cd6. Sep 12 23:07:03.320913 systemd[1]: Started cri-containerd-c41bd23bff67ebf43d675a101a9e3f86e77ac3e4ee69811e205ab24ab1b03f11.scope - libcontainer container c41bd23bff67ebf43d675a101a9e3f86e77ac3e4ee69811e205ab24ab1b03f11. Sep 12 23:07:03.379805 systemd[1]: Started cri-containerd-5532e198081c326524ea523fcaad16b56b790e4108ff43854812346986b27900.scope - libcontainer container 5532e198081c326524ea523fcaad16b56b790e4108ff43854812346986b27900. Sep 12 23:07:03.457357 containerd[1553]: time="2025-09-12T23:07:03.457196609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c6f3a72e43ec3ad5f400c070f0359cb504519699e8b18a4de41ed32887e8cd6\"" Sep 12 23:07:03.461594 kubelet[2385]: E0912 23:07:03.461199 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:03.461875 containerd[1553]: time="2025-09-12T23:07:03.461844094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c41bd23bff67ebf43d675a101a9e3f86e77ac3e4ee69811e205ab24ab1b03f11\"" Sep 12 23:07:03.463516 kubelet[2385]: E0912 23:07:03.463492 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:03.465705 containerd[1553]: time="2025-09-12T23:07:03.465671841Z" level=info msg="CreateContainer within sandbox \"0c6f3a72e43ec3ad5f400c070f0359cb504519699e8b18a4de41ed32887e8cd6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 23:07:03.466550 containerd[1553]: time="2025-09-12T23:07:03.465915150Z" level=info 
msg="CreateContainer within sandbox \"c41bd23bff67ebf43d675a101a9e3f86e77ac3e4ee69811e205ab24ab1b03f11\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 23:07:03.478739 containerd[1553]: time="2025-09-12T23:07:03.478682410Z" level=info msg="Container 536bae9ff25bfa14c4d6a5012f852bde4c6916f8f10c15944bd4f3a6559fcc07: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:07:03.487526 containerd[1553]: time="2025-09-12T23:07:03.487443623Z" level=info msg="Container e0758010090ea3350b69ab127e4f94694794536159a42045d83859418b05d464: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:07:03.494934 containerd[1553]: time="2025-09-12T23:07:03.494858844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:eb2be56280e3a58d39a71cfbff904666,Namespace:kube-system,Attempt:0,} returns sandbox id \"5532e198081c326524ea523fcaad16b56b790e4108ff43854812346986b27900\"" Sep 12 23:07:03.495894 kubelet[2385]: E0912 23:07:03.495860 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:03.498105 containerd[1553]: time="2025-09-12T23:07:03.498036494Z" level=info msg="CreateContainer within sandbox \"5532e198081c326524ea523fcaad16b56b790e4108ff43854812346986b27900\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 23:07:03.506012 containerd[1553]: time="2025-09-12T23:07:03.505946941Z" level=info msg="CreateContainer within sandbox \"c41bd23bff67ebf43d675a101a9e3f86e77ac3e4ee69811e205ab24ab1b03f11\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"536bae9ff25bfa14c4d6a5012f852bde4c6916f8f10c15944bd4f3a6559fcc07\"" Sep 12 23:07:03.506889 containerd[1553]: time="2025-09-12T23:07:03.506848675Z" level=info msg="CreateContainer within sandbox \"0c6f3a72e43ec3ad5f400c070f0359cb504519699e8b18a4de41ed32887e8cd6\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e0758010090ea3350b69ab127e4f94694794536159a42045d83859418b05d464\"" Sep 12 23:07:03.507098 containerd[1553]: time="2025-09-12T23:07:03.507055878Z" level=info msg="StartContainer for \"536bae9ff25bfa14c4d6a5012f852bde4c6916f8f10c15944bd4f3a6559fcc07\"" Sep 12 23:07:03.508333 containerd[1553]: time="2025-09-12T23:07:03.508284920Z" level=info msg="connecting to shim 536bae9ff25bfa14c4d6a5012f852bde4c6916f8f10c15944bd4f3a6559fcc07" address="unix:///run/containerd/s/7ef190fe7b9f663030314df05ae48db46cb6fadf2d32ef2055df1f55cc3d9f84" protocol=ttrpc version=3 Sep 12 23:07:03.508971 containerd[1553]: time="2025-09-12T23:07:03.508931589Z" level=info msg="StartContainer for \"e0758010090ea3350b69ab127e4f94694794536159a42045d83859418b05d464\"" Sep 12 23:07:03.510076 containerd[1553]: time="2025-09-12T23:07:03.510045516Z" level=info msg="connecting to shim e0758010090ea3350b69ab127e4f94694794536159a42045d83859418b05d464" address="unix:///run/containerd/s/629a807afa6d681d76c3c4f6985d87a3c2d005f04a2d11051f5dc1b94002fdfd" protocol=ttrpc version=3 Sep 12 23:07:03.513567 containerd[1553]: time="2025-09-12T23:07:03.513372504Z" level=info msg="Container 5823e0cdcdd60ce47c156348e5ed167f11fad6a4d2b89d6b648a31fb8c89f93b: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:07:03.535868 containerd[1553]: time="2025-09-12T23:07:03.535725326Z" level=info msg="CreateContainer within sandbox \"5532e198081c326524ea523fcaad16b56b790e4108ff43854812346986b27900\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5823e0cdcdd60ce47c156348e5ed167f11fad6a4d2b89d6b648a31fb8c89f93b\"" Sep 12 23:07:03.536642 containerd[1553]: time="2025-09-12T23:07:03.536608431Z" level=info msg="StartContainer for \"5823e0cdcdd60ce47c156348e5ed167f11fad6a4d2b89d6b648a31fb8c89f93b\"" Sep 12 23:07:03.537879 systemd[1]: Started cri-containerd-536bae9ff25bfa14c4d6a5012f852bde4c6916f8f10c15944bd4f3a6559fcc07.scope - 
libcontainer container 536bae9ff25bfa14c4d6a5012f852bde4c6916f8f10c15944bd4f3a6559fcc07. Sep 12 23:07:03.540871 containerd[1553]: time="2025-09-12T23:07:03.540817029Z" level=info msg="connecting to shim 5823e0cdcdd60ce47c156348e5ed167f11fad6a4d2b89d6b648a31fb8c89f93b" address="unix:///run/containerd/s/d37e29069f4a9933ea9688ffec17f0d65941fcb21b46ba45318185554816e614" protocol=ttrpc version=3 Sep 12 23:07:03.562824 systemd[1]: Started cri-containerd-e0758010090ea3350b69ab127e4f94694794536159a42045d83859418b05d464.scope - libcontainer container e0758010090ea3350b69ab127e4f94694794536159a42045d83859418b05d464. Sep 12 23:07:03.587725 systemd[1]: Started cri-containerd-5823e0cdcdd60ce47c156348e5ed167f11fad6a4d2b89d6b648a31fb8c89f93b.scope - libcontainer container 5823e0cdcdd60ce47c156348e5ed167f11fad6a4d2b89d6b648a31fb8c89f93b. Sep 12 23:07:03.607466 kubelet[2385]: W0912 23:07:03.607370 2385 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Sep 12 23:07:03.607735 kubelet[2385]: E0912 23:07:03.607601 2385 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:07:03.643587 containerd[1553]: time="2025-09-12T23:07:03.643129903Z" level=info msg="StartContainer for \"536bae9ff25bfa14c4d6a5012f852bde4c6916f8f10c15944bd4f3a6559fcc07\" returns successfully" Sep 12 23:07:03.673826 containerd[1553]: time="2025-09-12T23:07:03.673767742Z" level=info msg="StartContainer for \"e0758010090ea3350b69ab127e4f94694794536159a42045d83859418b05d464\" returns successfully" Sep 12 23:07:03.689491 containerd[1553]: 
time="2025-09-12T23:07:03.689398862Z" level=info msg="StartContainer for \"5823e0cdcdd60ce47c156348e5ed167f11fad6a4d2b89d6b648a31fb8c89f93b\" returns successfully" Sep 12 23:07:03.701004 kubelet[2385]: W0912 23:07:03.700854 2385 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Sep 12 23:07:03.701004 kubelet[2385]: E0912 23:07:03.700958 2385 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:07:04.365568 kubelet[2385]: E0912 23:07:04.365501 2385 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 23:07:04.366156 kubelet[2385]: E0912 23:07:04.365903 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:04.368336 kubelet[2385]: E0912 23:07:04.368081 2385 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 23:07:04.368336 kubelet[2385]: E0912 23:07:04.368275 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:04.371461 kubelet[2385]: E0912 23:07:04.371336 2385 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 
23:07:04.371599 kubelet[2385]: E0912 23:07:04.371583 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:04.736134 kubelet[2385]: I0912 23:07:04.736059 2385 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 23:07:05.378368 kubelet[2385]: E0912 23:07:05.378312 2385 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 23:07:05.378881 kubelet[2385]: E0912 23:07:05.378497 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:05.380883 kubelet[2385]: E0912 23:07:05.380856 2385 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 23:07:05.381163 kubelet[2385]: E0912 23:07:05.381145 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:05.389308 kubelet[2385]: I0912 23:07:05.389239 2385 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 23:07:05.389308 kubelet[2385]: E0912 23:07:05.389303 2385 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 12 23:07:05.406048 kubelet[2385]: E0912 23:07:05.405990 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:07:05.507069 kubelet[2385]: E0912 23:07:05.506978 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:07:05.608308 kubelet[2385]: 
E0912 23:07:05.608227 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:07:05.708986 kubelet[2385]: E0912 23:07:05.708905 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:07:05.809494 kubelet[2385]: E0912 23:07:05.809406 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:07:05.910180 kubelet[2385]: E0912 23:07:05.910095 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:07:06.010995 kubelet[2385]: E0912 23:07:06.010797 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:07:06.111796 kubelet[2385]: E0912 23:07:06.111693 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:07:06.212900 kubelet[2385]: E0912 23:07:06.212814 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:07:06.313822 kubelet[2385]: E0912 23:07:06.313445 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:07:06.413687 kubelet[2385]: E0912 23:07:06.413613 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:07:06.514644 kubelet[2385]: E0912 23:07:06.514445 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:07:06.614974 kubelet[2385]: E0912 23:07:06.614819 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:07:06.700185 update_engine[1544]: I20250912 23:07:06.700070 1544 update_attempter.cc:509] Updating boot flags... 
Sep 12 23:07:06.715682 kubelet[2385]: E0912 23:07:06.715613 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:07:06.819584 kubelet[2385]: E0912 23:07:06.816624 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:07:06.909085 kubelet[2385]: I0912 23:07:06.908945 2385 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 23:07:06.929707 kubelet[2385]: I0912 23:07:06.929662 2385 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 23:07:06.942219 kubelet[2385]: I0912 23:07:06.942086 2385 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 23:07:07.296445 kubelet[2385]: I0912 23:07:07.296386 2385 apiserver.go:52] "Watching apiserver" Sep 12 23:07:07.300047 kubelet[2385]: E0912 23:07:07.299985 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:07.300456 kubelet[2385]: E0912 23:07:07.300387 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:07.300456 kubelet[2385]: E0912 23:07:07.300401 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:07.309461 kubelet[2385]: I0912 23:07:07.309375 2385 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 23:07:07.585340 systemd[1]: Reload requested from client PID 2685 ('systemctl') (unit session-9.scope)... Sep 12 23:07:07.585366 systemd[1]: Reloading... 
Sep 12 23:07:07.622802 kubelet[2385]: E0912 23:07:07.622730 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:07.761004 zram_generator::config[2731]: No configuration found. Sep 12 23:07:08.095850 systemd[1]: Reloading finished in 509 ms. Sep 12 23:07:08.136196 kubelet[2385]: I0912 23:07:08.136074 2385 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 23:07:08.136380 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:07:08.155468 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 23:07:08.155915 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:07:08.155985 systemd[1]: kubelet.service: Consumed 1.587s CPU time, 131.9M memory peak. Sep 12 23:07:08.160385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:07:08.423150 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:07:08.442384 (kubelet)[2773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 23:07:08.501773 kubelet[2773]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:07:08.501773 kubelet[2773]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 23:07:08.501773 kubelet[2773]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:07:08.502335 kubelet[2773]: I0912 23:07:08.501869 2773 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 23:07:08.515450 kubelet[2773]: I0912 23:07:08.515357 2773 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 23:07:08.515450 kubelet[2773]: I0912 23:07:08.515402 2773 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 23:07:08.515853 kubelet[2773]: I0912 23:07:08.515806 2773 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 23:07:08.517253 kubelet[2773]: I0912 23:07:08.517221 2773 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 23:07:08.519606 kubelet[2773]: I0912 23:07:08.519572 2773 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 23:07:08.524730 kubelet[2773]: I0912 23:07:08.524690 2773 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 23:07:08.530665 kubelet[2773]: I0912 23:07:08.530611 2773 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 23:07:08.531137 kubelet[2773]: I0912 23:07:08.531095 2773 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 23:07:08.531423 kubelet[2773]: I0912 23:07:08.531133 2773 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 23:07:08.531564 kubelet[2773]: I0912 23:07:08.531437 2773 topology_manager.go:138] "Creating topology manager with none policy" 
Sep 12 23:07:08.531564 kubelet[2773]: I0912 23:07:08.531451 2773 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 23:07:08.531564 kubelet[2773]: I0912 23:07:08.531515 2773 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:07:08.531766 kubelet[2773]: I0912 23:07:08.531745 2773 kubelet.go:446] "Attempting to sync node with API server" Sep 12 23:07:08.531805 kubelet[2773]: I0912 23:07:08.531779 2773 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 23:07:08.531857 kubelet[2773]: I0912 23:07:08.531826 2773 kubelet.go:352] "Adding apiserver pod source" Sep 12 23:07:08.531857 kubelet[2773]: I0912 23:07:08.531839 2773 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 23:07:08.533783 kubelet[2773]: I0912 23:07:08.533752 2773 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 23:07:08.534662 kubelet[2773]: I0912 23:07:08.534601 2773 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 23:07:08.535413 kubelet[2773]: I0912 23:07:08.535393 2773 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 23:07:08.535470 kubelet[2773]: I0912 23:07:08.535438 2773 server.go:1287] "Started kubelet" Sep 12 23:07:08.536045 kubelet[2773]: I0912 23:07:08.535962 2773 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 23:07:08.537677 kubelet[2773]: I0912 23:07:08.537656 2773 server.go:479] "Adding debug handlers to kubelet server" Sep 12 23:07:08.538204 kubelet[2773]: I0912 23:07:08.538177 2773 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 23:07:08.539142 kubelet[2773]: I0912 23:07:08.539090 2773 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 23:07:08.539481 kubelet[2773]: I0912 23:07:08.539463 2773 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 23:07:08.540807 kubelet[2773]: I0912 23:07:08.540778 2773 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 23:07:08.546437 kubelet[2773]: I0912 23:07:08.546403 2773 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 23:07:08.546614 kubelet[2773]: E0912 23:07:08.546564 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:07:08.546734 kubelet[2773]: I0912 23:07:08.546720 2773 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 23:07:08.546904 kubelet[2773]: I0912 23:07:08.546889 2773 reconciler.go:26] "Reconciler: start to sync state" Sep 12 23:07:08.548867 kubelet[2773]: I0912 23:07:08.548803 2773 factory.go:221] Registration of the systemd container factory successfully Sep 12 23:07:08.553669 kubelet[2773]: I0912 23:07:08.553578 2773 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 23:07:08.556949 kubelet[2773]: I0912 23:07:08.556904 2773 factory.go:221] Registration of the containerd container factory successfully Sep 12 23:07:08.563457 kubelet[2773]: E0912 23:07:08.563396 2773 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 23:07:08.570980 kubelet[2773]: I0912 23:07:08.570858 2773 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 23:07:08.573901 kubelet[2773]: I0912 23:07:08.573860 2773 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 23:07:08.573901 kubelet[2773]: I0912 23:07:08.573908 2773 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 23:07:08.573901 kubelet[2773]: I0912 23:07:08.573937 2773 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 23:07:08.573901 kubelet[2773]: I0912 23:07:08.573945 2773 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 23:07:08.573901 kubelet[2773]: E0912 23:07:08.574000 2773 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 23:07:08.602933 kubelet[2773]: I0912 23:07:08.602858 2773 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 23:07:08.602933 kubelet[2773]: I0912 23:07:08.602882 2773 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 23:07:08.602933 kubelet[2773]: I0912 23:07:08.602905 2773 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:07:08.603320 kubelet[2773]: I0912 23:07:08.603119 2773 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 23:07:08.603320 kubelet[2773]: I0912 23:07:08.603135 2773 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 23:07:08.603320 kubelet[2773]: I0912 23:07:08.603186 2773 policy_none.go:49] "None policy: Start" Sep 12 23:07:08.603320 kubelet[2773]: I0912 23:07:08.603206 2773 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 23:07:08.603320 kubelet[2773]: I0912 23:07:08.603233 2773 state_mem.go:35] "Initializing new in-memory state store" Sep 12 23:07:08.603502 kubelet[2773]: I0912 23:07:08.603369 2773 state_mem.go:75] "Updated machine memory state" Sep 12 23:07:08.608742 kubelet[2773]: I0912 23:07:08.608679 2773 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 23:07:08.608973 kubelet[2773]: I0912 
23:07:08.608951 2773 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 23:07:08.609054 kubelet[2773]: I0912 23:07:08.608969 2773 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 23:07:08.609906 kubelet[2773]: I0912 23:07:08.609242 2773 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 23:07:08.612083 kubelet[2773]: E0912 23:07:08.612031 2773 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 23:07:08.675406 kubelet[2773]: I0912 23:07:08.675243 2773 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 23:07:08.675406 kubelet[2773]: I0912 23:07:08.675312 2773 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 23:07:08.675629 kubelet[2773]: I0912 23:07:08.675324 2773 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 23:07:08.715713 kubelet[2773]: I0912 23:07:08.715667 2773 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 23:07:08.748401 kubelet[2773]: I0912 23:07:08.748331 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:07:08.748401 kubelet[2773]: I0912 23:07:08.748395 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb2be56280e3a58d39a71cfbff904666-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"eb2be56280e3a58d39a71cfbff904666\") " 
pod="kube-system/kube-apiserver-localhost" Sep 12 23:07:08.748401 kubelet[2773]: I0912 23:07:08.748413 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb2be56280e3a58d39a71cfbff904666-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"eb2be56280e3a58d39a71cfbff904666\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:07:08.748401 kubelet[2773]: I0912 23:07:08.748435 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb2be56280e3a58d39a71cfbff904666-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"eb2be56280e3a58d39a71cfbff904666\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:07:08.748778 kubelet[2773]: I0912 23:07:08.748455 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 12 23:07:08.748778 kubelet[2773]: I0912 23:07:08.748495 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:07:08.748778 kubelet[2773]: I0912 23:07:08.748590 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 12 23:07:08.748778 kubelet[2773]: I0912 23:07:08.748657 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:07:08.748778 kubelet[2773]: I0912 23:07:08.748682 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:07:08.943111 kubelet[2773]: E0912 23:07:08.942968 2773 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 12 23:07:08.943650 kubelet[2773]: E0912 23:07:08.943160 2773 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 23:07:08.943650 kubelet[2773]: E0912 23:07:08.943366 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:08.943650 kubelet[2773]: E0912 23:07:08.943501 2773 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 23:07:08.944025 kubelet[2773]: E0912 23:07:08.943784 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Sep 12 23:07:08.944025 kubelet[2773]: E0912 23:07:08.943861 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:08.945179 kubelet[2773]: I0912 23:07:08.945115 2773 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 12 23:07:08.945347 kubelet[2773]: I0912 23:07:08.945197 2773 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 23:07:09.158807 sudo[2811]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 23:07:09.159244 sudo[2811]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 23:07:09.533604 kubelet[2773]: I0912 23:07:09.533525 2773 apiserver.go:52] "Watching apiserver" Sep 12 23:07:09.547834 kubelet[2773]: I0912 23:07:09.547786 2773 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 23:07:09.568179 sudo[2811]: pam_unix(sudo:session): session closed for user root Sep 12 23:07:09.587180 kubelet[2773]: I0912 23:07:09.587138 2773 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 23:07:09.587338 kubelet[2773]: E0912 23:07:09.587307 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:09.587796 kubelet[2773]: I0912 23:07:09.587769 2773 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 23:07:09.735981 kubelet[2773]: E0912 23:07:09.735912 2773 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 12 23:07:09.736257 kubelet[2773]: E0912 23:07:09.736228 2773 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:09.737520 kubelet[2773]: E0912 23:07:09.737496 2773 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 23:07:09.738291 kubelet[2773]: E0912 23:07:09.737648 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:09.953773 kubelet[2773]: I0912 23:07:09.953664 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.95362759 podStartE2EDuration="3.95362759s" podCreationTimestamp="2025-09-12 23:07:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:07:09.737347481 +0000 UTC m=+1.289349234" watchObservedRunningTime="2025-09-12 23:07:09.95362759 +0000 UTC m=+1.505629353" Sep 12 23:07:09.964322 kubelet[2773]: I0912 23:07:09.964077 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.964013325 podStartE2EDuration="3.964013325s" podCreationTimestamp="2025-09-12 23:07:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:07:09.954029771 +0000 UTC m=+1.506031534" watchObservedRunningTime="2025-09-12 23:07:09.964013325 +0000 UTC m=+1.516015088" Sep 12 23:07:09.964322 kubelet[2773]: I0912 23:07:09.964258 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.964250154 podStartE2EDuration="3.964250154s" 
podCreationTimestamp="2025-09-12 23:07:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:07:09.963982341 +0000 UTC m=+1.515984124" watchObservedRunningTime="2025-09-12 23:07:09.964250154 +0000 UTC m=+1.516251927" Sep 12 23:07:10.589679 kubelet[2773]: E0912 23:07:10.589478 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:10.590479 kubelet[2773]: E0912 23:07:10.590350 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:10.591878 kubelet[2773]: E0912 23:07:10.591737 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:12.369553 kubelet[2773]: I0912 23:07:12.369488 2773 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 23:07:12.370029 containerd[1553]: time="2025-09-12T23:07:12.369872788Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 23:07:12.370283 kubelet[2773]: I0912 23:07:12.370061 2773 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 23:07:12.965279 systemd[1]: Created slice kubepods-besteffort-pod9dadb250_8bda_4836_993c_cd79417ff65f.slice - libcontainer container kubepods-besteffort-pod9dadb250_8bda_4836_993c_cd79417ff65f.slice. 
Sep 12 23:07:12.979572 kubelet[2773]: I0912 23:07:12.978363 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw8dn\" (UniqueName: \"kubernetes.io/projected/9dadb250-8bda-4836-993c-cd79417ff65f-kube-api-access-pw8dn\") pod \"kube-proxy-ccsns\" (UID: \"9dadb250-8bda-4836-993c-cd79417ff65f\") " pod="kube-system/kube-proxy-ccsns" Sep 12 23:07:12.979572 kubelet[2773]: I0912 23:07:12.978424 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-hostproc\") pod \"cilium-756k2\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") " pod="kube-system/cilium-756k2" Sep 12 23:07:12.979572 kubelet[2773]: I0912 23:07:12.978461 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-xtables-lock\") pod \"cilium-756k2\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") " pod="kube-system/cilium-756k2" Sep 12 23:07:12.983594 kubelet[2773]: I0912 23:07:12.980610 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-bpf-maps\") pod \"cilium-756k2\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") " pod="kube-system/cilium-756k2" Sep 12 23:07:12.983594 kubelet[2773]: I0912 23:07:12.980676 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-etc-cni-netd\") pod \"cilium-756k2\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") " pod="kube-system/cilium-756k2" Sep 12 23:07:12.983594 kubelet[2773]: I0912 23:07:12.980718 2773 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9dadb250-8bda-4836-993c-cd79417ff65f-lib-modules\") pod \"kube-proxy-ccsns\" (UID: \"9dadb250-8bda-4836-993c-cd79417ff65f\") " pod="kube-system/kube-proxy-ccsns" Sep 12 23:07:12.983594 kubelet[2773]: I0912 23:07:12.980871 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9dadb250-8bda-4836-993c-cd79417ff65f-xtables-lock\") pod \"kube-proxy-ccsns\" (UID: \"9dadb250-8bda-4836-993c-cd79417ff65f\") " pod="kube-system/kube-proxy-ccsns" Sep 12 23:07:12.983594 kubelet[2773]: I0912 23:07:12.980934 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-cni-path\") pod \"cilium-756k2\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") " pod="kube-system/cilium-756k2" Sep 12 23:07:12.983594 kubelet[2773]: I0912 23:07:12.980967 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhdgk\" (UniqueName: \"kubernetes.io/projected/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-kube-api-access-hhdgk\") pod \"cilium-756k2\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") " pod="kube-system/cilium-756k2" Sep 12 23:07:12.981514 systemd[1]: Created slice kubepods-burstable-pod58cd7ad8_76b6_40d6_91d3_38f73e72e0bf.slice - libcontainer container kubepods-burstable-pod58cd7ad8_76b6_40d6_91d3_38f73e72e0bf.slice. 
Sep 12 23:07:12.984012 kubelet[2773]: I0912 23:07:12.981000 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-lib-modules\") pod \"cilium-756k2\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") " pod="kube-system/cilium-756k2" Sep 12 23:07:12.984012 kubelet[2773]: I0912 23:07:12.981056 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-host-proc-sys-net\") pod \"cilium-756k2\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") " pod="kube-system/cilium-756k2" Sep 12 23:07:12.984012 kubelet[2773]: I0912 23:07:12.981131 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-host-proc-sys-kernel\") pod \"cilium-756k2\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") " pod="kube-system/cilium-756k2" Sep 12 23:07:12.984012 kubelet[2773]: I0912 23:07:12.981192 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-cilium-run\") pod \"cilium-756k2\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") " pod="kube-system/cilium-756k2" Sep 12 23:07:12.984012 kubelet[2773]: I0912 23:07:12.981231 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-clustermesh-secrets\") pod \"cilium-756k2\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") " pod="kube-system/cilium-756k2" Sep 12 23:07:12.984012 kubelet[2773]: I0912 23:07:12.981255 2773 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-hubble-tls\") pod \"cilium-756k2\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") " pod="kube-system/cilium-756k2" Sep 12 23:07:12.984203 kubelet[2773]: I0912 23:07:12.981286 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9dadb250-8bda-4836-993c-cd79417ff65f-kube-proxy\") pod \"kube-proxy-ccsns\" (UID: \"9dadb250-8bda-4836-993c-cd79417ff65f\") " pod="kube-system/kube-proxy-ccsns" Sep 12 23:07:12.984203 kubelet[2773]: I0912 23:07:12.981336 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-cilium-cgroup\") pod \"cilium-756k2\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") " pod="kube-system/cilium-756k2" Sep 12 23:07:12.984203 kubelet[2773]: I0912 23:07:12.981366 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-cilium-config-path\") pod \"cilium-756k2\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") " pod="kube-system/cilium-756k2" Sep 12 23:07:13.013577 sudo[1792]: pam_unix(sudo:session): session closed for user root Sep 12 23:07:13.016062 sshd[1791]: Connection closed by 10.0.0.1 port 35952 Sep 12 23:07:13.016511 sshd-session[1788]: pam_unix(sshd:session): session closed for user core Sep 12 23:07:13.024063 systemd[1]: sshd@8-10.0.0.139:22-10.0.0.1:35952.service: Deactivated successfully. Sep 12 23:07:13.027101 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 23:07:13.027362 systemd[1]: session-9.scope: Consumed 5.537s CPU time, 259.7M memory peak. 
Sep 12 23:07:13.028979 systemd-logind[1543]: Session 9 logged out. Waiting for processes to exit. Sep 12 23:07:13.030645 systemd-logind[1543]: Removed session 9. Sep 12 23:07:13.167182 systemd[1]: Created slice kubepods-besteffort-pode82e2d13_8fbc_4a54_b626_ea2bf0511849.slice - libcontainer container kubepods-besteffort-pode82e2d13_8fbc_4a54_b626_ea2bf0511849.slice. Sep 12 23:07:13.183505 kubelet[2773]: I0912 23:07:13.183412 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljdq7\" (UniqueName: \"kubernetes.io/projected/e82e2d13-8fbc-4a54-b626-ea2bf0511849-kube-api-access-ljdq7\") pod \"cilium-operator-6c4d7847fc-tzzwv\" (UID: \"e82e2d13-8fbc-4a54-b626-ea2bf0511849\") " pod="kube-system/cilium-operator-6c4d7847fc-tzzwv" Sep 12 23:07:13.183505 kubelet[2773]: I0912 23:07:13.183494 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e82e2d13-8fbc-4a54-b626-ea2bf0511849-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-tzzwv\" (UID: \"e82e2d13-8fbc-4a54-b626-ea2bf0511849\") " pod="kube-system/cilium-operator-6c4d7847fc-tzzwv" Sep 12 23:07:13.280861 kubelet[2773]: E0912 23:07:13.280662 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:13.282494 containerd[1553]: time="2025-09-12T23:07:13.282138357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ccsns,Uid:9dadb250-8bda-4836-993c-cd79417ff65f,Namespace:kube-system,Attempt:0,}" Sep 12 23:07:13.290833 kubelet[2773]: E0912 23:07:13.290774 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:13.291484 containerd[1553]: 
time="2025-09-12T23:07:13.291425459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-756k2,Uid:58cd7ad8-76b6-40d6-91d3-38f73e72e0bf,Namespace:kube-system,Attempt:0,}" Sep 12 23:07:13.475265 kubelet[2773]: E0912 23:07:13.475190 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:13.475987 containerd[1553]: time="2025-09-12T23:07:13.475944570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-tzzwv,Uid:e82e2d13-8fbc-4a54-b626-ea2bf0511849,Namespace:kube-system,Attempt:0,}" Sep 12 23:07:14.055093 kubelet[2773]: E0912 23:07:14.055046 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:14.200226 containerd[1553]: time="2025-09-12T23:07:14.200135348Z" level=info msg="connecting to shim 8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067" address="unix:///run/containerd/s/1849996f7cb972ab4b3869e14fafd4eb9e9ac59a695e1c321e5bf711f186ae63" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:07:14.212869 containerd[1553]: time="2025-09-12T23:07:14.212792324Z" level=info msg="connecting to shim 7dd81778a401d64c672a4689bdc11fbcb211bb0a457086c4c602bf4519e46abe" address="unix:///run/containerd/s/ff6209bef4837e5b0993db7d2181597e6429c5f576e50b0799390c1e82aa4e42" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:07:14.230318 containerd[1553]: time="2025-09-12T23:07:14.229278471Z" level=info msg="connecting to shim 7ede01bc9cd8e0dc1c2974c83a32452dde7e757e287ff7b7b1dda04045102642" address="unix:///run/containerd/s/f8eface3316b601920a8266abc0c9a606fb50de00838759848eaec40c8da66f3" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:07:14.250908 systemd[1]: Started 
cri-containerd-8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067.scope - libcontainer container 8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067. Sep 12 23:07:14.299878 systemd[1]: Started cri-containerd-7dd81778a401d64c672a4689bdc11fbcb211bb0a457086c4c602bf4519e46abe.scope - libcontainer container 7dd81778a401d64c672a4689bdc11fbcb211bb0a457086c4c602bf4519e46abe. Sep 12 23:07:14.302026 systemd[1]: Started cri-containerd-7ede01bc9cd8e0dc1c2974c83a32452dde7e757e287ff7b7b1dda04045102642.scope - libcontainer container 7ede01bc9cd8e0dc1c2974c83a32452dde7e757e287ff7b7b1dda04045102642. Sep 12 23:07:14.312083 containerd[1553]: time="2025-09-12T23:07:14.311912603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-756k2,Uid:58cd7ad8-76b6-40d6-91d3-38f73e72e0bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067\"" Sep 12 23:07:14.313014 kubelet[2773]: E0912 23:07:14.312977 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:14.316154 containerd[1553]: time="2025-09-12T23:07:14.316079858Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 23:07:14.352005 containerd[1553]: time="2025-09-12T23:07:14.351952006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ccsns,Uid:9dadb250-8bda-4836-993c-cd79417ff65f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ede01bc9cd8e0dc1c2974c83a32452dde7e757e287ff7b7b1dda04045102642\"" Sep 12 23:07:14.353974 kubelet[2773]: E0912 23:07:14.353407 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:14.363068 containerd[1553]: 
time="2025-09-12T23:07:14.363013901Z" level=info msg="CreateContainer within sandbox \"7ede01bc9cd8e0dc1c2974c83a32452dde7e757e287ff7b7b1dda04045102642\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 23:07:14.435352 containerd[1553]: time="2025-09-12T23:07:14.435301888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-tzzwv,Uid:e82e2d13-8fbc-4a54-b626-ea2bf0511849,Namespace:kube-system,Attempt:0,} returns sandbox id \"7dd81778a401d64c672a4689bdc11fbcb211bb0a457086c4c602bf4519e46abe\"" Sep 12 23:07:14.435904 kubelet[2773]: E0912 23:07:14.435875 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:14.456809 containerd[1553]: time="2025-09-12T23:07:14.456733686Z" level=info msg="Container c0b8f7aa9e4fcc6e63e3e3c872b4fdfd8993e6e146cae1ac6369695d3bb99b5c: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:07:14.471500 containerd[1553]: time="2025-09-12T23:07:14.471419444Z" level=info msg="CreateContainer within sandbox \"7ede01bc9cd8e0dc1c2974c83a32452dde7e757e287ff7b7b1dda04045102642\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c0b8f7aa9e4fcc6e63e3e3c872b4fdfd8993e6e146cae1ac6369695d3bb99b5c\"" Sep 12 23:07:14.472526 containerd[1553]: time="2025-09-12T23:07:14.472474673Z" level=info msg="StartContainer for \"c0b8f7aa9e4fcc6e63e3e3c872b4fdfd8993e6e146cae1ac6369695d3bb99b5c\"" Sep 12 23:07:14.474397 containerd[1553]: time="2025-09-12T23:07:14.474344692Z" level=info msg="connecting to shim c0b8f7aa9e4fcc6e63e3e3c872b4fdfd8993e6e146cae1ac6369695d3bb99b5c" address="unix:///run/containerd/s/f8eface3316b601920a8266abc0c9a606fb50de00838759848eaec40c8da66f3" protocol=ttrpc version=3 Sep 12 23:07:14.502902 systemd[1]: Started cri-containerd-c0b8f7aa9e4fcc6e63e3e3c872b4fdfd8993e6e146cae1ac6369695d3bb99b5c.scope - libcontainer container 
c0b8f7aa9e4fcc6e63e3e3c872b4fdfd8993e6e146cae1ac6369695d3bb99b5c. Sep 12 23:07:14.571853 containerd[1553]: time="2025-09-12T23:07:14.571048434Z" level=info msg="StartContainer for \"c0b8f7aa9e4fcc6e63e3e3c872b4fdfd8993e6e146cae1ac6369695d3bb99b5c\" returns successfully" Sep 12 23:07:14.605099 kubelet[2773]: E0912 23:07:14.605040 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:15.306434 kubelet[2773]: E0912 23:07:15.306373 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:15.320980 kubelet[2773]: I0912 23:07:15.320905 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ccsns" podStartSLOduration=3.320884465 podStartE2EDuration="3.320884465s" podCreationTimestamp="2025-09-12 23:07:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:07:14.619836324 +0000 UTC m=+6.171838087" watchObservedRunningTime="2025-09-12 23:07:15.320884465 +0000 UTC m=+6.872886228" Sep 12 23:07:15.606439 kubelet[2773]: E0912 23:07:15.606191 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:16.607371 kubelet[2773]: E0912 23:07:16.607315 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:17.554557 kubelet[2773]: E0912 23:07:17.554505 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Sep 12 23:07:17.608755 kubelet[2773]: E0912 23:07:17.608675 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:20.090154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2862408308.mount: Deactivated successfully. Sep 12 23:07:26.802704 containerd[1553]: time="2025-09-12T23:07:26.802617651Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:07:26.823107 containerd[1553]: time="2025-09-12T23:07:26.823024860Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 12 23:07:26.861772 containerd[1553]: time="2025-09-12T23:07:26.861677671Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:07:26.863107 containerd[1553]: time="2025-09-12T23:07:26.863063602Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.546919475s" Sep 12 23:07:26.863107 containerd[1553]: time="2025-09-12T23:07:26.863101076Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 12 23:07:26.874563 containerd[1553]: 
time="2025-09-12T23:07:26.874298719Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 23:07:26.876118 containerd[1553]: time="2025-09-12T23:07:26.876045701Z" level=info msg="CreateContainer within sandbox \"8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 23:07:26.982101 containerd[1553]: time="2025-09-12T23:07:26.981982837Z" level=info msg="Container 41777095a8f953aa4bda15abc4f09a5f98976785fcef78c3acb2a30c42b809ee: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:07:27.041221 containerd[1553]: time="2025-09-12T23:07:27.040843183Z" level=info msg="CreateContainer within sandbox \"8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"41777095a8f953aa4bda15abc4f09a5f98976785fcef78c3acb2a30c42b809ee\"" Sep 12 23:07:27.041513 containerd[1553]: time="2025-09-12T23:07:27.041477852Z" level=info msg="StartContainer for \"41777095a8f953aa4bda15abc4f09a5f98976785fcef78c3acb2a30c42b809ee\"" Sep 12 23:07:27.042693 containerd[1553]: time="2025-09-12T23:07:27.042657804Z" level=info msg="connecting to shim 41777095a8f953aa4bda15abc4f09a5f98976785fcef78c3acb2a30c42b809ee" address="unix:///run/containerd/s/1849996f7cb972ab4b3869e14fafd4eb9e9ac59a695e1c321e5bf711f186ae63" protocol=ttrpc version=3 Sep 12 23:07:27.080849 systemd[1]: Started cri-containerd-41777095a8f953aa4bda15abc4f09a5f98976785fcef78c3acb2a30c42b809ee.scope - libcontainer container 41777095a8f953aa4bda15abc4f09a5f98976785fcef78c3acb2a30c42b809ee. Sep 12 23:07:27.130607 systemd[1]: cri-containerd-41777095a8f953aa4bda15abc4f09a5f98976785fcef78c3acb2a30c42b809ee.scope: Deactivated successfully. 
Sep 12 23:07:27.134121 containerd[1553]: time="2025-09-12T23:07:27.134045898Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41777095a8f953aa4bda15abc4f09a5f98976785fcef78c3acb2a30c42b809ee\" id:\"41777095a8f953aa4bda15abc4f09a5f98976785fcef78c3acb2a30c42b809ee\" pid:3197 exited_at:{seconds:1757718447 nanos:133390950}" Sep 12 23:07:27.514829 containerd[1553]: time="2025-09-12T23:07:27.514677556Z" level=info msg="received exit event container_id:\"41777095a8f953aa4bda15abc4f09a5f98976785fcef78c3acb2a30c42b809ee\" id:\"41777095a8f953aa4bda15abc4f09a5f98976785fcef78c3acb2a30c42b809ee\" pid:3197 exited_at:{seconds:1757718447 nanos:133390950}" Sep 12 23:07:27.516068 containerd[1553]: time="2025-09-12T23:07:27.515941523Z" level=info msg="StartContainer for \"41777095a8f953aa4bda15abc4f09a5f98976785fcef78c3acb2a30c42b809ee\" returns successfully" Sep 12 23:07:27.539295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41777095a8f953aa4bda15abc4f09a5f98976785fcef78c3acb2a30c42b809ee-rootfs.mount: Deactivated successfully. 
Sep 12 23:07:27.747734 kubelet[2773]: E0912 23:07:27.747698 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:28.751342 kubelet[2773]: E0912 23:07:28.751287 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:28.755565 containerd[1553]: time="2025-09-12T23:07:28.753787295Z" level=info msg="CreateContainer within sandbox \"8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 23:07:28.790596 containerd[1553]: time="2025-09-12T23:07:28.790525581Z" level=info msg="Container 5f754033572f796a20ba6bfbda68f21aa3218050d8c49eb5f76d35318f4a7ffb: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:07:28.794736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3414071604.mount: Deactivated successfully. 
Sep 12 23:07:28.813612 containerd[1553]: time="2025-09-12T23:07:28.813527510Z" level=info msg="CreateContainer within sandbox \"8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5f754033572f796a20ba6bfbda68f21aa3218050d8c49eb5f76d35318f4a7ffb\"" Sep 12 23:07:28.814410 containerd[1553]: time="2025-09-12T23:07:28.814369013Z" level=info msg="StartContainer for \"5f754033572f796a20ba6bfbda68f21aa3218050d8c49eb5f76d35318f4a7ffb\"" Sep 12 23:07:28.815353 containerd[1553]: time="2025-09-12T23:07:28.815305332Z" level=info msg="connecting to shim 5f754033572f796a20ba6bfbda68f21aa3218050d8c49eb5f76d35318f4a7ffb" address="unix:///run/containerd/s/1849996f7cb972ab4b3869e14fafd4eb9e9ac59a695e1c321e5bf711f186ae63" protocol=ttrpc version=3 Sep 12 23:07:28.843815 systemd[1]: Started cri-containerd-5f754033572f796a20ba6bfbda68f21aa3218050d8c49eb5f76d35318f4a7ffb.scope - libcontainer container 5f754033572f796a20ba6bfbda68f21aa3218050d8c49eb5f76d35318f4a7ffb. Sep 12 23:07:28.876444 containerd[1553]: time="2025-09-12T23:07:28.876400609Z" level=info msg="StartContainer for \"5f754033572f796a20ba6bfbda68f21aa3218050d8c49eb5f76d35318f4a7ffb\" returns successfully" Sep 12 23:07:28.891975 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 23:07:28.892216 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:07:28.892398 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 23:07:28.893978 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 23:07:28.895898 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 23:07:28.897800 systemd[1]: cri-containerd-5f754033572f796a20ba6bfbda68f21aa3218050d8c49eb5f76d35318f4a7ffb.scope: Deactivated successfully. 
Sep 12 23:07:28.899248 containerd[1553]: time="2025-09-12T23:07:28.898664808Z" level=info msg="received exit event container_id:\"5f754033572f796a20ba6bfbda68f21aa3218050d8c49eb5f76d35318f4a7ffb\" id:\"5f754033572f796a20ba6bfbda68f21aa3218050d8c49eb5f76d35318f4a7ffb\" pid:3241 exited_at:{seconds:1757718448 nanos:898297958}" Sep 12 23:07:28.899798 containerd[1553]: time="2025-09-12T23:07:28.899762365Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f754033572f796a20ba6bfbda68f21aa3218050d8c49eb5f76d35318f4a7ffb\" id:\"5f754033572f796a20ba6bfbda68f21aa3218050d8c49eb5f76d35318f4a7ffb\" pid:3241 exited_at:{seconds:1757718448 nanos:898297958}" Sep 12 23:07:28.926493 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:07:29.755043 kubelet[2773]: E0912 23:07:29.754979 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:29.756794 containerd[1553]: time="2025-09-12T23:07:29.756703164Z" level=info msg="CreateContainer within sandbox \"8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 23:07:29.791967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f754033572f796a20ba6bfbda68f21aa3218050d8c49eb5f76d35318f4a7ffb-rootfs.mount: Deactivated successfully. 
Sep 12 23:07:29.986481 containerd[1553]: time="2025-09-12T23:07:29.986420188Z" level=info msg="Container 9688977e80695963e5c39705128c8327375fd649e39b299ef2cac155bb960da6: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:07:29.997371 containerd[1553]: time="2025-09-12T23:07:29.997298070Z" level=info msg="CreateContainer within sandbox \"8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9688977e80695963e5c39705128c8327375fd649e39b299ef2cac155bb960da6\"" Sep 12 23:07:29.997897 containerd[1553]: time="2025-09-12T23:07:29.997863249Z" level=info msg="StartContainer for \"9688977e80695963e5c39705128c8327375fd649e39b299ef2cac155bb960da6\"" Sep 12 23:07:29.999441 containerd[1553]: time="2025-09-12T23:07:29.999396868Z" level=info msg="connecting to shim 9688977e80695963e5c39705128c8327375fd649e39b299ef2cac155bb960da6" address="unix:///run/containerd/s/1849996f7cb972ab4b3869e14fafd4eb9e9ac59a695e1c321e5bf711f186ae63" protocol=ttrpc version=3 Sep 12 23:07:30.028894 systemd[1]: Started cri-containerd-9688977e80695963e5c39705128c8327375fd649e39b299ef2cac155bb960da6.scope - libcontainer container 9688977e80695963e5c39705128c8327375fd649e39b299ef2cac155bb960da6. Sep 12 23:07:30.081585 systemd[1]: cri-containerd-9688977e80695963e5c39705128c8327375fd649e39b299ef2cac155bb960da6.scope: Deactivated successfully. 
Sep 12 23:07:30.082847 containerd[1553]: time="2025-09-12T23:07:30.082808936Z" level=info msg="StartContainer for \"9688977e80695963e5c39705128c8327375fd649e39b299ef2cac155bb960da6\" returns successfully" Sep 12 23:07:30.083247 containerd[1553]: time="2025-09-12T23:07:30.083201736Z" level=info msg="received exit event container_id:\"9688977e80695963e5c39705128c8327375fd649e39b299ef2cac155bb960da6\" id:\"9688977e80695963e5c39705128c8327375fd649e39b299ef2cac155bb960da6\" pid:3290 exited_at:{seconds:1757718450 nanos:82891567}" Sep 12 23:07:30.083334 containerd[1553]: time="2025-09-12T23:07:30.083253497Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9688977e80695963e5c39705128c8327375fd649e39b299ef2cac155bb960da6\" id:\"9688977e80695963e5c39705128c8327375fd649e39b299ef2cac155bb960da6\" pid:3290 exited_at:{seconds:1757718450 nanos:82891567}" Sep 12 23:07:30.111604 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9688977e80695963e5c39705128c8327375fd649e39b299ef2cac155bb960da6-rootfs.mount: Deactivated successfully. 
Sep 12 23:07:30.720457 containerd[1553]: time="2025-09-12T23:07:30.720391380Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:07:30.721252 containerd[1553]: time="2025-09-12T23:07:30.721221737Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 12 23:07:30.722698 containerd[1553]: time="2025-09-12T23:07:30.722670786Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:07:30.724218 containerd[1553]: time="2025-09-12T23:07:30.724186517Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.849839252s" Sep 12 23:07:30.724289 containerd[1553]: time="2025-09-12T23:07:30.724219882Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 12 23:07:30.726439 containerd[1553]: time="2025-09-12T23:07:30.726393712Z" level=info msg="CreateContainer within sandbox \"7dd81778a401d64c672a4689bdc11fbcb211bb0a457086c4c602bf4519e46abe\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 23:07:30.736349 containerd[1553]: time="2025-09-12T23:07:30.736296135Z" level=info msg="Container 
0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:07:30.743902 containerd[1553]: time="2025-09-12T23:07:30.743865557Z" level=info msg="CreateContainer within sandbox \"7dd81778a401d64c672a4689bdc11fbcb211bb0a457086c4c602bf4519e46abe\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee\"" Sep 12 23:07:30.744419 containerd[1553]: time="2025-09-12T23:07:30.744390697Z" level=info msg="StartContainer for \"0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee\"" Sep 12 23:07:30.745355 containerd[1553]: time="2025-09-12T23:07:30.745325708Z" level=info msg="connecting to shim 0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee" address="unix:///run/containerd/s/ff6209bef4837e5b0993db7d2181597e6429c5f576e50b0799390c1e82aa4e42" protocol=ttrpc version=3 Sep 12 23:07:30.764001 kubelet[2773]: E0912 23:07:30.763962 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:30.772300 containerd[1553]: time="2025-09-12T23:07:30.772259094Z" level=info msg="CreateContainer within sandbox \"8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 23:07:30.772708 systemd[1]: Started cri-containerd-0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee.scope - libcontainer container 0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee. 
Sep 12 23:07:30.795506 containerd[1553]: time="2025-09-12T23:07:30.795439897Z" level=info msg="Container 5e1892bd9c559665055baee8aff6b455daccf2c51795410442335977b095b2ce: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:07:30.805315 containerd[1553]: time="2025-09-12T23:07:30.805243036Z" level=info msg="CreateContainer within sandbox \"8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5e1892bd9c559665055baee8aff6b455daccf2c51795410442335977b095b2ce\"" Sep 12 23:07:30.806332 containerd[1553]: time="2025-09-12T23:07:30.806289035Z" level=info msg="StartContainer for \"5e1892bd9c559665055baee8aff6b455daccf2c51795410442335977b095b2ce\"" Sep 12 23:07:30.807731 containerd[1553]: time="2025-09-12T23:07:30.807691523Z" level=info msg="connecting to shim 5e1892bd9c559665055baee8aff6b455daccf2c51795410442335977b095b2ce" address="unix:///run/containerd/s/1849996f7cb972ab4b3869e14fafd4eb9e9ac59a695e1c321e5bf711f186ae63" protocol=ttrpc version=3 Sep 12 23:07:30.834000 systemd[1]: Started cri-containerd-5e1892bd9c559665055baee8aff6b455daccf2c51795410442335977b095b2ce.scope - libcontainer container 5e1892bd9c559665055baee8aff6b455daccf2c51795410442335977b095b2ce. Sep 12 23:07:30.841796 containerd[1553]: time="2025-09-12T23:07:30.841741253Z" level=info msg="StartContainer for \"0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee\" returns successfully" Sep 12 23:07:30.871230 systemd[1]: cri-containerd-5e1892bd9c559665055baee8aff6b455daccf2c51795410442335977b095b2ce.scope: Deactivated successfully. 
Sep 12 23:07:30.872905 containerd[1553]: time="2025-09-12T23:07:30.872861603Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5e1892bd9c559665055baee8aff6b455daccf2c51795410442335977b095b2ce\" id:\"5e1892bd9c559665055baee8aff6b455daccf2c51795410442335977b095b2ce\" pid:3372 exited_at:{seconds:1757718450 nanos:871854810}" Sep 12 23:07:30.874127 containerd[1553]: time="2025-09-12T23:07:30.874089089Z" level=info msg="received exit event container_id:\"5e1892bd9c559665055baee8aff6b455daccf2c51795410442335977b095b2ce\" id:\"5e1892bd9c559665055baee8aff6b455daccf2c51795410442335977b095b2ce\" pid:3372 exited_at:{seconds:1757718450 nanos:871854810}" Sep 12 23:07:30.886327 containerd[1553]: time="2025-09-12T23:07:30.886269425Z" level=info msg="StartContainer for \"5e1892bd9c559665055baee8aff6b455daccf2c51795410442335977b095b2ce\" returns successfully" Sep 12 23:07:30.907049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e1892bd9c559665055baee8aff6b455daccf2c51795410442335977b095b2ce-rootfs.mount: Deactivated successfully. Sep 12 23:07:31.779522 kubelet[2773]: E0912 23:07:31.779454 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:31.780704 kubelet[2773]: E0912 23:07:31.780485 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:31.786131 containerd[1553]: time="2025-09-12T23:07:31.785346524Z" level=info msg="CreateContainer within sandbox \"8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 23:07:32.056430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount83195645.mount: Deactivated successfully. 
Sep 12 23:07:32.058655 containerd[1553]: time="2025-09-12T23:07:32.058614006Z" level=info msg="Container 5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:07:32.252216 containerd[1553]: time="2025-09-12T23:07:32.252128749Z" level=info msg="CreateContainer within sandbox \"8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3\"" Sep 12 23:07:32.252854 containerd[1553]: time="2025-09-12T23:07:32.252782427Z" level=info msg="StartContainer for \"5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3\"" Sep 12 23:07:32.254230 containerd[1553]: time="2025-09-12T23:07:32.254191090Z" level=info msg="connecting to shim 5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3" address="unix:///run/containerd/s/1849996f7cb972ab4b3869e14fafd4eb9e9ac59a695e1c321e5bf711f186ae63" protocol=ttrpc version=3 Sep 12 23:07:32.279701 systemd[1]: Started cri-containerd-5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3.scope - libcontainer container 5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3. 
Sep 12 23:07:32.535736 containerd[1553]: time="2025-09-12T23:07:32.535680717Z" level=info msg="StartContainer for \"5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3\" returns successfully" Sep 12 23:07:32.694587 kubelet[2773]: I0912 23:07:32.693056 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-tzzwv" podStartSLOduration=3.404914725 podStartE2EDuration="19.693035947s" podCreationTimestamp="2025-09-12 23:07:13 +0000 UTC" firstStartedPulling="2025-09-12 23:07:14.436954006 +0000 UTC m=+5.988955769" lastFinishedPulling="2025-09-12 23:07:30.725075228 +0000 UTC m=+22.277076991" observedRunningTime="2025-09-12 23:07:31.948982037 +0000 UTC m=+23.500983800" watchObservedRunningTime="2025-09-12 23:07:32.693035947 +0000 UTC m=+24.245037740" Sep 12 23:07:32.703565 containerd[1553]: time="2025-09-12T23:07:32.702898048Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3\" id:\"782aa97e4f75c268439bdb4955b68bce0ba4f480faf88efed047fff73691e554\" pid:3457 exited_at:{seconds:1757718452 nanos:701685407}" Sep 12 23:07:32.768825 kubelet[2773]: I0912 23:07:32.768786 2773 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 23:07:32.799952 kubelet[2773]: E0912 23:07:32.799845 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:32.801017 kubelet[2773]: E0912 23:07:32.800584 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:33.251966 kubelet[2773]: I0912 23:07:33.250622 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-756k2" podStartSLOduration=8.6922533 
podStartE2EDuration="21.250596869s" podCreationTimestamp="2025-09-12 23:07:12 +0000 UTC" firstStartedPulling="2025-09-12 23:07:14.315528069 +0000 UTC m=+5.867529822" lastFinishedPulling="2025-09-12 23:07:26.873871628 +0000 UTC m=+18.425873391" observedRunningTime="2025-09-12 23:07:33.074739043 +0000 UTC m=+24.626740836" watchObservedRunningTime="2025-09-12 23:07:33.250596869 +0000 UTC m=+24.802598633"
Sep 12 23:07:33.259448 systemd[1]: Created slice kubepods-burstable-pod3d8c0ef8_54fb_4116_ad8e_04d4dd1ac0a0.slice - libcontainer container kubepods-burstable-pod3d8c0ef8_54fb_4116_ad8e_04d4dd1ac0a0.slice.
Sep 12 23:07:33.272694 systemd[1]: Created slice kubepods-burstable-pod63dcd82a_382b_44c9_a533_8f29401378cb.slice - libcontainer container kubepods-burstable-pod63dcd82a_382b_44c9_a533_8f29401378cb.slice.
Sep 12 23:07:33.334431 kubelet[2773]: I0912 23:07:33.334322 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d8c0ef8-54fb-4116-ad8e-04d4dd1ac0a0-config-volume\") pod \"coredns-668d6bf9bc-45lnk\" (UID: \"3d8c0ef8-54fb-4116-ad8e-04d4dd1ac0a0\") " pod="kube-system/coredns-668d6bf9bc-45lnk"
Sep 12 23:07:33.334431 kubelet[2773]: I0912 23:07:33.334420 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmgm8\" (UniqueName: \"kubernetes.io/projected/3d8c0ef8-54fb-4116-ad8e-04d4dd1ac0a0-kube-api-access-gmgm8\") pod \"coredns-668d6bf9bc-45lnk\" (UID: \"3d8c0ef8-54fb-4116-ad8e-04d4dd1ac0a0\") " pod="kube-system/coredns-668d6bf9bc-45lnk"
Sep 12 23:07:33.435246 kubelet[2773]: I0912 23:07:33.435149 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgdcw\" (UniqueName: \"kubernetes.io/projected/63dcd82a-382b-44c9-a533-8f29401378cb-kube-api-access-qgdcw\") pod \"coredns-668d6bf9bc-v64xb\" (UID: \"63dcd82a-382b-44c9-a533-8f29401378cb\") " pod="kube-system/coredns-668d6bf9bc-v64xb"
Sep 12 23:07:33.435446 kubelet[2773]: I0912 23:07:33.435284 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63dcd82a-382b-44c9-a533-8f29401378cb-config-volume\") pod \"coredns-668d6bf9bc-v64xb\" (UID: \"63dcd82a-382b-44c9-a533-8f29401378cb\") " pod="kube-system/coredns-668d6bf9bc-v64xb"
Sep 12 23:07:33.565009 kubelet[2773]: E0912 23:07:33.564878 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:07:33.565837 containerd[1553]: time="2025-09-12T23:07:33.565773002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-45lnk,Uid:3d8c0ef8-54fb-4116-ad8e-04d4dd1ac0a0,Namespace:kube-system,Attempt:0,}"
Sep 12 23:07:33.579565 kubelet[2773]: E0912 23:07:33.577946 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:07:33.579715 containerd[1553]: time="2025-09-12T23:07:33.578791057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v64xb,Uid:63dcd82a-382b-44c9-a533-8f29401378cb,Namespace:kube-system,Attempt:0,}"
Sep 12 23:07:33.802064 kubelet[2773]: E0912 23:07:33.802016 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:07:34.721398 systemd-networkd[1465]: cilium_host: Link UP
Sep 12 23:07:34.721770 systemd-networkd[1465]: cilium_net: Link UP
Sep 12 23:07:34.722142 systemd-networkd[1465]: cilium_net: Gained carrier
Sep 12 23:07:34.722423 systemd-networkd[1465]: cilium_host: Gained carrier
Sep 12 23:07:34.815513 kubelet[2773]: E0912 23:07:34.815380 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:07:34.861723 systemd-networkd[1465]: cilium_vxlan: Link UP
Sep 12 23:07:34.861737 systemd-networkd[1465]: cilium_vxlan: Gained carrier
Sep 12 23:07:35.121582 kernel: NET: Registered PF_ALG protocol family
Sep 12 23:07:35.497655 systemd-networkd[1465]: cilium_host: Gained IPv6LL
Sep 12 23:07:35.558812 systemd-networkd[1465]: cilium_net: Gained IPv6LL
Sep 12 23:07:35.915779 systemd-networkd[1465]: lxc_health: Link UP
Sep 12 23:07:35.916246 systemd-networkd[1465]: lxc_health: Gained carrier
Sep 12 23:07:36.104110 systemd-networkd[1465]: lxcebdeb38856ab: Link UP
Sep 12 23:07:36.105789 kernel: eth0: renamed from tmpd676a
Sep 12 23:07:36.110264 systemd-networkd[1465]: lxcebdeb38856ab: Gained carrier
Sep 12 23:07:36.125217 systemd-networkd[1465]: lxc0d98fe607539: Link UP
Sep 12 23:07:36.134586 kernel: eth0: renamed from tmp805ea
Sep 12 23:07:36.138670 systemd-networkd[1465]: lxc0d98fe607539: Gained carrier
Sep 12 23:07:36.646870 systemd-networkd[1465]: cilium_vxlan: Gained IPv6LL
Sep 12 23:07:37.292824 kubelet[2773]: E0912 23:07:37.292777 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:07:37.351940 systemd-networkd[1465]: lxcebdeb38856ab: Gained IPv6LL
Sep 12 23:07:37.415784 systemd-networkd[1465]: lxc_health: Gained IPv6LL
Sep 12 23:07:37.799843 systemd-networkd[1465]: lxc0d98fe607539: Gained IPv6LL
Sep 12 23:07:37.821896 kubelet[2773]: E0912 23:07:37.821848 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:07:38.823939 kubelet[2773]: E0912 23:07:38.823901 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:07:40.483898 containerd[1553]: time="2025-09-12T23:07:40.483179712Z" level=info msg="connecting to shim d676ad7c53a8cf33842c8a2d11258054bba4ee8a93d9569a6c353fade3de3054" address="unix:///run/containerd/s/f21fa5dfee7e50d862ec354e9d6549c813e0407ea095b8fc5be6652a01ad426c" namespace=k8s.io protocol=ttrpc version=3
Sep 12 23:07:40.485424 containerd[1553]: time="2025-09-12T23:07:40.485324898Z" level=info msg="connecting to shim 805eac293f236365226b274ec2c6389a4a1202de6ac98b00ba7d4af515e53c1a" address="unix:///run/containerd/s/dbf705d236bb82d48f08ba47d8b96722fdff4de6b630108fdc39736ae625f2c7" namespace=k8s.io protocol=ttrpc version=3
Sep 12 23:07:40.533881 systemd[1]: Started cri-containerd-805eac293f236365226b274ec2c6389a4a1202de6ac98b00ba7d4af515e53c1a.scope - libcontainer container 805eac293f236365226b274ec2c6389a4a1202de6ac98b00ba7d4af515e53c1a.
Sep 12 23:07:40.538842 systemd[1]: Started cri-containerd-d676ad7c53a8cf33842c8a2d11258054bba4ee8a93d9569a6c353fade3de3054.scope - libcontainer container d676ad7c53a8cf33842c8a2d11258054bba4ee8a93d9569a6c353fade3de3054.
Sep 12 23:07:40.557806 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 23:07:40.562166 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 23:07:40.608813 containerd[1553]: time="2025-09-12T23:07:40.608739317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v64xb,Uid:63dcd82a-382b-44c9-a533-8f29401378cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"805eac293f236365226b274ec2c6389a4a1202de6ac98b00ba7d4af515e53c1a\""
Sep 12 23:07:40.610698 kubelet[2773]: E0912 23:07:40.609905 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:07:40.620167 containerd[1553]: time="2025-09-12T23:07:40.620067336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-45lnk,Uid:3d8c0ef8-54fb-4116-ad8e-04d4dd1ac0a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d676ad7c53a8cf33842c8a2d11258054bba4ee8a93d9569a6c353fade3de3054\""
Sep 12 23:07:40.621065 kubelet[2773]: E0912 23:07:40.621029 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:07:40.622949 containerd[1553]: time="2025-09-12T23:07:40.622890238Z" level=info msg="CreateContainer within sandbox \"805eac293f236365226b274ec2c6389a4a1202de6ac98b00ba7d4af515e53c1a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 12 23:07:40.623818 containerd[1553]: time="2025-09-12T23:07:40.623761119Z" level=info msg="CreateContainer within sandbox \"d676ad7c53a8cf33842c8a2d11258054bba4ee8a93d9569a6c353fade3de3054\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 12 23:07:40.704730 containerd[1553]: time="2025-09-12T23:07:40.704592516Z" level=info msg="Container cf76b2abc332778317921fec14d1bff901bceae85bc00836f45843f6b657a6c4: CDI devices from CRI Config.CDIDevices: []"
Sep 12 23:07:40.718795 containerd[1553]: time="2025-09-12T23:07:40.718703088Z" level=info msg="Container 1a25c7a5483698cf0d51cf7badb69d0e7ab3d7fcfffc53c1b1127afa29631b14: CDI devices from CRI Config.CDIDevices: []"
Sep 12 23:07:40.748963 containerd[1553]: time="2025-09-12T23:07:40.746893490Z" level=info msg="CreateContainer within sandbox \"d676ad7c53a8cf33842c8a2d11258054bba4ee8a93d9569a6c353fade3de3054\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cf76b2abc332778317921fec14d1bff901bceae85bc00836f45843f6b657a6c4\""
Sep 12 23:07:40.749176 containerd[1553]: time="2025-09-12T23:07:40.749012676Z" level=info msg="StartContainer for \"cf76b2abc332778317921fec14d1bff901bceae85bc00836f45843f6b657a6c4\""
Sep 12 23:07:40.749406 containerd[1553]: time="2025-09-12T23:07:40.749342106Z" level=info msg="CreateContainer within sandbox \"805eac293f236365226b274ec2c6389a4a1202de6ac98b00ba7d4af515e53c1a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1a25c7a5483698cf0d51cf7badb69d0e7ab3d7fcfffc53c1b1127afa29631b14\""
Sep 12 23:07:40.750388 containerd[1553]: time="2025-09-12T23:07:40.750350484Z" level=info msg="StartContainer for \"1a25c7a5483698cf0d51cf7badb69d0e7ab3d7fcfffc53c1b1127afa29631b14\""
Sep 12 23:07:40.751095 containerd[1553]: time="2025-09-12T23:07:40.751067366Z" level=info msg="connecting to shim cf76b2abc332778317921fec14d1bff901bceae85bc00836f45843f6b657a6c4" address="unix:///run/containerd/s/f21fa5dfee7e50d862ec354e9d6549c813e0407ea095b8fc5be6652a01ad426c" protocol=ttrpc version=3
Sep 12 23:07:40.751462 containerd[1553]: time="2025-09-12T23:07:40.751420271Z" level=info msg="connecting to shim 1a25c7a5483698cf0d51cf7badb69d0e7ab3d7fcfffc53c1b1127afa29631b14" address="unix:///run/containerd/s/dbf705d236bb82d48f08ba47d8b96722fdff4de6b630108fdc39736ae625f2c7" protocol=ttrpc version=3
Sep 12 23:07:40.789900 systemd[1]: Started cri-containerd-cf76b2abc332778317921fec14d1bff901bceae85bc00836f45843f6b657a6c4.scope - libcontainer container cf76b2abc332778317921fec14d1bff901bceae85bc00836f45843f6b657a6c4.
Sep 12 23:07:40.794411 systemd[1]: Started cri-containerd-1a25c7a5483698cf0d51cf7badb69d0e7ab3d7fcfffc53c1b1127afa29631b14.scope - libcontainer container 1a25c7a5483698cf0d51cf7badb69d0e7ab3d7fcfffc53c1b1127afa29631b14.
Sep 12 23:07:40.854364 containerd[1553]: time="2025-09-12T23:07:40.854300616Z" level=info msg="StartContainer for \"1a25c7a5483698cf0d51cf7badb69d0e7ab3d7fcfffc53c1b1127afa29631b14\" returns successfully"
Sep 12 23:07:40.862356 containerd[1553]: time="2025-09-12T23:07:40.862266706Z" level=info msg="StartContainer for \"cf76b2abc332778317921fec14d1bff901bceae85bc00836f45843f6b657a6c4\" returns successfully"
Sep 12 23:07:41.455433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount154496531.mount: Deactivated successfully.
Sep 12 23:07:41.848068 kubelet[2773]: E0912 23:07:41.847885 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:07:41.853222 kubelet[2773]: E0912 23:07:41.853169 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:07:41.888045 kubelet[2773]: I0912 23:07:41.887957 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-45lnk" podStartSLOduration=28.88793079 podStartE2EDuration="28.88793079s" podCreationTimestamp="2025-09-12 23:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:07:41.887255279 +0000 UTC m=+33.439257062" watchObservedRunningTime="2025-09-12 23:07:41.88793079 +0000 UTC m=+33.439932573"
Sep 12 23:07:41.888248 kubelet[2773]: I0912 23:07:41.888096 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-v64xb" podStartSLOduration=28.888086261 podStartE2EDuration="28.888086261s" podCreationTimestamp="2025-09-12 23:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:07:41.871371413 +0000 UTC m=+33.423373196" watchObservedRunningTime="2025-09-12 23:07:41.888086261 +0000 UTC m=+33.440088145"
Sep 12 23:07:42.855652 kubelet[2773]: E0912 23:07:42.855609 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:07:42.856125 kubelet[2773]: E0912 23:07:42.855840 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:07:43.860772 kubelet[2773]: E0912 23:07:43.860141 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:07:43.860772 kubelet[2773]: E0912 23:07:43.860340 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:07:44.098099 systemd[1]: Started sshd@9-10.0.0.139:22-10.0.0.1:44534.service - OpenSSH per-connection server daemon (10.0.0.1:44534).
Sep 12 23:07:44.182248 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 44534 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:07:44.184838 sshd-session[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:07:44.191109 systemd-logind[1543]: New session 10 of user core.
Sep 12 23:07:44.201735 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 12 23:07:44.341913 sshd[4094]: Connection closed by 10.0.0.1 port 44534
Sep 12 23:07:44.342336 sshd-session[4091]: pam_unix(sshd:session): session closed for user core
Sep 12 23:07:44.346788 systemd[1]: sshd@9-10.0.0.139:22-10.0.0.1:44534.service: Deactivated successfully.
Sep 12 23:07:44.349045 systemd[1]: session-10.scope: Deactivated successfully.
Sep 12 23:07:44.351318 systemd-logind[1543]: Session 10 logged out. Waiting for processes to exit.
Sep 12 23:07:44.352910 systemd-logind[1543]: Removed session 10.
Sep 12 23:07:49.361747 systemd[1]: Started sshd@10-10.0.0.139:22-10.0.0.1:44546.service - OpenSSH per-connection server daemon (10.0.0.1:44546).
Sep 12 23:07:49.431665 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 44546 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:07:49.433307 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:07:49.437983 systemd-logind[1543]: New session 11 of user core.
Sep 12 23:07:49.447691 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 12 23:07:49.566725 sshd[4114]: Connection closed by 10.0.0.1 port 44546
Sep 12 23:07:49.567446 sshd-session[4111]: pam_unix(sshd:session): session closed for user core
Sep 12 23:07:49.574196 systemd[1]: sshd@10-10.0.0.139:22-10.0.0.1:44546.service: Deactivated successfully.
Sep 12 23:07:49.577095 systemd[1]: session-11.scope: Deactivated successfully.
Sep 12 23:07:49.578602 systemd-logind[1543]: Session 11 logged out. Waiting for processes to exit.
Sep 12 23:07:49.581072 systemd-logind[1543]: Removed session 11.
Sep 12 23:07:54.583959 systemd[1]: Started sshd@11-10.0.0.139:22-10.0.0.1:40294.service - OpenSSH per-connection server daemon (10.0.0.1:40294).
Sep 12 23:07:54.658041 sshd[4128]: Accepted publickey for core from 10.0.0.1 port 40294 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:07:54.660278 sshd-session[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:07:54.666382 systemd-logind[1543]: New session 12 of user core.
Sep 12 23:07:54.672718 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 23:07:54.801627 sshd[4131]: Connection closed by 10.0.0.1 port 40294
Sep 12 23:07:54.802009 sshd-session[4128]: pam_unix(sshd:session): session closed for user core
Sep 12 23:07:54.806461 systemd[1]: sshd@11-10.0.0.139:22-10.0.0.1:40294.service: Deactivated successfully.
Sep 12 23:07:54.809001 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 23:07:54.809879 systemd-logind[1543]: Session 12 logged out. Waiting for processes to exit.
Sep 12 23:07:54.811188 systemd-logind[1543]: Removed session 12.
Sep 12 23:07:59.819435 systemd[1]: Started sshd@12-10.0.0.139:22-10.0.0.1:40310.service - OpenSSH per-connection server daemon (10.0.0.1:40310).
Sep 12 23:07:59.880139 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 40310 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:07:59.881776 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:07:59.886575 systemd-logind[1543]: New session 13 of user core.
Sep 12 23:07:59.896805 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 23:08:00.015888 sshd[4149]: Connection closed by 10.0.0.1 port 40310
Sep 12 23:08:00.016353 sshd-session[4146]: pam_unix(sshd:session): session closed for user core
Sep 12 23:08:00.020826 systemd[1]: sshd@12-10.0.0.139:22-10.0.0.1:40310.service: Deactivated successfully.
Sep 12 23:08:00.023076 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 23:08:00.024080 systemd-logind[1543]: Session 13 logged out. Waiting for processes to exit.
Sep 12 23:08:00.025585 systemd-logind[1543]: Removed session 13.
Sep 12 23:08:05.039826 systemd[1]: Started sshd@13-10.0.0.139:22-10.0.0.1:38826.service - OpenSSH per-connection server daemon (10.0.0.1:38826).
Sep 12 23:08:05.108187 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 38826 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:08:05.109913 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:05.115288 systemd-logind[1543]: New session 14 of user core.
Sep 12 23:08:05.125775 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 23:08:05.240677 sshd[4166]: Connection closed by 10.0.0.1 port 38826
Sep 12 23:08:05.241281 sshd-session[4163]: pam_unix(sshd:session): session closed for user core
Sep 12 23:08:05.256264 systemd[1]: sshd@13-10.0.0.139:22-10.0.0.1:38826.service: Deactivated successfully.
Sep 12 23:08:05.259310 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 23:08:05.260483 systemd-logind[1543]: Session 14 logged out. Waiting for processes to exit.
Sep 12 23:08:05.266129 systemd[1]: Started sshd@14-10.0.0.139:22-10.0.0.1:38830.service - OpenSSH per-connection server daemon (10.0.0.1:38830).
Sep 12 23:08:05.266941 systemd-logind[1543]: Removed session 14.
Sep 12 23:08:05.336009 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 38830 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:08:05.337762 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:05.342107 systemd-logind[1543]: New session 15 of user core.
Sep 12 23:08:05.352783 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 23:08:05.513654 sshd[4183]: Connection closed by 10.0.0.1 port 38830
Sep 12 23:08:05.514034 sshd-session[4180]: pam_unix(sshd:session): session closed for user core
Sep 12 23:08:05.526679 systemd[1]: sshd@14-10.0.0.139:22-10.0.0.1:38830.service: Deactivated successfully.
Sep 12 23:08:05.529780 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 23:08:05.532657 systemd-logind[1543]: Session 15 logged out. Waiting for processes to exit.
Sep 12 23:08:05.541604 systemd[1]: Started sshd@15-10.0.0.139:22-10.0.0.1:38836.service - OpenSSH per-connection server daemon (10.0.0.1:38836).
Sep 12 23:08:05.543635 systemd-logind[1543]: Removed session 15.
Sep 12 23:08:05.607296 sshd[4194]: Accepted publickey for core from 10.0.0.1 port 38836 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:08:05.609178 sshd-session[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:05.614497 systemd-logind[1543]: New session 16 of user core.
Sep 12 23:08:05.630864 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 23:08:05.756680 sshd[4197]: Connection closed by 10.0.0.1 port 38836
Sep 12 23:08:05.757124 sshd-session[4194]: pam_unix(sshd:session): session closed for user core
Sep 12 23:08:05.761228 systemd[1]: sshd@15-10.0.0.139:22-10.0.0.1:38836.service: Deactivated successfully.
Sep 12 23:08:05.763705 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 23:08:05.766063 systemd-logind[1543]: Session 16 logged out. Waiting for processes to exit.
Sep 12 23:08:05.768385 systemd-logind[1543]: Removed session 16.
Sep 12 23:08:10.773694 systemd[1]: Started sshd@16-10.0.0.139:22-10.0.0.1:58192.service - OpenSSH per-connection server daemon (10.0.0.1:58192).
Sep 12 23:08:10.831836 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 58192 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:08:10.833344 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:10.838199 systemd-logind[1543]: New session 17 of user core.
Sep 12 23:08:10.850686 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 23:08:10.968318 sshd[4217]: Connection closed by 10.0.0.1 port 58192
Sep 12 23:08:10.968773 sshd-session[4214]: pam_unix(sshd:session): session closed for user core
Sep 12 23:08:10.974271 systemd[1]: sshd@16-10.0.0.139:22-10.0.0.1:58192.service: Deactivated successfully.
Sep 12 23:08:10.976742 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 23:08:10.977525 systemd-logind[1543]: Session 17 logged out. Waiting for processes to exit.
Sep 12 23:08:10.979446 systemd-logind[1543]: Removed session 17.
Sep 12 23:08:15.986388 systemd[1]: Started sshd@17-10.0.0.139:22-10.0.0.1:58228.service - OpenSSH per-connection server daemon (10.0.0.1:58228).
Sep 12 23:08:16.053690 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 58228 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:08:16.055322 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:16.061185 systemd-logind[1543]: New session 18 of user core.
Sep 12 23:08:16.072813 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 23:08:16.183134 sshd[4235]: Connection closed by 10.0.0.1 port 58228
Sep 12 23:08:16.183606 sshd-session[4232]: pam_unix(sshd:session): session closed for user core
Sep 12 23:08:16.196481 systemd[1]: sshd@17-10.0.0.139:22-10.0.0.1:58228.service: Deactivated successfully.
Sep 12 23:08:16.198833 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 23:08:16.199722 systemd-logind[1543]: Session 18 logged out. Waiting for processes to exit.
Sep 12 23:08:16.202881 systemd[1]: Started sshd@18-10.0.0.139:22-10.0.0.1:58238.service - OpenSSH per-connection server daemon (10.0.0.1:58238).
Sep 12 23:08:16.203483 systemd-logind[1543]: Removed session 18.
Sep 12 23:08:16.262050 sshd[4249]: Accepted publickey for core from 10.0.0.1 port 58238 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:08:16.263995 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:16.269070 systemd-logind[1543]: New session 19 of user core.
Sep 12 23:08:16.279713 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 23:08:16.580592 kubelet[2773]: E0912 23:08:16.580414 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:08:16.743903 sshd[4252]: Connection closed by 10.0.0.1 port 58238
Sep 12 23:08:16.744434 sshd-session[4249]: pam_unix(sshd:session): session closed for user core
Sep 12 23:08:16.753651 systemd[1]: sshd@18-10.0.0.139:22-10.0.0.1:58238.service: Deactivated successfully.
Sep 12 23:08:16.755740 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 23:08:16.756585 systemd-logind[1543]: Session 19 logged out. Waiting for processes to exit.
Sep 12 23:08:16.759559 systemd[1]: Started sshd@19-10.0.0.139:22-10.0.0.1:58244.service - OpenSSH per-connection server daemon (10.0.0.1:58244).
Sep 12 23:08:16.760709 systemd-logind[1543]: Removed session 19.
Sep 12 23:08:16.833460 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 58244 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:08:16.835352 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:16.840572 systemd-logind[1543]: New session 20 of user core.
Sep 12 23:08:16.850805 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 23:08:17.573858 sshd[4268]: Connection closed by 10.0.0.1 port 58244
Sep 12 23:08:17.575158 sshd-session[4265]: pam_unix(sshd:session): session closed for user core
Sep 12 23:08:17.587516 systemd[1]: sshd@19-10.0.0.139:22-10.0.0.1:58244.service: Deactivated successfully.
Sep 12 23:08:17.590389 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 23:08:17.591321 systemd-logind[1543]: Session 20 logged out. Waiting for processes to exit.
Sep 12 23:08:17.595013 systemd[1]: Started sshd@20-10.0.0.139:22-10.0.0.1:58246.service - OpenSSH per-connection server daemon (10.0.0.1:58246).
Sep 12 23:08:17.596644 systemd-logind[1543]: Removed session 20.
Sep 12 23:08:17.655520 sshd[4286]: Accepted publickey for core from 10.0.0.1 port 58246 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:08:17.657578 sshd-session[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:17.663523 systemd-logind[1543]: New session 21 of user core.
Sep 12 23:08:17.672805 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 23:08:17.926846 sshd[4289]: Connection closed by 10.0.0.1 port 58246
Sep 12 23:08:17.927786 sshd-session[4286]: pam_unix(sshd:session): session closed for user core
Sep 12 23:08:17.941757 systemd[1]: sshd@20-10.0.0.139:22-10.0.0.1:58246.service: Deactivated successfully.
Sep 12 23:08:17.943862 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 23:08:17.944797 systemd-logind[1543]: Session 21 logged out. Waiting for processes to exit.
Sep 12 23:08:17.948088 systemd[1]: Started sshd@21-10.0.0.139:22-10.0.0.1:58248.service - OpenSSH per-connection server daemon (10.0.0.1:58248).
Sep 12 23:08:17.949042 systemd-logind[1543]: Removed session 21.
Sep 12 23:08:18.010362 sshd[4300]: Accepted publickey for core from 10.0.0.1 port 58248 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:08:18.012037 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:18.017845 systemd-logind[1543]: New session 22 of user core.
Sep 12 23:08:18.021750 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 23:08:18.242648 sshd[4303]: Connection closed by 10.0.0.1 port 58248
Sep 12 23:08:18.243018 sshd-session[4300]: pam_unix(sshd:session): session closed for user core
Sep 12 23:08:18.247874 systemd[1]: sshd@21-10.0.0.139:22-10.0.0.1:58248.service: Deactivated successfully.
Sep 12 23:08:18.250114 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 23:08:18.251091 systemd-logind[1543]: Session 22 logged out. Waiting for processes to exit.
Sep 12 23:08:18.252571 systemd-logind[1543]: Removed session 22.
Sep 12 23:08:18.575063 kubelet[2773]: E0912 23:08:18.574919 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:08:23.267898 systemd[1]: Started sshd@22-10.0.0.139:22-10.0.0.1:35244.service - OpenSSH per-connection server daemon (10.0.0.1:35244).
Sep 12 23:08:23.336104 sshd[4318]: Accepted publickey for core from 10.0.0.1 port 35244 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:08:23.338232 sshd-session[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:23.343625 systemd-logind[1543]: New session 23 of user core.
Sep 12 23:08:23.353848 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 23:08:23.464895 sshd[4321]: Connection closed by 10.0.0.1 port 35244
Sep 12 23:08:23.465219 sshd-session[4318]: pam_unix(sshd:session): session closed for user core
Sep 12 23:08:23.469305 systemd[1]: sshd@22-10.0.0.139:22-10.0.0.1:35244.service: Deactivated successfully.
Sep 12 23:08:23.471422 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 23:08:23.472248 systemd-logind[1543]: Session 23 logged out. Waiting for processes to exit.
Sep 12 23:08:23.473466 systemd-logind[1543]: Removed session 23.
Sep 12 23:08:28.482973 systemd[1]: Started sshd@23-10.0.0.139:22-10.0.0.1:35342.service - OpenSSH per-connection server daemon (10.0.0.1:35342).
Sep 12 23:08:28.545956 sshd[4337]: Accepted publickey for core from 10.0.0.1 port 35342 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:08:28.547722 sshd-session[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:28.553134 systemd-logind[1543]: New session 24 of user core.
Sep 12 23:08:28.568736 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 23:08:28.695453 sshd[4340]: Connection closed by 10.0.0.1 port 35342
Sep 12 23:08:28.695959 sshd-session[4337]: pam_unix(sshd:session): session closed for user core
Sep 12 23:08:28.702617 systemd[1]: sshd@23-10.0.0.139:22-10.0.0.1:35342.service: Deactivated successfully.
Sep 12 23:08:28.705950 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 23:08:28.707051 systemd-logind[1543]: Session 24 logged out. Waiting for processes to exit.
Sep 12 23:08:28.708733 systemd-logind[1543]: Removed session 24.
Sep 12 23:08:33.713242 systemd[1]: Started sshd@24-10.0.0.139:22-10.0.0.1:33498.service - OpenSSH per-connection server daemon (10.0.0.1:33498).
Sep 12 23:08:33.769972 sshd[4353]: Accepted publickey for core from 10.0.0.1 port 33498 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:08:33.771423 sshd-session[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:33.776220 systemd-logind[1543]: New session 25 of user core.
Sep 12 23:08:33.784701 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 23:08:33.897582 sshd[4356]: Connection closed by 10.0.0.1 port 33498
Sep 12 23:08:33.898141 sshd-session[4353]: pam_unix(sshd:session): session closed for user core
Sep 12 23:08:33.903356 systemd[1]: sshd@24-10.0.0.139:22-10.0.0.1:33498.service: Deactivated successfully.
Sep 12 23:08:33.906050 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 23:08:33.906918 systemd-logind[1543]: Session 25 logged out. Waiting for processes to exit.
Sep 12 23:08:33.908625 systemd-logind[1543]: Removed session 25.
Sep 12 23:08:36.575213 kubelet[2773]: E0912 23:08:36.575138 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:08:38.914631 systemd[1]: Started sshd@25-10.0.0.139:22-10.0.0.1:33500.service - OpenSSH per-connection server daemon (10.0.0.1:33500).
Sep 12 23:08:38.977899 sshd[4369]: Accepted publickey for core from 10.0.0.1 port 33500 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:08:38.979816 sshd-session[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:38.984767 systemd-logind[1543]: New session 26 of user core.
Sep 12 23:08:38.994690 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 23:08:39.112737 sshd[4372]: Connection closed by 10.0.0.1 port 33500
Sep 12 23:08:39.113381 sshd-session[4369]: pam_unix(sshd:session): session closed for user core
Sep 12 23:08:39.126131 systemd[1]: sshd@25-10.0.0.139:22-10.0.0.1:33500.service: Deactivated successfully.
Sep 12 23:08:39.128287 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 23:08:39.129219 systemd-logind[1543]: Session 26 logged out. Waiting for processes to exit.
Sep 12 23:08:39.131997 systemd[1]: Started sshd@26-10.0.0.139:22-10.0.0.1:33502.service - OpenSSH per-connection server daemon (10.0.0.1:33502).
Sep 12 23:08:39.132875 systemd-logind[1543]: Removed session 26.
Sep 12 23:08:39.196505 sshd[4385]: Accepted publickey for core from 10.0.0.1 port 33502 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:08:39.198389 sshd-session[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:39.203327 systemd-logind[1543]: New session 27 of user core.
Sep 12 23:08:39.214680 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 12 23:08:41.224955 containerd[1553]: time="2025-09-12T23:08:41.224767903Z" level=info msg="StopContainer for \"0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee\" with timeout 30 (s)"
Sep 12 23:08:41.234866 containerd[1553]: time="2025-09-12T23:08:41.234716703Z" level=info msg="Stop container \"0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee\" with signal terminated"
Sep 12 23:08:41.250355 systemd[1]: cri-containerd-0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee.scope: Deactivated successfully.
Sep 12 23:08:41.258012 containerd[1553]: time="2025-09-12T23:08:41.257798452Z" level=info msg="received exit event container_id:\"0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee\" id:\"0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee\" pid:3345 exited_at:{seconds:1757718521 nanos:257385675}"
Sep 12 23:08:41.258012 containerd[1553]: time="2025-09-12T23:08:41.257966664Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee\" id:\"0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee\" pid:3345 exited_at:{seconds:1757718521 nanos:257385675}"
Sep 12 23:08:41.282217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee-rootfs.mount: Deactivated successfully.
Sep 12 23:08:41.480043 containerd[1553]: time="2025-09-12T23:08:41.479899944Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 23:08:41.488217 containerd[1553]: time="2025-09-12T23:08:41.487381920Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3\" id:\"bbdd6895dd3093a23c01043d445490a047b955581ca79e858e331d54a0626bf1\" pid:4417 exited_at:{seconds:1757718521 nanos:487058824}"
Sep 12 23:08:41.490405 containerd[1553]: time="2025-09-12T23:08:41.490366620Z" level=info msg="StopContainer for \"5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3\" with timeout 2 (s)"
Sep 12 23:08:41.490896 containerd[1553]: time="2025-09-12T23:08:41.490851575Z" level=info msg="Stop container \"5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3\" with signal terminated"
Sep 12 23:08:41.493255 containerd[1553]: time="2025-09-12T23:08:41.493201225Z" level=info msg="StopContainer for \"0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee\" returns successfully"
Sep 12 23:08:41.501297 systemd-networkd[1465]: lxc_health: Link DOWN
Sep 12 23:08:41.501311 systemd-networkd[1465]: lxc_health: Lost carrier
Sep 12 23:08:41.504623 containerd[1553]: time="2025-09-12T23:08:41.502320263Z" level=info msg="StopPodSandbox for \"7dd81778a401d64c672a4689bdc11fbcb211bb0a457086c4c602bf4519e46abe\""
Sep 12 23:08:41.504623 containerd[1553]: time="2025-09-12T23:08:41.502446233Z" level=info msg="Container to stop \"0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 23:08:41.513512 systemd[1]: cri-containerd-7dd81778a401d64c672a4689bdc11fbcb211bb0a457086c4c602bf4519e46abe.scope: Deactivated successfully.
Sep 12 23:08:41.521679 systemd[1]: cri-containerd-5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3.scope: Deactivated successfully.
Sep 12 23:08:41.522210 systemd[1]: cri-containerd-5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3.scope: Consumed 7.450s CPU time, 125.1M memory peak, 684K read from disk, 13.3M written to disk.
Sep 12 23:08:41.523528 containerd[1553]: time="2025-09-12T23:08:41.523489525Z" level=info msg="received exit event container_id:\"5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3\" id:\"5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3\" pid:3415 exited_at:{seconds:1757718521 nanos:522681525}"
Sep 12 23:08:41.523677 containerd[1553]: time="2025-09-12T23:08:41.523504715Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7dd81778a401d64c672a4689bdc11fbcb211bb0a457086c4c602bf4519e46abe\" id:\"7dd81778a401d64c672a4689bdc11fbcb211bb0a457086c4c602bf4519e46abe\" pid:2967 exit_status:137 exited_at:{seconds:1757718521 nanos:522898279}"
Sep 12 23:08:41.557450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3-rootfs.mount: Deactivated successfully.
Sep 12 23:08:41.567232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7dd81778a401d64c672a4689bdc11fbcb211bb0a457086c4c602bf4519e46abe-rootfs.mount: Deactivated successfully.
Sep 12 23:08:41.570251 containerd[1553]: time="2025-09-12T23:08:41.570073397Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3\" id:\"5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3\" pid:3415 exited_at:{seconds:1757718521 nanos:522681525}"
Sep 12 23:08:41.570511 containerd[1553]: time="2025-09-12T23:08:41.570483138Z" level=info msg="shim disconnected" id=7dd81778a401d64c672a4689bdc11fbcb211bb0a457086c4c602bf4519e46abe namespace=k8s.io
Sep 12 23:08:41.570635 containerd[1553]: time="2025-09-12T23:08:41.570509369Z" level=warning msg="cleaning up after shim disconnected" id=7dd81778a401d64c672a4689bdc11fbcb211bb0a457086c4c602bf4519e46abe namespace=k8s.io
Sep 12 23:08:41.570707 containerd[1553]: time="2025-09-12T23:08:41.570522263Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 23:08:41.574015 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7dd81778a401d64c672a4689bdc11fbcb211bb0a457086c4c602bf4519e46abe-shm.mount: Deactivated successfully.
Sep 12 23:08:41.577725 containerd[1553]: time="2025-09-12T23:08:41.576752803Z" level=info msg="StopContainer for \"5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3\" returns successfully"
Sep 12 23:08:41.580128 containerd[1553]: time="2025-09-12T23:08:41.580098101Z" level=info msg="StopPodSandbox for \"8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067\""
Sep 12 23:08:41.580370 containerd[1553]: time="2025-09-12T23:08:41.580351865Z" level=info msg="Container to stop \"5e1892bd9c559665055baee8aff6b455daccf2c51795410442335977b095b2ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 23:08:41.580630 containerd[1553]: time="2025-09-12T23:08:41.580422640Z" level=info msg="Container to stop \"41777095a8f953aa4bda15abc4f09a5f98976785fcef78c3acb2a30c42b809ee\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 23:08:41.580630 containerd[1553]: time="2025-09-12T23:08:41.580436427Z" level=info msg="Container to stop \"9688977e80695963e5c39705128c8327375fd649e39b299ef2cac155bb960da6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 23:08:41.580630 containerd[1553]: time="2025-09-12T23:08:41.580445113Z" level=info msg="Container to stop \"5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 23:08:41.580630 containerd[1553]: time="2025-09-12T23:08:41.580453880Z" level=info msg="Container to stop \"5f754033572f796a20ba6bfbda68f21aa3218050d8c49eb5f76d35318f4a7ffb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 23:08:41.595087 systemd[1]: cri-containerd-8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067.scope: Deactivated successfully.
Sep 12 23:08:41.595660 systemd[1]: cri-containerd-8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067.scope: Consumed 23ms CPU time, 4.5M memory peak, 1.2M read from disk.
Sep 12 23:08:41.598236 containerd[1553]: time="2025-09-12T23:08:41.597718267Z" level=info msg="TearDown network for sandbox \"7dd81778a401d64c672a4689bdc11fbcb211bb0a457086c4c602bf4519e46abe\" successfully"
Sep 12 23:08:41.598236 containerd[1553]: time="2025-09-12T23:08:41.597765458Z" level=info msg="StopPodSandbox for \"7dd81778a401d64c672a4689bdc11fbcb211bb0a457086c4c602bf4519e46abe\" returns successfully"
Sep 12 23:08:41.599175 containerd[1553]: time="2025-09-12T23:08:41.599153243Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067\" id:\"8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067\" pid:2936 exit_status:137 exited_at:{seconds:1757718521 nanos:598089235}"
Sep 12 23:08:41.609595 containerd[1553]: time="2025-09-12T23:08:41.609493660Z" level=info msg="received exit event sandbox_id:\"7dd81778a401d64c672a4689bdc11fbcb211bb0a457086c4c602bf4519e46abe\" exit_status:137 exited_at:{seconds:1757718521 nanos:522898279}"
Sep 12 23:08:41.633689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067-rootfs.mount: Deactivated successfully.
Sep 12 23:08:41.640634 containerd[1553]: time="2025-09-12T23:08:41.640589771Z" level=info msg="shim disconnected" id=8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067 namespace=k8s.io
Sep 12 23:08:41.640634 containerd[1553]: time="2025-09-12T23:08:41.640625710Z" level=warning msg="cleaning up after shim disconnected" id=8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067 namespace=k8s.io
Sep 12 23:08:41.640776 containerd[1553]: time="2025-09-12T23:08:41.640637642Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 23:08:41.654231 containerd[1553]: time="2025-09-12T23:08:41.654180434Z" level=info msg="received exit event sandbox_id:\"8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067\" exit_status:137 exited_at:{seconds:1757718521 nanos:598089235}"
Sep 12 23:08:41.654496 containerd[1553]: time="2025-09-12T23:08:41.654458204Z" level=info msg="TearDown network for sandbox \"8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067\" successfully"
Sep 12 23:08:41.654496 containerd[1553]: time="2025-09-12T23:08:41.654490976Z" level=info msg="StopPodSandbox for \"8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067\" returns successfully"
Sep 12 23:08:41.689595 kubelet[2773]: I0912 23:08:41.689507 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-lib-modules\") pod \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") "
Sep 12 23:08:41.689595 kubelet[2773]: I0912 23:08:41.689583 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-hostproc\") pod \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") "
Sep 12 23:08:41.689595 kubelet[2773]: I0912 23:08:41.689607 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-clustermesh-secrets\") pod \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") "
Sep 12 23:08:41.689595 kubelet[2773]: I0912 23:08:41.689624 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e82e2d13-8fbc-4a54-b626-ea2bf0511849-cilium-config-path\") pod \"e82e2d13-8fbc-4a54-b626-ea2bf0511849\" (UID: \"e82e2d13-8fbc-4a54-b626-ea2bf0511849\") "
Sep 12 23:08:41.690353 kubelet[2773]: I0912 23:08:41.689647 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljdq7\" (UniqueName: \"kubernetes.io/projected/e82e2d13-8fbc-4a54-b626-ea2bf0511849-kube-api-access-ljdq7\") pod \"e82e2d13-8fbc-4a54-b626-ea2bf0511849\" (UID: \"e82e2d13-8fbc-4a54-b626-ea2bf0511849\") "
Sep 12 23:08:41.690353 kubelet[2773]: I0912 23:08:41.689668 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-hubble-tls\") pod \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") "
Sep 12 23:08:41.690353 kubelet[2773]: I0912 23:08:41.689685 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-cni-path\") pod \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") "
Sep 12 23:08:41.690353 kubelet[2773]: I0912 23:08:41.689710 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhdgk\" (UniqueName: \"kubernetes.io/projected/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-kube-api-access-hhdgk\") pod \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") "
Sep 12 23:08:41.690353 kubelet[2773]: I0912 23:08:41.689727 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-etc-cni-netd\") pod \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") "
Sep 12 23:08:41.690353 kubelet[2773]: I0912 23:08:41.689746 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-host-proc-sys-net\") pod \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") "
Sep 12 23:08:41.690642 kubelet[2773]: I0912 23:08:41.689717 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf" (UID: "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 23:08:41.690642 kubelet[2773]: I0912 23:08:41.689765 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-xtables-lock\") pod \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") "
Sep 12 23:08:41.690642 kubelet[2773]: I0912 23:08:41.689785 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-cilium-run\") pod \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") "
Sep 12 23:08:41.690642 kubelet[2773]: I0912 23:08:41.689805 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-cilium-config-path\") pod \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") "
Sep 12 23:08:41.690642 kubelet[2773]: I0912 23:08:41.689824 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-bpf-maps\") pod \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") "
Sep 12 23:08:41.690642 kubelet[2773]: I0912 23:08:41.689846 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-host-proc-sys-kernel\") pod \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") "
Sep 12 23:08:41.692117 kubelet[2773]: I0912 23:08:41.689863 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-cilium-cgroup\") pod \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\" (UID: \"58cd7ad8-76b6-40d6-91d3-38f73e72e0bf\") "
Sep 12 23:08:41.692117 kubelet[2773]: I0912 23:08:41.689903 2773 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 12 23:08:41.692117 kubelet[2773]: I0912 23:08:41.689947 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf" (UID: "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 23:08:41.692117 kubelet[2773]: I0912 23:08:41.689989 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf" (UID: "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 23:08:41.692117 kubelet[2773]: I0912 23:08:41.690009 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf" (UID: "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 23:08:41.692330 kubelet[2773]: I0912 23:08:41.690026 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf" (UID: "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 23:08:41.692330 kubelet[2773]: I0912 23:08:41.690048 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf" (UID: "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 23:08:41.692330 kubelet[2773]: I0912 23:08:41.690464 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-hostproc" (OuterVolumeSpecName: "hostproc") pod "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf" (UID: "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 23:08:41.692330 kubelet[2773]: I0912 23:08:41.690506 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf" (UID: "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 23:08:41.692330 kubelet[2773]: I0912 23:08:41.690579 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf" (UID: "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 23:08:41.693530 kubelet[2773]: I0912 23:08:41.693441 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf" (UID: "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 12 23:08:41.693675 kubelet[2773]: I0912 23:08:41.693637 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-cni-path" (OuterVolumeSpecName: "cni-path") pod "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf" (UID: "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 23:08:41.695242 kubelet[2773]: I0912 23:08:41.695151 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-kube-api-access-hhdgk" (OuterVolumeSpecName: "kube-api-access-hhdgk") pod "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf" (UID: "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf"). InnerVolumeSpecName "kube-api-access-hhdgk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 12 23:08:41.696671 kubelet[2773]: I0912 23:08:41.696586 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf" (UID: "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 12 23:08:41.696920 kubelet[2773]: I0912 23:08:41.696887 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e82e2d13-8fbc-4a54-b626-ea2bf0511849-kube-api-access-ljdq7" (OuterVolumeSpecName: "kube-api-access-ljdq7") pod "e82e2d13-8fbc-4a54-b626-ea2bf0511849" (UID: "e82e2d13-8fbc-4a54-b626-ea2bf0511849"). InnerVolumeSpecName "kube-api-access-ljdq7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 12 23:08:41.697182 kubelet[2773]: I0912 23:08:41.697146 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e82e2d13-8fbc-4a54-b626-ea2bf0511849-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e82e2d13-8fbc-4a54-b626-ea2bf0511849" (UID: "e82e2d13-8fbc-4a54-b626-ea2bf0511849"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 12 23:08:41.697522 kubelet[2773]: I0912 23:08:41.697462 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf" (UID: "58cd7ad8-76b6-40d6-91d3-38f73e72e0bf"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 12 23:08:41.791195 kubelet[2773]: I0912 23:08:41.790988 2773 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 12 23:08:41.791195 kubelet[2773]: I0912 23:08:41.791057 2773 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 12 23:08:41.791195 kubelet[2773]: I0912 23:08:41.791073 2773 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e82e2d13-8fbc-4a54-b626-ea2bf0511849-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 12 23:08:41.791195 kubelet[2773]: I0912 23:08:41.791090 2773 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ljdq7\" (UniqueName: \"kubernetes.io/projected/e82e2d13-8fbc-4a54-b626-ea2bf0511849-kube-api-access-ljdq7\") on node \"localhost\" DevicePath \"\""
Sep 12 23:08:41.791195 kubelet[2773]: I0912 23:08:41.791105 2773 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 12 23:08:41.791195 kubelet[2773]: I0912 23:08:41.791118 2773 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 12 23:08:41.791195 kubelet[2773]: I0912 23:08:41.791129 2773 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hhdgk\" (UniqueName: \"kubernetes.io/projected/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-kube-api-access-hhdgk\") on node \"localhost\" DevicePath \"\""
Sep 12 23:08:41.791195 kubelet[2773]: I0912 23:08:41.791140 2773 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 12 23:08:41.791581 kubelet[2773]: I0912 23:08:41.791154 2773 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 12 23:08:41.791581 kubelet[2773]: I0912 23:08:41.791165 2773 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 12 23:08:41.791581 kubelet[2773]: I0912 23:08:41.791176 2773 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 12 23:08:41.791581 kubelet[2773]: I0912 23:08:41.791225 2773 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 12 23:08:41.791581 kubelet[2773]: I0912 23:08:41.791238 2773 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 12 23:08:41.791581 kubelet[2773]: I0912 23:08:41.791250 2773 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 12 23:08:41.791581 kubelet[2773]: I0912 23:08:41.791261 2773 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 12 23:08:41.988600 kubelet[2773]: I0912 23:08:41.988563 2773 scope.go:117] "RemoveContainer" containerID="0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee"
Sep 12 23:08:41.990159 containerd[1553]: time="2025-09-12T23:08:41.990113818Z" level=info msg="RemoveContainer for \"0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee\""
Sep 12 23:08:41.998856 systemd[1]: Removed slice kubepods-besteffort-pode82e2d13_8fbc_4a54_b626_ea2bf0511849.slice - libcontainer container kubepods-besteffort-pode82e2d13_8fbc_4a54_b626_ea2bf0511849.slice.
Sep 12 23:08:42.002247 systemd[1]: Removed slice kubepods-burstable-pod58cd7ad8_76b6_40d6_91d3_38f73e72e0bf.slice - libcontainer container kubepods-burstable-pod58cd7ad8_76b6_40d6_91d3_38f73e72e0bf.slice.
Sep 12 23:08:42.002364 systemd[1]: kubepods-burstable-pod58cd7ad8_76b6_40d6_91d3_38f73e72e0bf.slice: Consumed 7.584s CPU time, 125.7M memory peak, 1.9M read from disk, 13.3M written to disk.
Sep 12 23:08:42.007504 containerd[1553]: time="2025-09-12T23:08:42.007467389Z" level=info msg="RemoveContainer for \"0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee\" returns successfully"
Sep 12 23:08:42.007841 kubelet[2773]: I0912 23:08:42.007797 2773 scope.go:117] "RemoveContainer" containerID="0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee"
Sep 12 23:08:42.008084 containerd[1553]: time="2025-09-12T23:08:42.008033759Z" level=error msg="ContainerStatus for \"0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee\": not found"
Sep 12 23:08:42.008242 kubelet[2773]: E0912 23:08:42.008192 2773 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee\": not found" containerID="0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee"
Sep 12 23:08:42.008470 kubelet[2773]: I0912 23:08:42.008230 2773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee"} err="failed to get container status \"0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee\": rpc error: code = NotFound desc = an error occurred when try to find container \"0cbb45ab119298b5700dc106af310726d30899427acd485fe310cca72e0d7dee\": not found"
Sep 12 23:08:42.008470 kubelet[2773]: I0912 23:08:42.008316 2773 scope.go:117] "RemoveContainer" containerID="5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3"
Sep 12 23:08:42.009991 containerd[1553]: time="2025-09-12T23:08:42.009946216Z" level=info msg="RemoveContainer for \"5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3\""
Sep 12 23:08:42.015240 containerd[1553]: time="2025-09-12T23:08:42.015197731Z" level=info msg="RemoveContainer for \"5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3\" returns successfully"
Sep 12 23:08:42.015429 kubelet[2773]: I0912 23:08:42.015394 2773 scope.go:117] "RemoveContainer" containerID="5e1892bd9c559665055baee8aff6b455daccf2c51795410442335977b095b2ce"
Sep 12 23:08:42.016741 containerd[1553]: time="2025-09-12T23:08:42.016716467Z" level=info msg="RemoveContainer for \"5e1892bd9c559665055baee8aff6b455daccf2c51795410442335977b095b2ce\""
Sep 12 23:08:42.021332 containerd[1553]: time="2025-09-12T23:08:42.021267776Z" level=info msg="RemoveContainer for \"5e1892bd9c559665055baee8aff6b455daccf2c51795410442335977b095b2ce\" returns successfully"
Sep 12 23:08:42.021595 kubelet[2773]: I0912 23:08:42.021490 2773 scope.go:117] "RemoveContainer" containerID="9688977e80695963e5c39705128c8327375fd649e39b299ef2cac155bb960da6"
Sep 12 23:08:42.023970 containerd[1553]: time="2025-09-12T23:08:42.023948339Z" level=info msg="RemoveContainer for \"9688977e80695963e5c39705128c8327375fd649e39b299ef2cac155bb960da6\""
Sep 12 23:08:42.028175 containerd[1553]: time="2025-09-12T23:08:42.028152336Z" level=info msg="RemoveContainer for \"9688977e80695963e5c39705128c8327375fd649e39b299ef2cac155bb960da6\" returns successfully"
Sep 12 23:08:42.028291 kubelet[2773]: I0912 23:08:42.028275 2773 scope.go:117] "RemoveContainer" containerID="5f754033572f796a20ba6bfbda68f21aa3218050d8c49eb5f76d35318f4a7ffb"
Sep 12 23:08:42.029448 containerd[1553]: time="2025-09-12T23:08:42.029427156Z" level=info msg="RemoveContainer for \"5f754033572f796a20ba6bfbda68f21aa3218050d8c49eb5f76d35318f4a7ffb\""
Sep 12 23:08:42.033097 containerd[1553]: time="2025-09-12T23:08:42.033076796Z" level=info msg="RemoveContainer for \"5f754033572f796a20ba6bfbda68f21aa3218050d8c49eb5f76d35318f4a7ffb\" returns successfully"
Sep 12 23:08:42.033220 kubelet[2773]: I0912 23:08:42.033183 2773 scope.go:117] "RemoveContainer" containerID="41777095a8f953aa4bda15abc4f09a5f98976785fcef78c3acb2a30c42b809ee"
Sep 12 23:08:42.034301 containerd[1553]: time="2025-09-12T23:08:42.034277276Z" level=info msg="RemoveContainer for \"41777095a8f953aa4bda15abc4f09a5f98976785fcef78c3acb2a30c42b809ee\""
Sep 12 23:08:42.037964 containerd[1553]: time="2025-09-12T23:08:42.037927076Z" level=info msg="RemoveContainer for \"41777095a8f953aa4bda15abc4f09a5f98976785fcef78c3acb2a30c42b809ee\" returns successfully"
Sep 12 23:08:42.038158 kubelet[2773]: I0912 23:08:42.038091 2773 scope.go:117] "RemoveContainer" containerID="5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3"
Sep 12 23:08:42.038292 containerd[1553]: time="2025-09-12T23:08:42.038258398Z" level=error msg="ContainerStatus for \"5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3\": not found"
Sep 12 23:08:42.038453 kubelet[2773]: E0912 23:08:42.038413 2773 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3\": not found" containerID="5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3"
Sep 12 23:08:42.038489 kubelet[2773]: I0912 23:08:42.038457 2773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3"} err="failed to get container status \"5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f469d57061d5a15aa9f263c4da025b82ec9b0a90dbc3393d84287d8b8b49df3\": not found"
Sep 12 23:08:42.038520 kubelet[2773]: I0912 23:08:42.038488 2773 scope.go:117] "RemoveContainer" containerID="5e1892bd9c559665055baee8aff6b455daccf2c51795410442335977b095b2ce"
Sep 12 23:08:42.038723 containerd[1553]: time="2025-09-12T23:08:42.038683108Z" level=error msg="ContainerStatus for \"5e1892bd9c559665055baee8aff6b455daccf2c51795410442335977b095b2ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5e1892bd9c559665055baee8aff6b455daccf2c51795410442335977b095b2ce\": not found"
Sep 12 23:08:42.038801 kubelet[2773]: E0912 23:08:42.038778 2773 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e1892bd9c559665055baee8aff6b455daccf2c51795410442335977b095b2ce\": not found" containerID="5e1892bd9c559665055baee8aff6b455daccf2c51795410442335977b095b2ce"
Sep 12 23:08:42.038932 kubelet[2773]: I0912 23:08:42.038811 2773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5e1892bd9c559665055baee8aff6b455daccf2c51795410442335977b095b2ce"} err="failed to get container status \"5e1892bd9c559665055baee8aff6b455daccf2c51795410442335977b095b2ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"5e1892bd9c559665055baee8aff6b455daccf2c51795410442335977b095b2ce\": not found"
Sep 12 23:08:42.038932 kubelet[2773]: I0912 23:08:42.038826 2773 scope.go:117] "RemoveContainer" containerID="9688977e80695963e5c39705128c8327375fd649e39b299ef2cac155bb960da6"
Sep 12 23:08:42.038999 containerd[1553]: time="2025-09-12T23:08:42.038962561Z" level=error msg="ContainerStatus for \"9688977e80695963e5c39705128c8327375fd649e39b299ef2cac155bb960da6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9688977e80695963e5c39705128c8327375fd649e39b299ef2cac155bb960da6\": not found"
Sep 12 23:08:42.039092 kubelet[2773]: E0912 23:08:42.039070 2773 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"9688977e80695963e5c39705128c8327375fd649e39b299ef2cac155bb960da6\": not found" containerID="9688977e80695963e5c39705128c8327375fd649e39b299ef2cac155bb960da6" Sep 12 23:08:42.039130 kubelet[2773]: I0912 23:08:42.039098 2773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9688977e80695963e5c39705128c8327375fd649e39b299ef2cac155bb960da6"} err="failed to get container status \"9688977e80695963e5c39705128c8327375fd649e39b299ef2cac155bb960da6\": rpc error: code = NotFound desc = an error occurred when try to find container \"9688977e80695963e5c39705128c8327375fd649e39b299ef2cac155bb960da6\": not found" Sep 12 23:08:42.039130 kubelet[2773]: I0912 23:08:42.039119 2773 scope.go:117] "RemoveContainer" containerID="5f754033572f796a20ba6bfbda68f21aa3218050d8c49eb5f76d35318f4a7ffb" Sep 12 23:08:42.039276 containerd[1553]: time="2025-09-12T23:08:42.039252424Z" level=error msg="ContainerStatus for \"5f754033572f796a20ba6bfbda68f21aa3218050d8c49eb5f76d35318f4a7ffb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f754033572f796a20ba6bfbda68f21aa3218050d8c49eb5f76d35318f4a7ffb\": not found" Sep 12 23:08:42.039375 kubelet[2773]: E0912 23:08:42.039345 2773 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f754033572f796a20ba6bfbda68f21aa3218050d8c49eb5f76d35318f4a7ffb\": not found" containerID="5f754033572f796a20ba6bfbda68f21aa3218050d8c49eb5f76d35318f4a7ffb" Sep 12 23:08:42.039426 kubelet[2773]: I0912 23:08:42.039377 2773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f754033572f796a20ba6bfbda68f21aa3218050d8c49eb5f76d35318f4a7ffb"} err="failed to get container status \"5f754033572f796a20ba6bfbda68f21aa3218050d8c49eb5f76d35318f4a7ffb\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"5f754033572f796a20ba6bfbda68f21aa3218050d8c49eb5f76d35318f4a7ffb\": not found" Sep 12 23:08:42.039426 kubelet[2773]: I0912 23:08:42.039391 2773 scope.go:117] "RemoveContainer" containerID="41777095a8f953aa4bda15abc4f09a5f98976785fcef78c3acb2a30c42b809ee" Sep 12 23:08:42.039581 containerd[1553]: time="2025-09-12T23:08:42.039521056Z" level=error msg="ContainerStatus for \"41777095a8f953aa4bda15abc4f09a5f98976785fcef78c3acb2a30c42b809ee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"41777095a8f953aa4bda15abc4f09a5f98976785fcef78c3acb2a30c42b809ee\": not found" Sep 12 23:08:42.039694 kubelet[2773]: E0912 23:08:42.039635 2773 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"41777095a8f953aa4bda15abc4f09a5f98976785fcef78c3acb2a30c42b809ee\": not found" containerID="41777095a8f953aa4bda15abc4f09a5f98976785fcef78c3acb2a30c42b809ee" Sep 12 23:08:42.039694 kubelet[2773]: I0912 23:08:42.039656 2773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"41777095a8f953aa4bda15abc4f09a5f98976785fcef78c3acb2a30c42b809ee"} err="failed to get container status \"41777095a8f953aa4bda15abc4f09a5f98976785fcef78c3acb2a30c42b809ee\": rpc error: code = NotFound desc = an error occurred when try to find container \"41777095a8f953aa4bda15abc4f09a5f98976785fcef78c3acb2a30c42b809ee\": not found" Sep 12 23:08:42.282572 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b09b8d5c681b63b137b209e51269e8e4386ee3d67b574bfa3c684b922f14067-shm.mount: Deactivated successfully. Sep 12 23:08:42.282706 systemd[1]: var-lib-kubelet-pods-e82e2d13\x2d8fbc\x2d4a54\x2db626\x2dea2bf0511849-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dljdq7.mount: Deactivated successfully. 
Sep 12 23:08:42.282790 systemd[1]: var-lib-kubelet-pods-58cd7ad8\x2d76b6\x2d40d6\x2d91d3\x2d38f73e72e0bf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhhdgk.mount: Deactivated successfully. Sep 12 23:08:42.282872 systemd[1]: var-lib-kubelet-pods-58cd7ad8\x2d76b6\x2d40d6\x2d91d3\x2d38f73e72e0bf-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 23:08:42.282947 systemd[1]: var-lib-kubelet-pods-58cd7ad8\x2d76b6\x2d40d6\x2d91d3\x2d38f73e72e0bf-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 12 23:08:42.578241 kubelet[2773]: I0912 23:08:42.578092 2773 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58cd7ad8-76b6-40d6-91d3-38f73e72e0bf" path="/var/lib/kubelet/pods/58cd7ad8-76b6-40d6-91d3-38f73e72e0bf/volumes" Sep 12 23:08:42.579234 kubelet[2773]: I0912 23:08:42.579187 2773 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e82e2d13-8fbc-4a54-b626-ea2bf0511849" path="/var/lib/kubelet/pods/e82e2d13-8fbc-4a54-b626-ea2bf0511849/volumes" Sep 12 23:08:42.921858 sshd[4388]: Connection closed by 10.0.0.1 port 33502 Sep 12 23:08:42.922395 sshd-session[4385]: pam_unix(sshd:session): session closed for user core Sep 12 23:08:42.933336 systemd[1]: sshd@26-10.0.0.139:22-10.0.0.1:33502.service: Deactivated successfully. Sep 12 23:08:42.935807 systemd[1]: session-27.scope: Deactivated successfully. Sep 12 23:08:42.936100 systemd[1]: session-27.scope: Consumed 1.089s CPU time, 27.1M memory peak. Sep 12 23:08:42.936947 systemd-logind[1543]: Session 27 logged out. Waiting for processes to exit. Sep 12 23:08:42.941049 systemd[1]: Started sshd@27-10.0.0.139:22-10.0.0.1:56194.service - OpenSSH per-connection server daemon (10.0.0.1:56194). Sep 12 23:08:42.941935 systemd-logind[1543]: Removed session 27. 
Sep 12 23:08:43.004502 sshd[4546]: Accepted publickey for core from 10.0.0.1 port 56194 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:08:43.006751 sshd-session[4546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:08:43.013234 systemd-logind[1543]: New session 28 of user core. Sep 12 23:08:43.028945 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 12 23:08:43.465234 sshd[4551]: Connection closed by 10.0.0.1 port 56194 Sep 12 23:08:43.466123 sshd-session[4546]: pam_unix(sshd:session): session closed for user core Sep 12 23:08:43.478942 systemd[1]: sshd@27-10.0.0.139:22-10.0.0.1:56194.service: Deactivated successfully. Sep 12 23:08:43.485836 kubelet[2773]: I0912 23:08:43.485753 2773 memory_manager.go:355] "RemoveStaleState removing state" podUID="58cd7ad8-76b6-40d6-91d3-38f73e72e0bf" containerName="cilium-agent" Sep 12 23:08:43.485836 kubelet[2773]: I0912 23:08:43.485788 2773 memory_manager.go:355] "RemoveStaleState removing state" podUID="e82e2d13-8fbc-4a54-b626-ea2bf0511849" containerName="cilium-operator" Sep 12 23:08:43.488396 systemd[1]: session-28.scope: Deactivated successfully. Sep 12 23:08:43.492776 systemd-logind[1543]: Session 28 logged out. Waiting for processes to exit. Sep 12 23:08:43.496871 systemd[1]: Started sshd@28-10.0.0.139:22-10.0.0.1:56206.service - OpenSSH per-connection server daemon (10.0.0.1:56206). Sep 12 23:08:43.500978 systemd-logind[1543]: Removed session 28. Sep 12 23:08:43.513920 systemd[1]: Created slice kubepods-burstable-pod914dae82_cf36_4bc6_a272_b0c211d600ea.slice - libcontainer container kubepods-burstable-pod914dae82_cf36_4bc6_a272_b0c211d600ea.slice. 
Sep 12 23:08:43.574717 kubelet[2773]: E0912 23:08:43.574683 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:08:43.587853 sshd[4563]: Accepted publickey for core from 10.0.0.1 port 56206 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:08:43.589599 sshd-session[4563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:08:43.594910 systemd-logind[1543]: New session 29 of user core. Sep 12 23:08:43.600723 kubelet[2773]: I0912 23:08:43.600689 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/914dae82-cf36-4bc6-a272-b0c211d600ea-clustermesh-secrets\") pod \"cilium-t7h8j\" (UID: \"914dae82-cf36-4bc6-a272-b0c211d600ea\") " pod="kube-system/cilium-t7h8j" Sep 12 23:08:43.600789 kubelet[2773]: I0912 23:08:43.600732 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/914dae82-cf36-4bc6-a272-b0c211d600ea-cilium-run\") pod \"cilium-t7h8j\" (UID: \"914dae82-cf36-4bc6-a272-b0c211d600ea\") " pod="kube-system/cilium-t7h8j" Sep 12 23:08:43.600789 kubelet[2773]: I0912 23:08:43.600755 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/914dae82-cf36-4bc6-a272-b0c211d600ea-etc-cni-netd\") pod \"cilium-t7h8j\" (UID: \"914dae82-cf36-4bc6-a272-b0c211d600ea\") " pod="kube-system/cilium-t7h8j" Sep 12 23:08:43.600854 kubelet[2773]: I0912 23:08:43.600824 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/914dae82-cf36-4bc6-a272-b0c211d600ea-lib-modules\") pod \"cilium-t7h8j\" (UID: 
\"914dae82-cf36-4bc6-a272-b0c211d600ea\") " pod="kube-system/cilium-t7h8j" Sep 12 23:08:43.600877 kubelet[2773]: I0912 23:08:43.600847 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/914dae82-cf36-4bc6-a272-b0c211d600ea-xtables-lock\") pod \"cilium-t7h8j\" (UID: \"914dae82-cf36-4bc6-a272-b0c211d600ea\") " pod="kube-system/cilium-t7h8j" Sep 12 23:08:43.600899 kubelet[2773]: I0912 23:08:43.600883 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/914dae82-cf36-4bc6-a272-b0c211d600ea-host-proc-sys-net\") pod \"cilium-t7h8j\" (UID: \"914dae82-cf36-4bc6-a272-b0c211d600ea\") " pod="kube-system/cilium-t7h8j" Sep 12 23:08:43.600949 kubelet[2773]: I0912 23:08:43.600923 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/914dae82-cf36-4bc6-a272-b0c211d600ea-hubble-tls\") pod \"cilium-t7h8j\" (UID: \"914dae82-cf36-4bc6-a272-b0c211d600ea\") " pod="kube-system/cilium-t7h8j" Sep 12 23:08:43.600988 kubelet[2773]: I0912 23:08:43.600964 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gbw8\" (UniqueName: \"kubernetes.io/projected/914dae82-cf36-4bc6-a272-b0c211d600ea-kube-api-access-2gbw8\") pod \"cilium-t7h8j\" (UID: \"914dae82-cf36-4bc6-a272-b0c211d600ea\") " pod="kube-system/cilium-t7h8j" Sep 12 23:08:43.600988 kubelet[2773]: I0912 23:08:43.600983 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/914dae82-cf36-4bc6-a272-b0c211d600ea-cilium-config-path\") pod \"cilium-t7h8j\" (UID: \"914dae82-cf36-4bc6-a272-b0c211d600ea\") " pod="kube-system/cilium-t7h8j" Sep 12 23:08:43.601043 
kubelet[2773]: I0912 23:08:43.601001 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/914dae82-cf36-4bc6-a272-b0c211d600ea-cilium-cgroup\") pod \"cilium-t7h8j\" (UID: \"914dae82-cf36-4bc6-a272-b0c211d600ea\") " pod="kube-system/cilium-t7h8j" Sep 12 23:08:43.601043 kubelet[2773]: I0912 23:08:43.601030 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/914dae82-cf36-4bc6-a272-b0c211d600ea-cilium-ipsec-secrets\") pod \"cilium-t7h8j\" (UID: \"914dae82-cf36-4bc6-a272-b0c211d600ea\") " pod="kube-system/cilium-t7h8j" Sep 12 23:08:43.601084 kubelet[2773]: I0912 23:08:43.601061 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/914dae82-cf36-4bc6-a272-b0c211d600ea-cni-path\") pod \"cilium-t7h8j\" (UID: \"914dae82-cf36-4bc6-a272-b0c211d600ea\") " pod="kube-system/cilium-t7h8j" Sep 12 23:08:43.601084 kubelet[2773]: I0912 23:08:43.601080 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/914dae82-cf36-4bc6-a272-b0c211d600ea-host-proc-sys-kernel\") pod \"cilium-t7h8j\" (UID: \"914dae82-cf36-4bc6-a272-b0c211d600ea\") " pod="kube-system/cilium-t7h8j" Sep 12 23:08:43.601138 kubelet[2773]: I0912 23:08:43.601104 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/914dae82-cf36-4bc6-a272-b0c211d600ea-hostproc\") pod \"cilium-t7h8j\" (UID: \"914dae82-cf36-4bc6-a272-b0c211d600ea\") " pod="kube-system/cilium-t7h8j" Sep 12 23:08:43.601138 kubelet[2773]: I0912 23:08:43.601127 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/914dae82-cf36-4bc6-a272-b0c211d600ea-bpf-maps\") pod \"cilium-t7h8j\" (UID: \"914dae82-cf36-4bc6-a272-b0c211d600ea\") " pod="kube-system/cilium-t7h8j" Sep 12 23:08:43.608758 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 12 23:08:43.645711 kubelet[2773]: E0912 23:08:43.645672 2773 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 23:08:43.661511 sshd[4567]: Connection closed by 10.0.0.1 port 56206 Sep 12 23:08:43.661914 sshd-session[4563]: pam_unix(sshd:session): session closed for user core Sep 12 23:08:43.673160 systemd[1]: sshd@28-10.0.0.139:22-10.0.0.1:56206.service: Deactivated successfully. Sep 12 23:08:43.675103 systemd[1]: session-29.scope: Deactivated successfully. Sep 12 23:08:43.676030 systemd-logind[1543]: Session 29 logged out. Waiting for processes to exit. Sep 12 23:08:43.679605 systemd[1]: Started sshd@29-10.0.0.139:22-10.0.0.1:56216.service - OpenSSH per-connection server daemon (10.0.0.1:56216). Sep 12 23:08:43.680232 systemd-logind[1543]: Removed session 29. Sep 12 23:08:43.751741 sshd[4574]: Accepted publickey for core from 10.0.0.1 port 56216 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:08:43.753308 sshd-session[4574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:08:43.762063 systemd-logind[1543]: New session 30 of user core. Sep 12 23:08:43.774930 systemd[1]: Started session-30.scope - Session 30 of User core. 
Sep 12 23:08:43.823798 kubelet[2773]: E0912 23:08:43.823731 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:08:43.824575 containerd[1553]: time="2025-09-12T23:08:43.824434437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t7h8j,Uid:914dae82-cf36-4bc6-a272-b0c211d600ea,Namespace:kube-system,Attempt:0,}" Sep 12 23:08:43.847756 containerd[1553]: time="2025-09-12T23:08:43.847664475Z" level=info msg="connecting to shim 9daa56ec374a329b6e6a27a5dfe2e7f7e80030d26a613e1bd8a0b6e74db7a60d" address="unix:///run/containerd/s/8fa9f16964aebd2c6a53ba11a866b2ca6e8965e71c2375a5a8044e4c480cfeaa" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:08:43.881882 systemd[1]: Started cri-containerd-9daa56ec374a329b6e6a27a5dfe2e7f7e80030d26a613e1bd8a0b6e74db7a60d.scope - libcontainer container 9daa56ec374a329b6e6a27a5dfe2e7f7e80030d26a613e1bd8a0b6e74db7a60d. 
Sep 12 23:08:43.917616 containerd[1553]: time="2025-09-12T23:08:43.917552416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t7h8j,Uid:914dae82-cf36-4bc6-a272-b0c211d600ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"9daa56ec374a329b6e6a27a5dfe2e7f7e80030d26a613e1bd8a0b6e74db7a60d\"" Sep 12 23:08:43.918788 kubelet[2773]: E0912 23:08:43.918748 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:08:43.923063 containerd[1553]: time="2025-09-12T23:08:43.923017339Z" level=info msg="CreateContainer within sandbox \"9daa56ec374a329b6e6a27a5dfe2e7f7e80030d26a613e1bd8a0b6e74db7a60d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 23:08:43.930767 containerd[1553]: time="2025-09-12T23:08:43.930707137Z" level=info msg="Container e5b7489f731f37de834fc2afed0b8532566d31bad493e1d669e46375f8bd71f3: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:08:43.941430 containerd[1553]: time="2025-09-12T23:08:43.941373161Z" level=info msg="CreateContainer within sandbox \"9daa56ec374a329b6e6a27a5dfe2e7f7e80030d26a613e1bd8a0b6e74db7a60d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e5b7489f731f37de834fc2afed0b8532566d31bad493e1d669e46375f8bd71f3\"" Sep 12 23:08:43.942050 containerd[1553]: time="2025-09-12T23:08:43.942005006Z" level=info msg="StartContainer for \"e5b7489f731f37de834fc2afed0b8532566d31bad493e1d669e46375f8bd71f3\"" Sep 12 23:08:43.943221 containerd[1553]: time="2025-09-12T23:08:43.943179476Z" level=info msg="connecting to shim e5b7489f731f37de834fc2afed0b8532566d31bad493e1d669e46375f8bd71f3" address="unix:///run/containerd/s/8fa9f16964aebd2c6a53ba11a866b2ca6e8965e71c2375a5a8044e4c480cfeaa" protocol=ttrpc version=3 Sep 12 23:08:43.969868 systemd[1]: Started cri-containerd-e5b7489f731f37de834fc2afed0b8532566d31bad493e1d669e46375f8bd71f3.scope - libcontainer 
container e5b7489f731f37de834fc2afed0b8532566d31bad493e1d669e46375f8bd71f3. Sep 12 23:08:44.007071 containerd[1553]: time="2025-09-12T23:08:44.006899224Z" level=info msg="StartContainer for \"e5b7489f731f37de834fc2afed0b8532566d31bad493e1d669e46375f8bd71f3\" returns successfully" Sep 12 23:08:44.017053 systemd[1]: cri-containerd-e5b7489f731f37de834fc2afed0b8532566d31bad493e1d669e46375f8bd71f3.scope: Deactivated successfully. Sep 12 23:08:44.020670 containerd[1553]: time="2025-09-12T23:08:44.020618626Z" level=info msg="received exit event container_id:\"e5b7489f731f37de834fc2afed0b8532566d31bad493e1d669e46375f8bd71f3\" id:\"e5b7489f731f37de834fc2afed0b8532566d31bad493e1d669e46375f8bd71f3\" pid:4643 exited_at:{seconds:1757718524 nanos:20206209}" Sep 12 23:08:44.020936 containerd[1553]: time="2025-09-12T23:08:44.020871289Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5b7489f731f37de834fc2afed0b8532566d31bad493e1d669e46375f8bd71f3\" id:\"e5b7489f731f37de834fc2afed0b8532566d31bad493e1d669e46375f8bd71f3\" pid:4643 exited_at:{seconds:1757718524 nanos:20206209}" Sep 12 23:08:45.010224 kubelet[2773]: E0912 23:08:45.010175 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:08:45.012806 containerd[1553]: time="2025-09-12T23:08:45.012733623Z" level=info msg="CreateContainer within sandbox \"9daa56ec374a329b6e6a27a5dfe2e7f7e80030d26a613e1bd8a0b6e74db7a60d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 23:08:45.051575 containerd[1553]: time="2025-09-12T23:08:45.051285028Z" level=info msg="Container 94dc624ba1c9878dc6aa0708b7dd2a1347f77e4d392d36395f653acb84c464ea: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:08:45.056370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2279584897.mount: Deactivated successfully. 
Sep 12 23:08:45.063124 containerd[1553]: time="2025-09-12T23:08:45.063049013Z" level=info msg="CreateContainer within sandbox \"9daa56ec374a329b6e6a27a5dfe2e7f7e80030d26a613e1bd8a0b6e74db7a60d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"94dc624ba1c9878dc6aa0708b7dd2a1347f77e4d392d36395f653acb84c464ea\"" Sep 12 23:08:45.063904 containerd[1553]: time="2025-09-12T23:08:45.063873967Z" level=info msg="StartContainer for \"94dc624ba1c9878dc6aa0708b7dd2a1347f77e4d392d36395f653acb84c464ea\"" Sep 12 23:08:45.064972 containerd[1553]: time="2025-09-12T23:08:45.064944690Z" level=info msg="connecting to shim 94dc624ba1c9878dc6aa0708b7dd2a1347f77e4d392d36395f653acb84c464ea" address="unix:///run/containerd/s/8fa9f16964aebd2c6a53ba11a866b2ca6e8965e71c2375a5a8044e4c480cfeaa" protocol=ttrpc version=3 Sep 12 23:08:45.094705 systemd[1]: Started cri-containerd-94dc624ba1c9878dc6aa0708b7dd2a1347f77e4d392d36395f653acb84c464ea.scope - libcontainer container 94dc624ba1c9878dc6aa0708b7dd2a1347f77e4d392d36395f653acb84c464ea. Sep 12 23:08:45.132352 containerd[1553]: time="2025-09-12T23:08:45.132294461Z" level=info msg="StartContainer for \"94dc624ba1c9878dc6aa0708b7dd2a1347f77e4d392d36395f653acb84c464ea\" returns successfully" Sep 12 23:08:45.138587 systemd[1]: cri-containerd-94dc624ba1c9878dc6aa0708b7dd2a1347f77e4d392d36395f653acb84c464ea.scope: Deactivated successfully. 
Sep 12 23:08:45.138948 containerd[1553]: time="2025-09-12T23:08:45.138894744Z" level=info msg="received exit event container_id:\"94dc624ba1c9878dc6aa0708b7dd2a1347f77e4d392d36395f653acb84c464ea\" id:\"94dc624ba1c9878dc6aa0708b7dd2a1347f77e4d392d36395f653acb84c464ea\" pid:4688 exited_at:{seconds:1757718525 nanos:138706355}" Sep 12 23:08:45.139604 containerd[1553]: time="2025-09-12T23:08:45.139509597Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94dc624ba1c9878dc6aa0708b7dd2a1347f77e4d392d36395f653acb84c464ea\" id:\"94dc624ba1c9878dc6aa0708b7dd2a1347f77e4d392d36395f653acb84c464ea\" pid:4688 exited_at:{seconds:1757718525 nanos:138706355}" Sep 12 23:08:45.161877 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94dc624ba1c9878dc6aa0708b7dd2a1347f77e4d392d36395f653acb84c464ea-rootfs.mount: Deactivated successfully. Sep 12 23:08:46.015860 kubelet[2773]: E0912 23:08:46.015801 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:08:46.017424 containerd[1553]: time="2025-09-12T23:08:46.017393476Z" level=info msg="CreateContainer within sandbox \"9daa56ec374a329b6e6a27a5dfe2e7f7e80030d26a613e1bd8a0b6e74db7a60d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 23:08:46.395020 containerd[1553]: time="2025-09-12T23:08:46.394877332Z" level=info msg="Container 32a4064b5b0cd2a1cfb7d14cc8688cfe39510bbfdf0f738830e960772dd426d3: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:08:46.573428 containerd[1553]: time="2025-09-12T23:08:46.573347531Z" level=info msg="CreateContainer within sandbox \"9daa56ec374a329b6e6a27a5dfe2e7f7e80030d26a613e1bd8a0b6e74db7a60d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"32a4064b5b0cd2a1cfb7d14cc8688cfe39510bbfdf0f738830e960772dd426d3\"" Sep 12 23:08:46.574799 containerd[1553]: time="2025-09-12T23:08:46.574460434Z" 
level=info msg="StartContainer for \"32a4064b5b0cd2a1cfb7d14cc8688cfe39510bbfdf0f738830e960772dd426d3\"" Sep 12 23:08:46.577056 containerd[1553]: time="2025-09-12T23:08:46.576994720Z" level=info msg="connecting to shim 32a4064b5b0cd2a1cfb7d14cc8688cfe39510bbfdf0f738830e960772dd426d3" address="unix:///run/containerd/s/8fa9f16964aebd2c6a53ba11a866b2ca6e8965e71c2375a5a8044e4c480cfeaa" protocol=ttrpc version=3 Sep 12 23:08:46.603899 systemd[1]: Started cri-containerd-32a4064b5b0cd2a1cfb7d14cc8688cfe39510bbfdf0f738830e960772dd426d3.scope - libcontainer container 32a4064b5b0cd2a1cfb7d14cc8688cfe39510bbfdf0f738830e960772dd426d3. Sep 12 23:08:46.757951 containerd[1553]: time="2025-09-12T23:08:46.757888593Z" level=info msg="StartContainer for \"32a4064b5b0cd2a1cfb7d14cc8688cfe39510bbfdf0f738830e960772dd426d3\" returns successfully" Sep 12 23:08:46.792262 systemd[1]: cri-containerd-32a4064b5b0cd2a1cfb7d14cc8688cfe39510bbfdf0f738830e960772dd426d3.scope: Deactivated successfully. Sep 12 23:08:46.794494 containerd[1553]: time="2025-09-12T23:08:46.794343799Z" level=info msg="received exit event container_id:\"32a4064b5b0cd2a1cfb7d14cc8688cfe39510bbfdf0f738830e960772dd426d3\" id:\"32a4064b5b0cd2a1cfb7d14cc8688cfe39510bbfdf0f738830e960772dd426d3\" pid:4733 exited_at:{seconds:1757718526 nanos:793955808}" Sep 12 23:08:46.794494 containerd[1553]: time="2025-09-12T23:08:46.794441515Z" level=info msg="TaskExit event in podsandbox handler container_id:\"32a4064b5b0cd2a1cfb7d14cc8688cfe39510bbfdf0f738830e960772dd426d3\" id:\"32a4064b5b0cd2a1cfb7d14cc8688cfe39510bbfdf0f738830e960772dd426d3\" pid:4733 exited_at:{seconds:1757718526 nanos:793955808}" Sep 12 23:08:46.829707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32a4064b5b0cd2a1cfb7d14cc8688cfe39510bbfdf0f738830e960772dd426d3-rootfs.mount: Deactivated successfully. 
Sep 12 23:08:47.022133 kubelet[2773]: E0912 23:08:47.021804 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:08:47.024257 containerd[1553]: time="2025-09-12T23:08:47.024207414Z" level=info msg="CreateContainer within sandbox \"9daa56ec374a329b6e6a27a5dfe2e7f7e80030d26a613e1bd8a0b6e74db7a60d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 23:08:47.040337 containerd[1553]: time="2025-09-12T23:08:47.040230916Z" level=info msg="Container aca2fb7e3c5c73587cf7b616ce0b4eea7c0ebae2389fe2c746b7c56d77985543: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:08:47.100600 containerd[1553]: time="2025-09-12T23:08:47.100513408Z" level=info msg="CreateContainer within sandbox \"9daa56ec374a329b6e6a27a5dfe2e7f7e80030d26a613e1bd8a0b6e74db7a60d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"aca2fb7e3c5c73587cf7b616ce0b4eea7c0ebae2389fe2c746b7c56d77985543\"" Sep 12 23:08:47.101694 containerd[1553]: time="2025-09-12T23:08:47.101379310Z" level=info msg="StartContainer for \"aca2fb7e3c5c73587cf7b616ce0b4eea7c0ebae2389fe2c746b7c56d77985543\"" Sep 12 23:08:47.102850 containerd[1553]: time="2025-09-12T23:08:47.102809961Z" level=info msg="connecting to shim aca2fb7e3c5c73587cf7b616ce0b4eea7c0ebae2389fe2c746b7c56d77985543" address="unix:///run/containerd/s/8fa9f16964aebd2c6a53ba11a866b2ca6e8965e71c2375a5a8044e4c480cfeaa" protocol=ttrpc version=3 Sep 12 23:08:47.128215 systemd[1]: Started cri-containerd-aca2fb7e3c5c73587cf7b616ce0b4eea7c0ebae2389fe2c746b7c56d77985543.scope - libcontainer container aca2fb7e3c5c73587cf7b616ce0b4eea7c0ebae2389fe2c746b7c56d77985543. Sep 12 23:08:47.195996 systemd[1]: cri-containerd-aca2fb7e3c5c73587cf7b616ce0b4eea7c0ebae2389fe2c746b7c56d77985543.scope: Deactivated successfully. 
Sep 12 23:08:47.197292 containerd[1553]: time="2025-09-12T23:08:47.196832837Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aca2fb7e3c5c73587cf7b616ce0b4eea7c0ebae2389fe2c746b7c56d77985543\" id:\"aca2fb7e3c5c73587cf7b616ce0b4eea7c0ebae2389fe2c746b7c56d77985543\" pid:4773 exited_at:{seconds:1757718527 nanos:196262349}" Sep 12 23:08:47.197292 containerd[1553]: time="2025-09-12T23:08:47.197070381Z" level=info msg="received exit event container_id:\"aca2fb7e3c5c73587cf7b616ce0b4eea7c0ebae2389fe2c746b7c56d77985543\" id:\"aca2fb7e3c5c73587cf7b616ce0b4eea7c0ebae2389fe2c746b7c56d77985543\" pid:4773 exited_at:{seconds:1757718527 nanos:196262349}" Sep 12 23:08:47.199222 containerd[1553]: time="2025-09-12T23:08:47.199170259Z" level=info msg="StartContainer for \"aca2fb7e3c5c73587cf7b616ce0b4eea7c0ebae2389fe2c746b7c56d77985543\" returns successfully" Sep 12 23:08:47.596107 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aca2fb7e3c5c73587cf7b616ce0b4eea7c0ebae2389fe2c746b7c56d77985543-rootfs.mount: Deactivated successfully. 
Sep 12 23:08:48.029507 kubelet[2773]: E0912 23:08:48.029439 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:08:48.035267 containerd[1553]: time="2025-09-12T23:08:48.035183774Z" level=info msg="CreateContainer within sandbox \"9daa56ec374a329b6e6a27a5dfe2e7f7e80030d26a613e1bd8a0b6e74db7a60d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 23:08:48.102643 containerd[1553]: time="2025-09-12T23:08:48.102586755Z" level=info msg="Container cbb3bbf8c308c418e40803e87953b41467f775f324a92de97aba4ddf288d5e77: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:08:48.111752 containerd[1553]: time="2025-09-12T23:08:48.111699950Z" level=info msg="CreateContainer within sandbox \"9daa56ec374a329b6e6a27a5dfe2e7f7e80030d26a613e1bd8a0b6e74db7a60d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cbb3bbf8c308c418e40803e87953b41467f775f324a92de97aba4ddf288d5e77\"" Sep 12 23:08:48.112448 containerd[1553]: time="2025-09-12T23:08:48.112409274Z" level=info msg="StartContainer for \"cbb3bbf8c308c418e40803e87953b41467f775f324a92de97aba4ddf288d5e77\"" Sep 12 23:08:48.113806 containerd[1553]: time="2025-09-12T23:08:48.113769240Z" level=info msg="connecting to shim cbb3bbf8c308c418e40803e87953b41467f775f324a92de97aba4ddf288d5e77" address="unix:///run/containerd/s/8fa9f16964aebd2c6a53ba11a866b2ca6e8965e71c2375a5a8044e4c480cfeaa" protocol=ttrpc version=3 Sep 12 23:08:48.137813 systemd[1]: Started cri-containerd-cbb3bbf8c308c418e40803e87953b41467f775f324a92de97aba4ddf288d5e77.scope - libcontainer container cbb3bbf8c308c418e40803e87953b41467f775f324a92de97aba4ddf288d5e77. 
Sep 12 23:08:48.177772 containerd[1553]: time="2025-09-12T23:08:48.177709771Z" level=info msg="StartContainer for \"cbb3bbf8c308c418e40803e87953b41467f775f324a92de97aba4ddf288d5e77\" returns successfully"
Sep 12 23:08:48.266215 containerd[1553]: time="2025-09-12T23:08:48.266162681Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbb3bbf8c308c418e40803e87953b41467f775f324a92de97aba4ddf288d5e77\" id:\"ec5ec5334a9e9390c49cfe50d5a0e51e4c928cb0f4421ce367cc03c400a7be99\" pid:4841 exited_at:{seconds:1757718528 nanos:265791663}"
Sep 12 23:08:48.694634 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Sep 12 23:08:49.040765 kubelet[2773]: E0912 23:08:49.040607 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:08:50.043153 kubelet[2773]: E0912 23:08:50.043030 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:08:50.598899 containerd[1553]: time="2025-09-12T23:08:50.598841516Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbb3bbf8c308c418e40803e87953b41467f775f324a92de97aba4ddf288d5e77\" id:\"60fccec2cf621f43ec7d92cd612e7858ce6b16bbb5a3888879cda478dc6886ab\" pid:5003 exit_status:1 exited_at:{seconds:1757718530 nanos:598230759}"
Sep 12 23:08:51.045654 kubelet[2773]: E0912 23:08:51.045485 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:08:52.093723 systemd-networkd[1465]: lxc_health: Link UP
Sep 12 23:08:52.094194 systemd-networkd[1465]: lxc_health: Gained carrier
Sep 12 23:08:52.578839 kubelet[2773]: E0912 23:08:52.578787 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:08:52.743823 containerd[1553]: time="2025-09-12T23:08:52.743736094Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbb3bbf8c308c418e40803e87953b41467f775f324a92de97aba4ddf288d5e77\" id:\"de22818b186ec9950df251cc32bac93ecb22f51b81e9265a12a01b94d7cd542a\" pid:5371 exited_at:{seconds:1757718532 nanos:743305463}"
Sep 12 23:08:53.702907 systemd-networkd[1465]: lxc_health: Gained IPv6LL
Sep 12 23:08:53.826553 kubelet[2773]: E0912 23:08:53.826151 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:08:53.850827 kubelet[2773]: I0912 23:08:53.850751 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t7h8j" podStartSLOduration=10.850719967 podStartE2EDuration="10.850719967s" podCreationTimestamp="2025-09-12 23:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:08:49.066396078 +0000 UTC m=+100.618397871" watchObservedRunningTime="2025-09-12 23:08:53.850719967 +0000 UTC m=+105.402721730"
Sep 12 23:08:54.053390 kubelet[2773]: E0912 23:08:54.053244 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:08:54.971719 containerd[1553]: time="2025-09-12T23:08:54.971652282Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbb3bbf8c308c418e40803e87953b41467f775f324a92de97aba4ddf288d5e77\" id:\"36b9018534f9d462f6e716c4e1481130e5286d47b859e255c4d7921cd08943ec\" pid:5411 exited_at:{seconds:1757718534 nanos:970909152}"
Sep 12 23:08:55.056098 kubelet[2773]: E0912 23:08:55.056031 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:08:57.083368 containerd[1553]: time="2025-09-12T23:08:57.083309541Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbb3bbf8c308c418e40803e87953b41467f775f324a92de97aba4ddf288d5e77\" id:\"809ab6aee5a8c99d968fd2fc7c221a6c8f62fdc657373327cf8744f02677f0b8\" pid:5441 exited_at:{seconds:1757718537 nanos:82870783}"
Sep 12 23:08:59.186886 containerd[1553]: time="2025-09-12T23:08:59.186822934Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbb3bbf8c308c418e40803e87953b41467f775f324a92de97aba4ddf288d5e77\" id:\"33290c8c80a78f859852c9924cd12e44583d7b1931a2867e9aa44efe5e19a274\" pid:5465 exited_at:{seconds:1757718539 nanos:186309544}"
Sep 12 23:08:59.193651 sshd[4581]: Connection closed by 10.0.0.1 port 56216
Sep 12 23:08:59.194379 sshd-session[4574]: pam_unix(sshd:session): session closed for user core
Sep 12 23:08:59.199448 systemd[1]: sshd@29-10.0.0.139:22-10.0.0.1:56216.service: Deactivated successfully.
Sep 12 23:08:59.201548 systemd[1]: session-30.scope: Deactivated successfully.
Sep 12 23:08:59.202400 systemd-logind[1543]: Session 30 logged out. Waiting for processes to exit.
Sep 12 23:08:59.204063 systemd-logind[1543]: Removed session 30.
Sep 12 23:08:59.575369 kubelet[2773]: E0912 23:08:59.575232 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"