Sep 11 00:31:27.930924 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 10 22:25:29 -00 2025 Sep 11 00:31:27.930956 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a Sep 11 00:31:27.930968 kernel: BIOS-provided physical RAM map: Sep 11 00:31:27.930974 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable Sep 11 00:31:27.930981 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved Sep 11 00:31:27.930988 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable Sep 11 00:31:27.930996 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved Sep 11 00:31:27.931003 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable Sep 11 00:31:27.931012 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Sep 11 00:31:27.931019 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Sep 11 00:31:27.931026 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Sep 11 00:31:27.931035 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Sep 11 00:31:27.931042 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Sep 11 00:31:27.931048 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Sep 11 00:31:27.931057 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable Sep 11 00:31:27.931064 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved Sep 11 00:31:27.931076 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 11 00:31:27.931084 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 11 00:31:27.931096 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 11 00:31:27.931104 kernel: NX (Execute Disable) protection: active Sep 11 00:31:27.931111 kernel: APIC: Static calls initialized Sep 11 00:31:27.931118 kernel: e820: update [mem 0x9a13e018-0x9a147c57] usable ==> usable Sep 11 00:31:27.931137 kernel: e820: update [mem 0x9a101018-0x9a13de57] usable ==> usable Sep 11 00:31:27.931145 kernel: extended physical RAM map: Sep 11 00:31:27.931152 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable Sep 11 00:31:27.931159 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved Sep 11 00:31:27.931177 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable Sep 11 00:31:27.931198 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved Sep 11 00:31:27.931206 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a101017] usable Sep 11 00:31:27.931230 kernel: reserve setup_data: [mem 0x000000009a101018-0x000000009a13de57] usable Sep 11 00:31:27.931238 kernel: reserve setup_data: [mem 0x000000009a13de58-0x000000009a13e017] usable Sep 11 00:31:27.931245 kernel: reserve setup_data: [mem 0x000000009a13e018-0x000000009a147c57] usable Sep 11 00:31:27.931253 kernel: reserve setup_data: [mem 0x000000009a147c58-0x000000009b8ecfff] usable Sep 11 00:31:27.931260 kernel: reserve setup_data: [mem 
0x000000009b8ed000-0x000000009bb6cfff] reserved Sep 11 00:31:27.931267 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Sep 11 00:31:27.931274 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Sep 11 00:31:27.931281 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Sep 11 00:31:27.931297 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Sep 11 00:31:27.931308 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Sep 11 00:31:27.931333 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable Sep 11 00:31:27.931363 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved Sep 11 00:31:27.931371 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 11 00:31:27.931378 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 11 00:31:27.931386 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 11 00:31:27.931395 kernel: efi: EFI v2.7 by EDK II Sep 11 00:31:27.931403 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018 Sep 11 00:31:27.931410 kernel: random: crng init done Sep 11 00:31:27.931419 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Sep 11 00:31:27.931426 kernel: secureboot: Secure boot enabled Sep 11 00:31:27.931433 kernel: SMBIOS 2.8 present. Sep 11 00:31:27.931441 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Sep 11 00:31:27.931448 kernel: DMI: Memory slots populated: 1/1 Sep 11 00:31:27.931455 kernel: Hypervisor detected: KVM Sep 11 00:31:27.931463 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 11 00:31:27.931473 kernel: kvm-clock: using sched offset of 7456413561 cycles Sep 11 00:31:27.931481 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 11 00:31:27.931489 kernel: tsc: Detected 2794.750 MHz processor Sep 11 00:31:27.931496 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 11 00:31:27.931504 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 11 00:31:27.931512 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Sep 11 00:31:27.931519 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 11 00:31:27.931532 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 11 00:31:27.931539 kernel: Using GB pages for direct mapping Sep 11 00:31:27.931551 kernel: ACPI: Early table checksum verification disabled Sep 11 00:31:27.931559 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) Sep 11 00:31:27.931566 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 11 00:31:27.931574 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 11 00:31:27.931582 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 11 00:31:27.931589 kernel: ACPI: FACS 0x000000009BBDD000 000040 Sep 11 00:31:27.931597 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 11 00:31:27.931604 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 11 00:31:27.931612 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 11 00:31:27.931622 kernel: ACPI: WAET 0x000000009BB75000 
000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 11 00:31:27.931630 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 11 00:31:27.931637 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] Sep 11 00:31:27.931645 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236] Sep 11 00:31:27.931652 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] Sep 11 00:31:27.931664 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] Sep 11 00:31:27.931671 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] Sep 11 00:31:27.931679 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] Sep 11 00:31:27.931686 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] Sep 11 00:31:27.931696 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] Sep 11 00:31:27.931704 kernel: No NUMA configuration found Sep 11 00:31:27.931712 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] Sep 11 00:31:27.931719 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff] Sep 11 00:31:27.931727 kernel: Zone ranges: Sep 11 00:31:27.931734 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 11 00:31:27.931742 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] Sep 11 00:31:27.931749 kernel: Normal empty Sep 11 00:31:27.931757 kernel: Device empty Sep 11 00:31:27.931764 kernel: Movable zone start for each node Sep 11 00:31:27.931774 kernel: Early memory node ranges Sep 11 00:31:27.931782 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] Sep 11 00:31:27.931789 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] Sep 11 00:31:27.931797 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] Sep 11 00:31:27.931804 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] Sep 11 00:31:27.931811 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff] Sep 11 00:31:27.931819 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] Sep 11 00:31:27.931841 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 11 00:31:27.931848 kernel: On node 0, zone DMA: 32 pages in unavailable ranges Sep 11 00:31:27.931858 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 11 00:31:27.931866 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 11 00:31:27.931874 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Sep 11 00:31:27.931881 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges Sep 11 00:31:27.931889 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 11 00:31:27.931896 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 11 00:31:27.931904 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 11 00:31:27.931911 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 11 00:31:27.931919 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 11 00:31:27.931932 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 11 00:31:27.931940 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 11 00:31:27.931947 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 11 00:31:27.931955 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 11 00:31:27.931962 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 11 00:31:27.931970 kernel: TSC deadline timer available Sep 11 00:31:27.931977 kernel: CPU topo: Max. 
logical packages: 1 Sep 11 00:31:27.931985 kernel: CPU topo: Max. logical dies: 1 Sep 11 00:31:27.931995 kernel: CPU topo: Max. dies per package: 1 Sep 11 00:31:27.932009 kernel: CPU topo: Max. threads per core: 1 Sep 11 00:31:27.932017 kernel: CPU topo: Num. cores per package: 4 Sep 11 00:31:27.932025 kernel: CPU topo: Num. threads per package: 4 Sep 11 00:31:27.932035 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Sep 11 00:31:27.932045 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 11 00:31:27.932053 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 11 00:31:27.932061 kernel: kvm-guest: setup PV sched yield Sep 11 00:31:27.932069 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Sep 11 00:31:27.932079 kernel: Booting paravirtualized kernel on KVM Sep 11 00:31:27.932087 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 11 00:31:27.932095 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 11 00:31:27.932103 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Sep 11 00:31:27.932111 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Sep 11 00:31:27.932119 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 11 00:31:27.932127 kernel: kvm-guest: PV spinlocks enabled Sep 11 00:31:27.932135 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 11 00:31:27.932144 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a Sep 11 00:31:27.932155 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 11 00:31:27.932163 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 11 00:31:27.932171 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 11 00:31:27.932178 kernel: Fallback order for Node 0: 0 Sep 11 00:31:27.932186 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054 Sep 11 00:31:27.932194 kernel: Policy zone: DMA32 Sep 11 00:31:27.932202 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 11 00:31:27.932210 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 11 00:31:27.932220 kernel: ftrace: allocating 40103 entries in 157 pages Sep 11 00:31:27.932228 kernel: ftrace: allocated 157 pages with 5 groups Sep 11 00:31:27.932235 kernel: Dynamic Preempt: voluntary Sep 11 00:31:27.932243 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 11 00:31:27.932252 kernel: rcu: RCU event tracing is enabled. Sep 11 00:31:27.932260 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 11 00:31:27.932268 kernel: Trampoline variant of Tasks RCU enabled. Sep 11 00:31:27.932276 kernel: Rude variant of Tasks RCU enabled. Sep 11 00:31:27.932283 kernel: Tracing variant of Tasks RCU enabled. Sep 11 00:31:27.932291 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 11 00:31:27.932301 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 11 00:31:27.932309 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Sep 11 00:31:27.932317 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 11 00:31:27.932329 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 11 00:31:27.932337 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 11 00:31:27.932345 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 11 00:31:27.932352 kernel: Console: colour dummy device 80x25 Sep 11 00:31:27.932367 kernel: printk: legacy console [ttyS0] enabled Sep 11 00:31:27.932378 kernel: ACPI: Core revision 20240827 Sep 11 00:31:27.932386 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 11 00:31:27.932393 kernel: APIC: Switch to symmetric I/O mode setup Sep 11 00:31:27.932401 kernel: x2apic enabled Sep 11 00:31:27.932410 kernel: APIC: Switched APIC routing to: physical x2apic Sep 11 00:31:27.932418 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 11 00:31:27.932426 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 11 00:31:27.932433 kernel: kvm-guest: setup PV IPIs Sep 11 00:31:27.932441 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 11 00:31:27.932451 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Sep 11 00:31:27.932459 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750) Sep 11 00:31:27.932467 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 11 00:31:27.932475 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 11 00:31:27.932482 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 11 00:31:27.932493 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 11 00:31:27.932501 kernel: Spectre V2 : Mitigation: Retpolines Sep 11 00:31:27.932508 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 11 00:31:27.932516 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 11 00:31:27.932527 kernel: active return thunk: retbleed_return_thunk Sep 11 00:31:27.932534 kernel: RETBleed: Mitigation: untrained return thunk Sep 11 00:31:27.932542 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 11 00:31:27.932552 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 11 00:31:27.932564 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 11 00:31:27.932580 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 11 00:31:27.932590 kernel: active return thunk: srso_return_thunk Sep 11 00:31:27.932599 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 11 00:31:27.932614 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 11 00:31:27.932624 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 11 00:31:27.932634 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 11 00:31:27.932644 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 11 00:31:27.932654 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Sep 11 00:31:27.932665 kernel: Freeing SMP alternatives memory: 32K Sep 11 00:31:27.932675 kernel: pid_max: default: 32768 minimum: 301 Sep 11 00:31:27.932683 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 11 00:31:27.932730 kernel: landlock: Up and running. Sep 11 00:31:27.932755 kernel: SELinux: Initializing. Sep 11 00:31:27.932770 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 11 00:31:27.932786 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 11 00:31:27.932802 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 11 00:31:27.932810 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 11 00:31:27.932817 kernel: ... version: 0 Sep 11 00:31:27.932843 kernel: ... bit width: 48 Sep 11 00:31:27.932851 kernel: ... generic registers: 6 Sep 11 00:31:27.932870 kernel: ... value mask: 0000ffffffffffff Sep 11 00:31:27.932881 kernel: ... max period: 00007fffffffffff Sep 11 00:31:27.932889 kernel: ... fixed-purpose events: 0 Sep 11 00:31:27.932897 kernel: ... event mask: 000000000000003f Sep 11 00:31:27.932904 kernel: signal: max sigframe size: 1776 Sep 11 00:31:27.932912 kernel: rcu: Hierarchical SRCU implementation. Sep 11 00:31:27.932920 kernel: rcu: Max phase no-delay instances is 400. Sep 11 00:31:27.932929 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 11 00:31:27.932936 kernel: smp: Bringing up secondary CPUs ... Sep 11 00:31:27.932944 kernel: smpboot: x86: Booting SMP configuration: Sep 11 00:31:27.932952 kernel: .... node #0, CPUs: #1 #2 #3 Sep 11 00:31:27.932962 kernel: smp: Brought up 1 node, 4 CPUs Sep 11 00:31:27.932970 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Sep 11 00:31:27.932978 kernel: Memory: 2411268K/2552216K available (14336K kernel code, 2429K rwdata, 9960K rodata, 53832K init, 1088K bss, 135016K reserved, 0K cma-reserved) Sep 11 00:31:27.932986 kernel: devtmpfs: initialized Sep 11 00:31:27.932994 kernel: x86/mm: Memory block size: 128MB Sep 11 00:31:27.933002 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) Sep 11 00:31:27.933010 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) Sep 11 00:31:27.933018 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 11 00:31:27.933028 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 11 00:31:27.933037 kernel: pinctrl core: initialized pinctrl subsystem Sep 11 00:31:27.933047 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 11 00:31:27.933058 kernel: audit: initializing netlink subsys (disabled) Sep 11 00:31:27.933068 kernel: audit: type=2000 audit(1757550684.604:1): state=initialized audit_enabled=0 res=1 Sep 11 00:31:27.933078 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 11 00:31:27.933088 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 11 00:31:27.933098 kernel: cpuidle: using governor menu Sep 11 00:31:27.933106 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 11 00:31:27.933129 kernel: dca service started, version 1.12.1 Sep 11 00:31:27.933149 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Sep 11 00:31:27.933157 kernel: PCI: Using configuration type 1 for base access Sep 11 00:31:27.933165 kernel: kprobes: kprobe jump-optimization 
is enabled. All kprobes are optimized if possible. Sep 11 00:31:27.933173 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 11 00:31:27.933181 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 11 00:31:27.933189 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 11 00:31:27.933196 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 11 00:31:27.933204 kernel: ACPI: Added _OSI(Module Device) Sep 11 00:31:27.933215 kernel: ACPI: Added _OSI(Processor Device) Sep 11 00:31:27.933223 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 11 00:31:27.933231 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 11 00:31:27.933239 kernel: ACPI: Interpreter enabled Sep 11 00:31:27.933247 kernel: ACPI: PM: (supports S0 S5) Sep 11 00:31:27.933254 kernel: ACPI: Using IOAPIC for interrupt routing Sep 11 00:31:27.933262 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 11 00:31:27.933270 kernel: PCI: Using E820 reservations for host bridge windows Sep 11 00:31:27.933278 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 11 00:31:27.933291 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 11 00:31:27.933595 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 11 00:31:27.933726 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 11 00:31:27.933869 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 11 00:31:27.933881 kernel: PCI host bridge to bus 0000:00 Sep 11 00:31:27.934017 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 11 00:31:27.934130 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 11 00:31:27.934270 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 11 00:31:27.934417 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Sep 11 00:31:27.934544 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Sep 11 00:31:27.934702 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Sep 11 00:31:27.934843 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 11 00:31:27.935009 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Sep 11 00:31:27.935174 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Sep 11 00:31:27.935324 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Sep 11 00:31:27.935477 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Sep 11 00:31:27.935635 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Sep 11 00:31:27.935806 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 11 00:31:27.935987 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 11 00:31:27.936120 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Sep 11 00:31:27.936249 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Sep 11 00:31:27.936381 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Sep 11 00:31:27.936524 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Sep 11 00:31:27.936648 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Sep 11 00:31:27.936781 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Sep 11 00:31:27.936930 
kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Sep 11 00:31:27.937069 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Sep 11 00:31:27.937198 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Sep 11 00:31:27.937318 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Sep 11 00:31:27.937462 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Sep 11 00:31:27.937618 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Sep 11 00:31:27.937803 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Sep 11 00:31:27.937973 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 11 00:31:27.938122 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Sep 11 00:31:27.938263 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Sep 11 00:31:27.938404 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Sep 11 00:31:27.938579 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Sep 11 00:31:27.938735 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Sep 11 00:31:27.938754 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 11 00:31:27.938765 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 11 00:31:27.938774 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 11 00:31:27.938792 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 11 00:31:27.938806 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 11 00:31:27.938817 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 11 00:31:27.938859 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 11 00:31:27.938882 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 11 00:31:27.938893 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 11 00:31:27.938904 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 11 00:31:27.938916 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 11 00:31:27.938927 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 11 00:31:27.938944 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 11 00:31:27.938954 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 11 00:31:27.938964 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 11 00:31:27.938974 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 11 00:31:27.938985 kernel: iommu: Default domain type: Translated Sep 11 00:31:27.938995 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 11 00:31:27.939005 kernel: efivars: Registered efivars operations Sep 11 00:31:27.939016 kernel: PCI: Using ACPI for IRQ routing Sep 11 00:31:27.939026 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 11 00:31:27.939040 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] Sep 11 00:31:27.939051 kernel: e820: reserve RAM buffer [mem 0x9a101018-0x9bffffff] Sep 11 00:31:27.939058 kernel: e820: reserve RAM buffer [mem 0x9a13e018-0x9bffffff] Sep 11 00:31:27.939066 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] Sep 11 00:31:27.939074 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] Sep 11 00:31:27.939245 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 11 00:31:27.939411 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 11 00:31:27.939537 kernel: pci 0000:00:01.0: vgaarb: VGA 
device added: decodes=io+mem,owns=io+mem,locks=none Sep 11 00:31:27.939553 kernel: vgaarb: loaded Sep 11 00:31:27.939561 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 11 00:31:27.939570 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 11 00:31:27.939578 kernel: clocksource: Switched to clocksource kvm-clock Sep 11 00:31:27.939586 kernel: VFS: Disk quotas dquot_6.6.0 Sep 11 00:31:27.939594 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 11 00:31:27.939602 kernel: pnp: PnP ACPI init Sep 11 00:31:27.939753 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Sep 11 00:31:27.939770 kernel: pnp: PnP ACPI: found 6 devices Sep 11 00:31:27.939778 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 11 00:31:27.939786 kernel: NET: Registered PF_INET protocol family Sep 11 00:31:27.939794 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 11 00:31:27.939803 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 11 00:31:27.939811 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 11 00:31:27.939818 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 11 00:31:27.939841 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 11 00:31:27.939849 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 11 00:31:27.939860 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 11 00:31:27.939868 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 11 00:31:27.939876 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 11 00:31:27.939884 kernel: NET: Registered PF_XDP protocol family Sep 11 00:31:27.940025 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Sep 11 00:31:27.940184 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Sep 11 00:31:27.940310 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 11 00:31:27.940436 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 11 00:31:27.940554 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 11 00:31:27.940664 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Sep 11 00:31:27.940775 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Sep 11 00:31:27.940923 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Sep 11 00:31:27.940936 kernel: PCI: CLS 0 bytes, default 64 Sep 11 00:31:27.940944 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Sep 11 00:31:27.940952 kernel: Initialise system trusted keyrings Sep 11 00:31:27.940960 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 11 00:31:27.940968 kernel: Key type asymmetric registered Sep 11 00:31:27.940980 kernel: Asymmetric key parser 'x509' registered Sep 11 00:31:27.941004 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 11 00:31:27.941015 kernel: io scheduler mq-deadline registered Sep 11 00:31:27.941024 kernel: io scheduler kyber registered Sep 11 00:31:27.941034 kernel: io scheduler bfq registered Sep 11 00:31:27.941045 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 11 00:31:27.941057 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 11 00:31:27.941069 
kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 11 00:31:27.941080 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 11 00:31:27.941094 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 11 00:31:27.941105 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 11 00:31:27.941116 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 11 00:31:27.941127 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 11 00:31:27.941138 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 11 00:31:27.941305 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 11 00:31:27.941319 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 11 00:31:27.941453 kernel: rtc_cmos 00:04: registered as rtc0 Sep 11 00:31:27.941574 kernel: rtc_cmos 00:04: setting system clock to 2025-09-11T00:31:27 UTC (1757550687) Sep 11 00:31:27.941716 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 11 00:31:27.941728 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 11 00:31:27.941736 kernel: efifb: probing for efifb Sep 11 00:31:27.941745 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 11 00:31:27.941753 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 11 00:31:27.941762 kernel: efifb: scrolling: redraw Sep 11 00:31:27.941770 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 11 00:31:27.941778 kernel: Console: switching to colour frame buffer device 160x50 Sep 11 00:31:27.941793 kernel: fb0: EFI VGA frame buffer device Sep 11 00:31:27.941807 kernel: pstore: Using crash dump compression: deflate Sep 11 00:31:27.941822 kernel: pstore: Registered efi_pstore as persistent store backend Sep 11 00:31:27.941854 kernel: NET: Registered PF_INET6 protocol family Sep 11 00:31:27.941865 kernel: Segment Routing with IPv6 Sep 11 00:31:27.941879 kernel: In-situ OAM (IOAM) with IPv6 Sep 11 00:31:27.941888 kernel: NET: Registered PF_PACKET protocol family Sep 11 00:31:27.941899 kernel: Key type dns_resolver registered Sep 11 00:31:27.941910 kernel: IPI shorthand broadcast: enabled Sep 11 00:31:27.941922 kernel: sched_clock: Marking stable (4055002248, 140207028)->(4272890339, -77681063) Sep 11 00:31:27.941933 kernel: registered taskstats version 1 Sep 11 00:31:27.941944 kernel: Loading compiled-in X.509 certificates Sep 11 00:31:27.941955 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 8138ce5002a1b572fd22b23ac238f29bab3f249f' Sep 11 00:31:27.941966 kernel: Demotion targets for Node 0: null Sep 11 00:31:27.941980 kernel: Key type .fscrypt registered Sep 11 00:31:27.941992 kernel: Key type fscrypt-provisioning registered Sep 11 00:31:27.942003 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 11 00:31:27.942014 kernel: ima: Allocated hash algorithm: sha1 Sep 11 00:31:27.942025 kernel: ima: No architecture policies found Sep 11 00:31:27.942035 kernel: clk: Disabling unused clocks Sep 11 00:31:27.942047 kernel: Warning: unable to open an initial console. 
Sep 11 00:31:27.942058 kernel: Freeing unused kernel image (initmem) memory: 53832K Sep 11 00:31:27.942069 kernel: Write protecting the kernel read-only data: 24576k Sep 11 00:31:27.942084 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Sep 11 00:31:27.942095 kernel: Run /init as init process Sep 11 00:31:27.942106 kernel: with arguments: Sep 11 00:31:27.942117 kernel: /init Sep 11 00:31:27.942128 kernel: with environment: Sep 11 00:31:27.942139 kernel: HOME=/ Sep 11 00:31:27.942149 kernel: TERM=linux Sep 11 00:31:27.942160 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 11 00:31:27.942177 systemd[1]: Successfully made /usr/ read-only. Sep 11 00:31:27.942195 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 11 00:31:27.942208 systemd[1]: Detected virtualization kvm. Sep 11 00:31:27.942219 systemd[1]: Detected architecture x86-64. Sep 11 00:31:27.942230 systemd[1]: Running in initrd. Sep 11 00:31:27.942241 systemd[1]: No hostname configured, using default hostname. Sep 11 00:31:27.942254 systemd[1]: Hostname set to . Sep 11 00:31:27.942265 systemd[1]: Initializing machine ID from VM UUID. Sep 11 00:31:27.942280 systemd[1]: Queued start job for default target initrd.target. Sep 11 00:31:27.942292 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 11 00:31:27.942303 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 11 00:31:27.942316 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 11 00:31:27.942327 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 11 00:31:27.942339 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 11 00:31:27.942350 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 11 00:31:27.942372 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 11 00:31:27.942381 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 11 00:31:27.942390 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 11 00:31:27.942399 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 11 00:31:27.942408 systemd[1]: Reached target paths.target - Path Units. Sep 11 00:31:27.942416 systemd[1]: Reached target slices.target - Slice Units. Sep 11 00:31:27.942425 systemd[1]: Reached target swap.target - Swaps. Sep 11 00:31:27.942434 systemd[1]: Reached target timers.target - Timer Units. Sep 11 00:31:27.942446 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 11 00:31:27.942455 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 11 00:31:27.942464 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 11 00:31:27.942472 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 11 00:31:27.942481 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Sep 11 00:31:27.942490 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 11 00:31:27.942499 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 11 00:31:27.942510 systemd[1]: Reached target sockets.target - Socket Units. Sep 11 00:31:27.942519 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 11 00:31:27.942530 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 11 00:31:27.942539 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 11 00:31:27.942548 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 11 00:31:27.942557 systemd[1]: Starting systemd-fsck-usr.service... Sep 11 00:31:27.942566 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 11 00:31:27.942574 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 11 00:31:27.942583 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:31:27.942592 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 11 00:31:27.942603 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 11 00:31:27.942612 systemd[1]: Finished systemd-fsck-usr.service. Sep 11 00:31:27.942621 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 11 00:31:27.942664 systemd-journald[218]: Collecting audit messages is disabled. Sep 11 00:31:27.942688 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 11 00:31:27.942698 systemd-journald[218]: Journal started Sep 11 00:31:27.942722 systemd-journald[218]: Runtime Journal (/run/log/journal/f5b5317cc92d4de7912343713952b373) is 6M, max 48.2M, 42.2M free. Sep 11 00:31:27.933177 systemd-modules-load[221]: Inserted module 'overlay' Sep 11 00:31:27.944904 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 11 00:31:27.947864 systemd[1]: Started systemd-journald.service - Journal Service. Sep 11 00:31:27.966924 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 11 00:31:27.964152 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:31:27.969919 kernel: Bridge firewalling registered Sep 11 00:31:27.969459 systemd-modules-load[221]: Inserted module 'br_netfilter' Sep 11 00:31:27.970839 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 11 00:31:27.972821 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 11 00:31:27.975004 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 11 00:31:27.977669 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 11 00:31:27.988964 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 11 00:31:27.997511 systemd-tmpfiles[244]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 11 00:31:27.999083 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:31:28.003540 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Sep 11 00:31:28.005810 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 11 00:31:28.028969 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 11 00:31:28.031428 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 11 00:31:28.059923 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a Sep 11 00:31:28.070444 systemd-resolved[258]: Positive Trust Anchors: Sep 11 00:31:28.070467 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 11 00:31:28.070516 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 11 00:31:28.073781 systemd-resolved[258]: Defaulting to hostname 'linux'. Sep 11 00:31:28.075358 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 11 00:31:28.081021 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 11 00:31:28.172865 kernel: SCSI subsystem initialized Sep 11 00:31:28.182871 kernel: Loading iSCSI transport class v2.0-870. Sep 11 00:31:28.194861 kernel: iscsi: registered transport (tcp) Sep 11 00:31:28.229189 kernel: iscsi: registered transport (qla4xxx) Sep 11 00:31:28.229230 kernel: QLogic iSCSI HBA Driver Sep 11 00:31:28.251411 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 11 00:31:28.275228 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 11 00:31:28.279404 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 11 00:31:28.348542 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 11 00:31:28.351802 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 11 00:31:28.423881 kernel: raid6: avx2x4 gen() 28488 MB/s Sep 11 00:31:28.440879 kernel: raid6: avx2x2 gen() 21547 MB/s Sep 11 00:31:28.458092 kernel: raid6: avx2x1 gen() 21038 MB/s Sep 11 00:31:28.458184 kernel: raid6: using algorithm avx2x4 gen() 28488 MB/s Sep 11 00:31:28.476147 kernel: raid6: .... xor() 7512 MB/s, rmw enabled Sep 11 00:31:28.476242 kernel: raid6: using avx2x2 recovery algorithm Sep 11 00:31:28.502875 kernel: xor: automatically using best checksumming function avx Sep 11 00:31:28.683872 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 11 00:31:28.695928 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 11 00:31:28.700233 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 11 00:31:28.741228 systemd-udevd[472]: Using default interface naming scheme 'v255'. 
Sep 11 00:31:28.748519 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 11 00:31:28.750103 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 11 00:31:28.776243 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation Sep 11 00:31:28.809493 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 11 00:31:28.814025 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 11 00:31:28.941799 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 11 00:31:28.946185 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 11 00:31:29.010852 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 11 00:31:29.013847 kernel: cryptd: max_cpu_qlen set to 1000 Sep 11 00:31:29.021852 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 11 00:31:29.024938 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 11 00:31:29.026844 kernel: libata version 3.00 loaded. Sep 11 00:31:29.028295 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 11 00:31:29.030079 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:31:29.033389 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:31:29.037635 kernel: ahci 0000:00:1f.2: version 3.0 Sep 11 00:31:29.037965 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 11 00:31:29.041334 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 11 00:31:29.041556 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 11 00:31:29.041744 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 11 00:31:29.042030 kernel: AES CTR mode by8 optimization enabled Sep 11 00:31:29.038000 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:31:29.057425 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 11 00:31:29.057475 kernel: GPT:9289727 != 19775487 Sep 11 00:31:29.057490 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 11 00:31:29.057503 kernel: GPT:9289727 != 19775487 Sep 11 00:31:29.058343 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 11 00:31:29.058368 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 11 00:31:29.058940 kernel: scsi host0: ahci Sep 11 00:31:29.063861 kernel: scsi host1: ahci Sep 11 00:31:29.067845 kernel: scsi host2: ahci Sep 11 00:31:29.068042 kernel: scsi host3: ahci Sep 11 00:31:29.068851 kernel: scsi host4: ahci Sep 11 00:31:29.069903 kernel: scsi host5: ahci Sep 11 00:31:29.073492 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 11 00:31:29.073522 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 11 00:31:29.073537 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 11 00:31:29.073551 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 11 00:31:29.074866 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 11 00:31:29.074887 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 11 00:31:29.079925 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 11 00:31:29.101443 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 11 00:31:29.124418 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 11 00:31:29.135144 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 11 00:31:29.144147 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 11 00:31:29.147288 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 11 00:31:29.150687 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 11 00:31:29.185497 disk-uuid[634]: Primary Header is updated. Sep 11 00:31:29.185497 disk-uuid[634]: Secondary Entries is updated. Sep 11 00:31:29.185497 disk-uuid[634]: Secondary Header is updated. Sep 11 00:31:29.189032 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 11 00:31:29.193860 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 11 00:31:29.388899 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 11 00:31:29.389004 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 11 00:31:29.389020 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 11 00:31:29.389858 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 11 00:31:29.390861 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 11 00:31:29.391882 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 11 00:31:29.392998 kernel: ata3.00: LPM support broken, forcing max_power Sep 11 00:31:29.393018 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 11 00:31:29.394302 kernel: ata3.00: applying bridge limits Sep 11 00:31:29.394405 kernel: ata3.00: LPM support broken, forcing max_power Sep 11 00:31:29.395098 kernel: ata3.00: configured for UDMA/100 Sep 11 00:31:29.397864 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 11 00:31:29.462945 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 11 00:31:29.463373 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 11 00:31:29.484867 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 11 00:31:29.907822 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 11 00:31:29.910545 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 11 00:31:29.910653 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 11 00:31:29.915449 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 11 00:31:29.917660 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 11 00:31:29.950942 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 11 00:31:30.208871 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 11 00:31:30.209129 disk-uuid[635]: The operation has completed successfully. Sep 11 00:31:30.239761 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 11 00:31:30.239923 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 11 00:31:30.276986 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 11 00:31:30.302556 sh[663]: Success Sep 11 00:31:30.321707 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 11 00:31:30.321750 kernel: device-mapper: uevent: version 1.0.3 Sep 11 00:31:30.321772 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 11 00:31:30.330898 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 11 00:31:30.367920 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 11 00:31:30.372178 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 11 00:31:30.385901 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 11 00:31:30.393878 kernel: BTRFS: device fsid f1eb5eb7-34cc-49c0-9f2b-e603bd772d66 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (675) Sep 11 00:31:30.395852 kernel: BTRFS info (device dm-0): first mount of filesystem f1eb5eb7-34cc-49c0-9f2b-e603bd772d66 Sep 11 00:31:30.395876 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:31:30.400849 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 11 00:31:30.400877 kernel: BTRFS info (device dm-0): enabling free space tree Sep 11 00:31:30.401998 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 11 00:31:30.402520 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 11 00:31:30.403736 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 11 00:31:30.407616 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 11 00:31:30.408407 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 11 00:31:30.460860 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (708) Sep 11 00:31:30.463356 kernel: BTRFS info (device vda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:31:30.463388 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:31:30.466386 kernel: BTRFS info (device vda6): turning on async discard Sep 11 00:31:30.466411 kernel: BTRFS info (device vda6): enabling free space tree Sep 11 00:31:30.472566 kernel: BTRFS info (device vda6): last unmount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:31:30.472473 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 11 00:31:30.476783 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 11 00:31:30.547048 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 11 00:31:30.581746 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 11 00:31:30.624668 ignition[783]: Ignition 2.21.0 Sep 11 00:31:30.624681 ignition[783]: Stage: fetch-offline Sep 11 00:31:30.624711 ignition[783]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:31:30.624720 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:31:30.624841 ignition[783]: parsed url from cmdline: "" Sep 11 00:31:30.624846 ignition[783]: no config URL provided Sep 11 00:31:30.624851 ignition[783]: reading system config file "/usr/lib/ignition/user.ign" Sep 11 00:31:30.624863 ignition[783]: no config at "/usr/lib/ignition/user.ign" Sep 11 00:31:30.624886 ignition[783]: op(1): [started] loading QEMU firmware config module Sep 11 00:31:30.639751 systemd-networkd[846]: lo: Link UP Sep 11 00:31:30.624891 ignition[783]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 11 00:31:30.639755 systemd-networkd[846]: lo: Gained carrier Sep 11 00:31:30.642279 systemd-networkd[846]: Enumeration completed Sep 11 00:31:30.642390 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 11 00:31:30.642673 systemd-networkd[846]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:31:30.642677 systemd-networkd[846]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 11 00:31:30.649198 ignition[783]: op(1): [finished] loading QEMU firmware config module Sep 11 00:31:30.643770 systemd-networkd[846]: eth0: Link UP Sep 11 00:31:30.644004 systemd-networkd[846]: eth0: Gained carrier Sep 11 00:31:30.644012 systemd-networkd[846]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:31:30.648014 systemd[1]: Reached target network.target - Network. Sep 11 00:31:30.654887 systemd-networkd[846]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 11 00:31:30.697324 ignition[783]: parsing config with SHA512: 53ca0b99f967bd047441c45e2ab42a9b4de80924a95ef76c27aa8adea15543e7707b9add01df71648d8cbc2ac790bb99035ac87d3cc507a807f79153de3c43f1 Sep 11 00:31:30.703482 unknown[783]: fetched base config from "system" Sep 11 00:31:30.703651 unknown[783]: fetched user config from "qemu" Sep 11 00:31:30.704055 ignition[783]: fetch-offline: fetch-offline passed Sep 11 00:31:30.704111 ignition[783]: Ignition finished successfully Sep 11 00:31:30.707147 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 11 00:31:30.708541 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 11 00:31:30.709429 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 11 00:31:30.754045 ignition[858]: Ignition 2.21.0 Sep 11 00:31:30.754058 ignition[858]: Stage: kargs Sep 11 00:31:30.754198 ignition[858]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:31:30.754208 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:31:30.754980 ignition[858]: kargs: kargs passed Sep 11 00:31:30.755028 ignition[858]: Ignition finished successfully Sep 11 00:31:30.764547 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 11 00:31:30.766719 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 11 00:31:30.803936 ignition[866]: Ignition 2.21.0 Sep 11 00:31:30.803949 ignition[866]: Stage: disks Sep 11 00:31:30.809418 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Sep 11 00:31:30.804104 ignition[866]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:31:30.817367 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 11 00:31:30.804115 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:31:30.818796 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 11 00:31:30.806178 ignition[866]: disks: disks passed Sep 11 00:31:30.820897 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 11 00:31:30.806260 ignition[866]: Ignition finished successfully Sep 11 00:31:30.821908 systemd[1]: Reached target sysinit.target - System Initialization. Sep 11 00:31:30.823736 systemd[1]: Reached target basic.target - Basic System. Sep 11 00:31:30.824806 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 11 00:31:30.863386 systemd-fsck[876]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 11 00:31:31.348765 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 11 00:31:31.353589 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 11 00:31:31.482855 kernel: EXT4-fs (vda9): mounted filesystem 6a9ce0af-81d0-4628-9791-e47488ed2744 r/w with ordered data mode. Quota mode: none. Sep 11 00:31:31.483458 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 11 00:31:31.484226 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 11 00:31:31.487758 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 11 00:31:31.489533 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 11 00:31:31.490778 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 11 00:31:31.490855 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 11 00:31:31.490883 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 11 00:31:31.506500 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 11 00:31:31.509566 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 11 00:31:31.513848 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (884) Sep 11 00:31:31.513876 kernel: BTRFS info (device vda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:31:31.513891 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:31:31.517176 kernel: BTRFS info (device vda6): turning on async discard Sep 11 00:31:31.517204 kernel: BTRFS info (device vda6): enabling free space tree Sep 11 00:31:31.518634 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 11 00:31:31.552607 initrd-setup-root[909]: cut: /sysroot/etc/passwd: No such file or directory Sep 11 00:31:31.557987 initrd-setup-root[916]: cut: /sysroot/etc/group: No such file or directory Sep 11 00:31:31.562356 initrd-setup-root[923]: cut: /sysroot/etc/shadow: No such file or directory Sep 11 00:31:31.567230 initrd-setup-root[930]: cut: /sysroot/etc/gshadow: No such file or directory Sep 11 00:31:31.665009 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 11 00:31:31.667794 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 11 00:31:31.669656 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Sep 11 00:31:31.690562 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 11 00:31:31.691926 kernel: BTRFS info (device vda6): last unmount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:31:31.693181 systemd-networkd[846]: eth0: Gained IPv6LL Sep 11 00:31:31.707077 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 11 00:31:31.822523 ignition[999]: INFO : Ignition 2.21.0 Sep 11 00:31:31.822523 ignition[999]: INFO : Stage: mount Sep 11 00:31:31.825727 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 00:31:31.825727 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:31:31.828305 ignition[999]: INFO : mount: mount passed Sep 11 00:31:31.828305 ignition[999]: INFO : Ignition finished successfully Sep 11 00:31:31.833337 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 11 00:31:31.837323 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 11 00:31:32.485945 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 11 00:31:32.525871 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1011) Sep 11 00:31:32.528148 kernel: BTRFS info (device vda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:31:32.528213 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:31:32.531862 kernel: BTRFS info (device vda6): turning on async discard Sep 11 00:31:32.531949 kernel: BTRFS info (device vda6): enabling free space tree Sep 11 00:31:32.534389 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 11 00:31:32.588366 ignition[1028]: INFO : Ignition 2.21.0 Sep 11 00:31:32.588366 ignition[1028]: INFO : Stage: files Sep 11 00:31:32.590415 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 00:31:32.590415 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:31:32.590415 ignition[1028]: DEBUG : files: compiled without relabeling support, skipping Sep 11 00:31:32.595525 ignition[1028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 11 00:31:32.595525 ignition[1028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 11 00:31:32.601759 ignition[1028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 11 00:31:32.603303 ignition[1028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 11 00:31:32.603303 ignition[1028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 11 00:31:32.602974 unknown[1028]: wrote ssh authorized keys file for user: core Sep 11 00:31:32.607264 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 11 00:31:32.607264 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 11 00:31:32.968849 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 11 00:31:34.048606 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 11 00:31:34.048606 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 11 00:31:34.053032 ignition[1028]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 11 00:31:34.137070 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 11 00:31:34.369518 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 11 00:31:34.369518 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 11 00:31:34.373305 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 11 00:31:34.373305 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 11 00:31:34.373305 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 11 00:31:34.373305 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 11 00:31:34.373305 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 11 00:31:34.373305 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 11 00:31:34.373305 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 11 00:31:34.916642 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 11 00:31:34.918695 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 11 00:31:34.918695 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 11 00:31:35.404408 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 11 00:31:35.404408 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 11 00:31:35.410449 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 11 00:31:35.806312 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 11 00:31:36.278038 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 11 00:31:36.278038 ignition[1028]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 11 00:31:36.282430 ignition[1028]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 11 00:31:36.284702 ignition[1028]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 11 00:31:36.284702 
ignition[1028]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 11 00:31:36.288108 ignition[1028]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 11 00:31:36.288108 ignition[1028]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 11 00:31:36.288108 ignition[1028]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 11 00:31:36.288108 ignition[1028]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 11 00:31:36.288108 ignition[1028]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 11 00:31:36.304094 ignition[1028]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 11 00:31:36.307861 ignition[1028]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 11 00:31:36.309424 ignition[1028]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 11 00:31:36.309424 ignition[1028]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 11 00:31:36.309424 ignition[1028]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 11 00:31:36.309424 ignition[1028]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 11 00:31:36.309424 ignition[1028]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 11 00:31:36.309424 ignition[1028]: INFO : files: files passed Sep 11 00:31:36.309424 ignition[1028]: INFO : Ignition finished successfully Sep 11 00:31:36.311230 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 11 00:31:36.318365 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 11 00:31:36.332551 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 11 00:31:36.336019 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 11 00:31:36.336153 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 11 00:31:36.342574 initrd-setup-root-after-ignition[1058]: grep: /sysroot/oem/oem-release: No such file or directory Sep 11 00:31:36.346901 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 11 00:31:36.346901 initrd-setup-root-after-ignition[1060]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 11 00:31:36.350016 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 11 00:31:36.352947 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 11 00:31:36.353227 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 11 00:31:36.357589 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 11 00:31:36.427602 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 11 00:31:36.427736 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 11 00:31:36.428920 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
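The files stage above ends by writing /sysroot/etc/.ignition-result.json alongside the unit presets it set. A minimal sketch, assuming it is run on the booted system where that file appears as /etc/.ignition-result.json, loads and re-prints the JSON without assuming any particular schema:

```python
#!/usr/bin/env python3
"""Sketch: load and re-print the Ignition result file written by the files stage."""
import json

def load_result(path: str = "/etc/.ignition-result.json") -> dict:
    with open(path, encoding="utf-8") as fh:
        return json.load(fh)

if __name__ == "__main__":
    print(json.dumps(load_result(), indent=2, sort_keys=True))
```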
Sep 11 00:31:36.429203 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 11 00:31:36.429561 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 11 00:31:36.434673 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 11 00:31:36.456363 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 11 00:31:36.460053 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 11 00:31:36.488345 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 11 00:31:36.490591 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 11 00:31:36.490752 systemd[1]: Stopped target timers.target - Timer Units. Sep 11 00:31:36.493151 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 11 00:31:36.493300 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 11 00:31:36.497585 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 11 00:31:36.497724 systemd[1]: Stopped target basic.target - Basic System. Sep 11 00:31:36.499554 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 11 00:31:36.499883 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 11 00:31:36.500202 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 11 00:31:36.500525 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 11 00:31:36.506918 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 11 00:31:36.507226 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 11 00:31:36.507551 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 11 00:31:36.507883 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 11 00:31:36.514519 systemd[1]: Stopped target swap.target - Swaps. Sep 11 00:31:36.514797 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 11 00:31:36.514935 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 11 00:31:36.518129 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 11 00:31:36.518496 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 11 00:31:36.518779 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 11 00:31:36.518897 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 11 00:31:36.524232 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 11 00:31:36.524343 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 11 00:31:36.526547 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 11 00:31:36.526658 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 11 00:31:36.530498 systemd[1]: Stopped target paths.target - Path Units. Sep 11 00:31:36.531617 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 11 00:31:36.532889 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 11 00:31:36.533615 systemd[1]: Stopped target slices.target - Slice Units. Sep 11 00:31:36.535750 systemd[1]: Stopped target sockets.target - Socket Units. Sep 11 00:31:36.537362 systemd[1]: iscsid.socket: Deactivated successfully. 
Sep 11 00:31:36.537449 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 11 00:31:36.539174 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 11 00:31:36.539269 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 11 00:31:36.542369 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 11 00:31:36.542485 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 11 00:31:36.543222 systemd[1]: ignition-files.service: Deactivated successfully. Sep 11 00:31:36.543328 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 11 00:31:36.547734 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 11 00:31:36.551326 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 11 00:31:36.552486 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 11 00:31:36.552657 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 11 00:31:36.554181 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 11 00:31:36.554292 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 11 00:31:36.563195 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 11 00:31:36.563307 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 11 00:31:36.577500 ignition[1084]: INFO : Ignition 2.21.0 Sep 11 00:31:36.577500 ignition[1084]: INFO : Stage: umount Sep 11 00:31:36.579317 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 00:31:36.579317 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:31:36.579317 ignition[1084]: INFO : umount: umount passed Sep 11 00:31:36.579317 ignition[1084]: INFO : Ignition finished successfully Sep 11 00:31:36.581048 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 11 00:31:36.581192 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 11 00:31:36.582682 systemd[1]: Stopped target network.target - Network. Sep 11 00:31:36.584054 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 11 00:31:36.584110 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 11 00:31:36.586255 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 11 00:31:36.586322 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 11 00:31:36.588202 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 11 00:31:36.588271 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 11 00:31:36.590077 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 11 00:31:36.590126 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 11 00:31:36.592031 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 11 00:31:36.593945 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 11 00:31:36.596148 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 11 00:31:36.596784 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 11 00:31:36.596941 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 11 00:31:36.598372 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 11 00:31:36.598458 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 11 00:31:36.602171 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Sep 11 00:31:36.602347 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 11 00:31:36.608966 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 11 00:31:36.609353 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 11 00:31:36.609508 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 11 00:31:36.614097 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 11 00:31:36.614905 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 11 00:31:36.615427 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 11 00:31:36.615506 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 11 00:31:36.618557 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 11 00:31:36.619307 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 11 00:31:36.619366 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 11 00:31:36.619696 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 11 00:31:36.619748 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:31:36.626932 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 11 00:31:36.627004 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 11 00:31:36.627975 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 11 00:31:36.628027 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 11 00:31:36.631997 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 11 00:31:36.634187 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 11 00:31:36.634255 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 11 00:31:36.644694 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 11 00:31:36.650028 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 11 00:31:36.652692 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 11 00:31:36.652758 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 11 00:31:36.654721 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 11 00:31:36.654766 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 11 00:31:36.656708 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 11 00:31:36.656776 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 11 00:31:36.659606 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 11 00:31:36.659670 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 11 00:31:36.662474 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 11 00:31:36.662532 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 11 00:31:36.666536 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 11 00:31:36.667664 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 11 00:31:36.667724 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 11 00:31:36.673051 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Sep 11 00:31:36.673129 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 11 00:31:36.676520 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 11 00:31:36.676589 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 11 00:31:36.680108 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 11 00:31:36.681058 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 11 00:31:36.682368 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 11 00:31:36.682433 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:31:36.687965 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 11 00:31:36.688048 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Sep 11 00:31:36.688107 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 11 00:31:36.688168 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 11 00:31:36.688581 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 11 00:31:36.688712 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 11 00:31:36.695039 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 11 00:31:36.695178 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 11 00:31:36.695793 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 11 00:31:36.699007 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 11 00:31:36.719503 systemd[1]: Switching root. Sep 11 00:31:36.769418 systemd-journald[218]: Journal stopped Sep 11 00:31:38.226481 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Sep 11 00:31:38.226545 kernel: SELinux: policy capability network_peer_controls=1 Sep 11 00:31:38.226560 kernel: SELinux: policy capability open_perms=1 Sep 11 00:31:38.226582 kernel: SELinux: policy capability extended_socket_class=1 Sep 11 00:31:38.226594 kernel: SELinux: policy capability always_check_network=0 Sep 11 00:31:38.226615 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 11 00:31:38.226627 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 11 00:31:38.226638 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 11 00:31:38.226655 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 11 00:31:38.226666 kernel: SELinux: policy capability userspace_initial_context=0 Sep 11 00:31:38.226678 kernel: audit: type=1403 audit(1757550697.330:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 11 00:31:38.226700 systemd[1]: Successfully loaded SELinux policy in 48.317ms. Sep 11 00:31:38.226715 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.316ms. Sep 11 00:31:38.226729 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 11 00:31:38.226746 systemd[1]: Detected virtualization kvm. Sep 11 00:31:38.226758 systemd[1]: Detected architecture x86-64. 
Sep 11 00:31:38.226770 systemd[1]: Detected first boot. Sep 11 00:31:38.226782 systemd[1]: Initializing machine ID from VM UUID. Sep 11 00:31:38.226794 zram_generator::config[1130]: No configuration found. Sep 11 00:31:38.226808 kernel: Guest personality initialized and is inactive Sep 11 00:31:38.226819 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 11 00:31:38.226848 kernel: Initialized host personality Sep 11 00:31:38.226870 kernel: NET: Registered PF_VSOCK protocol family Sep 11 00:31:38.226882 systemd[1]: Populated /etc with preset unit settings. Sep 11 00:31:38.226895 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 11 00:31:38.226907 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 11 00:31:38.226920 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 11 00:31:38.226932 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 11 00:31:38.226944 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 11 00:31:38.226957 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 11 00:31:38.226969 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 11 00:31:38.226986 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 11 00:31:38.226998 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 11 00:31:38.227011 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 11 00:31:38.227023 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 11 00:31:38.227035 systemd[1]: Created slice user.slice - User and Session Slice. Sep 11 00:31:38.227047 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 11 00:31:38.227060 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 11 00:31:38.227072 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 11 00:31:38.227084 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 11 00:31:38.227109 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 11 00:31:38.227122 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 11 00:31:38.227134 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 11 00:31:38.227147 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 11 00:31:38.227159 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 11 00:31:38.227171 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 11 00:31:38.227183 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 11 00:31:38.227202 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 11 00:31:38.227214 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 11 00:31:38.227226 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 11 00:31:38.227238 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 11 00:31:38.227251 systemd[1]: Reached target slices.target - Slice Units. 
Sep 11 00:31:38.227262 systemd[1]: Reached target swap.target - Swaps. Sep 11 00:31:38.227274 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 11 00:31:38.227286 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 11 00:31:38.227298 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 11 00:31:38.227310 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 11 00:31:38.227327 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 11 00:31:38.227340 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 11 00:31:38.227352 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 11 00:31:38.227363 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 11 00:31:38.227375 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 11 00:31:38.227387 systemd[1]: Mounting media.mount - External Media Directory... Sep 11 00:31:38.227405 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:31:38.227417 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 11 00:31:38.227429 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 11 00:31:38.227446 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 11 00:31:38.227458 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 11 00:31:38.227471 systemd[1]: Reached target machines.target - Containers. Sep 11 00:31:38.227484 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 11 00:31:38.227496 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 00:31:38.227508 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 11 00:31:38.227520 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 11 00:31:38.227532 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:31:38.227549 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 11 00:31:38.227562 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:31:38.227574 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 11 00:31:38.227586 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:31:38.227599 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 11 00:31:38.227611 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 11 00:31:38.227623 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 11 00:31:38.227635 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 11 00:31:38.227652 systemd[1]: Stopped systemd-fsck-usr.service. Sep 11 00:31:38.227665 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Sep 11 00:31:38.227678 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 11 00:31:38.227689 kernel: loop: module loaded Sep 11 00:31:38.227701 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 11 00:31:38.227712 kernel: ACPI: bus type drm_connector registered Sep 11 00:31:38.227724 kernel: fuse: init (API version 7.41) Sep 11 00:31:38.227736 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 11 00:31:38.227748 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 11 00:31:38.227765 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 11 00:31:38.227777 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 11 00:31:38.227789 systemd[1]: verity-setup.service: Deactivated successfully. Sep 11 00:31:38.227801 systemd[1]: Stopped verity-setup.service. Sep 11 00:31:38.227814 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:31:38.227845 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 11 00:31:38.227863 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 11 00:31:38.227875 systemd[1]: Mounted media.mount - External Media Directory. Sep 11 00:31:38.227887 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 11 00:31:38.227900 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 11 00:31:38.227916 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 11 00:31:38.227929 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 11 00:31:38.227941 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 11 00:31:38.227953 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 11 00:31:38.227988 systemd-journald[1194]: Collecting audit messages is disabled. Sep 11 00:31:38.228014 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:31:38.228026 systemd-journald[1194]: Journal started Sep 11 00:31:38.228054 systemd-journald[1194]: Runtime Journal (/run/log/journal/f5b5317cc92d4de7912343713952b373) is 6M, max 48.2M, 42.2M free. Sep 11 00:31:37.876894 systemd[1]: Queued start job for default target multi-user.target. Sep 11 00:31:37.900985 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 11 00:31:37.901500 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 11 00:31:38.242273 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:31:38.245282 systemd[1]: Started systemd-journald.service - Journal Service. Sep 11 00:31:38.246155 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 11 00:31:38.246466 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 11 00:31:38.247936 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:31:38.248234 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:31:38.249757 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 11 00:31:38.250106 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 11 00:31:38.251705 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 11 00:31:38.252033 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:31:38.253496 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 11 00:31:38.254940 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 11 00:31:38.265507 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 11 00:31:38.267247 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 11 00:31:38.281506 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 11 00:31:38.284742 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 11 00:31:38.287172 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 11 00:31:38.288568 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 11 00:31:38.288597 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 11 00:31:38.290995 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 11 00:31:38.299967 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 11 00:31:38.301617 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:31:38.303942 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 11 00:31:38.307061 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 11 00:31:38.308491 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 11 00:31:38.323682 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 11 00:31:38.325160 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 11 00:31:38.327939 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 11 00:31:38.330899 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 11 00:31:38.333987 systemd-journald[1194]: Time spent on flushing to /var/log/journal/f5b5317cc92d4de7912343713952b373 is 25.472ms for 1044 entries. Sep 11 00:31:38.333987 systemd-journald[1194]: System Journal (/var/log/journal/f5b5317cc92d4de7912343713952b373) is 8M, max 195.6M, 187.6M free. Sep 11 00:31:38.368539 systemd-journald[1194]: Received client request to flush runtime journal. Sep 11 00:31:38.368576 kernel: loop0: detected capacity change from 0 to 224512 Sep 11 00:31:38.334739 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 11 00:31:38.344466 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 11 00:31:38.348414 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 11 00:31:38.350033 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 11 00:31:38.352026 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 11 00:31:38.353544 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 11 00:31:38.360691 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
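The journald entries above report the runtime journal being flushed to persistent storage under /var/log/journal/<machine-id>. A short sketch, assuming journalctl is available on the booted system, reads a few entries of the current boot back as JSON (with -o json, journalctl emits one JSON object per line):

```python
#!/usr/bin/env python3
"""Sketch: read a few current-boot journal entries back as JSON objects."""
import json
import subprocess

def recent_entries(count: int = 20):
    # `-b` selects the current boot, `-o json` emits one JSON object per line,
    # and `-n` limits how many entries are returned.
    out = subprocess.run(
        ["journalctl", "-b", "-o", "json", "-n", str(count)],
        check=True, capture_output=True, text=True,
    ).stdout
    for line in out.splitlines():
        entry = json.loads(line)
        yield entry.get("SYSLOG_IDENTIFIER", "?"), entry.get("MESSAGE", "")

if __name__ == "__main__":
    for ident, message in recent_entries():
        print(f"{ident}: {message}")
```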
Sep 11 00:31:38.364549 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 11 00:31:38.378245 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 11 00:31:38.388362 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:31:38.396844 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 11 00:31:38.397347 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Sep 11 00:31:38.397367 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Sep 11 00:31:38.406878 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 11 00:31:38.410186 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 11 00:31:38.413234 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 11 00:31:38.419856 kernel: loop1: detected capacity change from 0 to 146240 Sep 11 00:31:38.453888 kernel: loop2: detected capacity change from 0 to 113872 Sep 11 00:31:38.456206 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 11 00:31:38.459328 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 11 00:31:38.491656 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Sep 11 00:31:38.492186 kernel: loop3: detected capacity change from 0 to 224512 Sep 11 00:31:38.491679 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Sep 11 00:31:38.497536 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 11 00:31:38.502861 kernel: loop4: detected capacity change from 0 to 146240 Sep 11 00:31:38.519957 kernel: loop5: detected capacity change from 0 to 113872 Sep 11 00:31:38.528280 (sd-merge)[1275]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 11 00:31:38.528906 (sd-merge)[1275]: Merged extensions into '/usr'. Sep 11 00:31:38.534345 systemd[1]: Reload requested from client PID 1248 ('systemd-sysext') (unit systemd-sysext.service)... Sep 11 00:31:38.534363 systemd[1]: Reloading... Sep 11 00:31:38.599867 zram_generator::config[1299]: No configuration found. Sep 11 00:31:38.702751 ldconfig[1236]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 11 00:31:38.735373 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:31:38.818434 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 11 00:31:38.818552 systemd[1]: Reloading finished in 283 ms. Sep 11 00:31:38.849538 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 11 00:31:38.851259 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 11 00:31:38.868514 systemd[1]: Starting ensure-sysext.service... Sep 11 00:31:38.870647 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 11 00:31:38.893986 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 11 00:31:38.894030 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
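The (sd-merge) entries above show systemd-sysext merging the containerd-flatcar, docker-flatcar and kubernetes extension images into /usr. A small sketch, not part of the transcript, lists whatever is present under /etc/extensions on the booted system (the kubernetes image was linked there by the Ignition files stage; the Flatcar-provided extensions may live elsewhere):

```python
#!/usr/bin/env python3
"""Sketch: list the extension images under /etc/extensions that sysext merges."""
from pathlib import Path

def list_extensions(root: str = "/etc/extensions"):
    for entry in sorted(Path(root).iterdir()):
        # Ignition created e.g. kubernetes.raw as a symlink into /opt/extensions.
        target = entry.resolve() if entry.is_symlink() else entry
        yield entry.name, str(target)

if __name__ == "__main__":
    for name, target in list_extensions():
        print(f"{name} -> {target}")
```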
Sep 11 00:31:38.894364 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 11 00:31:38.894619 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 11 00:31:38.895613 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 11 00:31:38.895919 systemd-tmpfiles[1341]: ACLs are not supported, ignoring. Sep 11 00:31:38.895993 systemd-tmpfiles[1341]: ACLs are not supported, ignoring. Sep 11 00:31:38.898641 systemd[1]: Reload requested from client PID 1340 ('systemctl') (unit ensure-sysext.service)... Sep 11 00:31:38.898660 systemd[1]: Reloading... Sep 11 00:31:38.900684 systemd-tmpfiles[1341]: Detected autofs mount point /boot during canonicalization of boot. Sep 11 00:31:38.900698 systemd-tmpfiles[1341]: Skipping /boot Sep 11 00:31:38.913892 systemd-tmpfiles[1341]: Detected autofs mount point /boot during canonicalization of boot. Sep 11 00:31:38.913991 systemd-tmpfiles[1341]: Skipping /boot Sep 11 00:31:38.956877 zram_generator::config[1369]: No configuration found. Sep 11 00:31:39.052958 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:31:39.143042 systemd[1]: Reloading finished in 243 ms. Sep 11 00:31:39.165756 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 11 00:31:39.190118 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 11 00:31:39.199950 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 00:31:39.202758 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 11 00:31:39.222566 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 11 00:31:39.226346 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 11 00:31:39.229228 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 11 00:31:39.233299 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 11 00:31:39.238267 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:31:39.238461 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 00:31:39.242033 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:31:39.245757 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:31:39.249226 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:31:39.250450 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:31:39.250555 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:31:39.255238 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Sep 11 00:31:39.256667 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:31:39.258653 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:31:39.258947 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:31:39.264142 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:31:39.264512 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:31:39.266650 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:31:39.266969 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:31:39.279041 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 11 00:31:39.285838 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 11 00:31:39.291286 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:31:39.292015 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 00:31:39.341962 augenrules[1442]: No rules Sep 11 00:31:39.720496 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:31:39.727031 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 11 00:31:39.732680 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:31:39.736559 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:31:39.738001 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:31:39.738182 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:31:39.741493 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 11 00:31:39.743999 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:31:39.746852 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 00:31:39.753138 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 00:31:39.755949 systemd-udevd[1411]: Using default interface naming scheme 'v255'. Sep 11 00:31:39.756551 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 11 00:31:39.759332 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 11 00:31:39.761665 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:31:39.761994 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:31:39.764285 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 11 00:31:39.764554 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 11 00:31:39.767716 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:31:39.768290 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:31:39.770882 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 11 00:31:39.771168 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:31:39.773419 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 11 00:31:39.783538 systemd[1]: Finished ensure-sysext.service. Sep 11 00:31:39.792332 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 11 00:31:39.797955 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 11 00:31:39.799313 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 11 00:31:39.799398 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 11 00:31:39.805180 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 11 00:31:39.806518 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 11 00:31:39.858984 systemd-resolved[1410]: Positive Trust Anchors: Sep 11 00:31:39.859008 systemd-resolved[1410]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 11 00:31:39.859042 systemd-resolved[1410]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 11 00:31:39.864381 systemd-resolved[1410]: Defaulting to hostname 'linux'. Sep 11 00:31:39.866617 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 11 00:31:39.868145 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 11 00:31:39.961350 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 11 00:31:39.972254 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 11 00:31:39.976462 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 11 00:31:39.999288 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 11 00:31:40.006856 kernel: mousedev: PS/2 mouse device common for all mice Sep 11 00:31:40.039869 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Sep 11 00:31:40.042074 systemd-networkd[1468]: lo: Link UP Sep 11 00:31:40.042085 systemd-networkd[1468]: lo: Gained carrier Sep 11 00:31:40.044104 systemd-networkd[1468]: Enumeration completed Sep 11 00:31:40.044204 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 11 00:31:40.044734 systemd-networkd[1468]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:31:40.044744 systemd-networkd[1468]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 11 00:31:40.045402 systemd-networkd[1468]: eth0: Link UP Sep 11 00:31:40.045690 systemd-networkd[1468]: eth0: Gained carrier Sep 11 00:31:40.045705 systemd-networkd[1468]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:31:40.045979 systemd[1]: Reached target network.target - Network. Sep 11 00:31:40.051913 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 11 00:31:40.056094 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 11 00:31:40.058369 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 11 00:31:40.058870 kernel: ACPI: button: Power Button [PWRF] Sep 11 00:31:40.059789 systemd[1]: Reached target sysinit.target - System Initialization. Sep 11 00:31:40.061080 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 11 00:31:40.062447 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 11 00:31:40.063800 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 11 00:31:40.063905 systemd-networkd[1468]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 11 00:31:40.065062 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 11 00:31:40.066416 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 11 00:31:40.066451 systemd[1]: Reached target paths.target - Path Units. Sep 11 00:31:40.066474 systemd-timesyncd[1472]: Network configuration changed, trying to establish connection. Sep 11 00:31:41.005009 systemd[1]: Reached target time-set.target - System Time Set. Sep 11 00:31:41.005078 systemd-timesyncd[1472]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 11 00:31:41.005134 systemd-timesyncd[1472]: Initial clock synchronization to Thu 2025-09-11 00:31:41.004971 UTC. Sep 11 00:31:41.006353 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 11 00:31:41.007088 systemd-resolved[1410]: Clock change detected. Flushing caches. Sep 11 00:31:41.007932 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 11 00:31:41.009188 systemd[1]: Reached target timers.target - Timer Units. Sep 11 00:31:41.011325 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 11 00:31:41.015691 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 11 00:31:41.019456 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 11 00:31:41.021180 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 11 00:31:41.022661 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 11 00:31:41.031257 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 11 00:31:41.031570 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 11 00:31:41.031741 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 11 00:31:41.038430 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 11 00:31:41.040805 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Sep 11 00:31:41.043434 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 11 00:31:41.054376 systemd[1]: Reached target sockets.target - Socket Units. Sep 11 00:31:41.055633 systemd[1]: Reached target basic.target - Basic System. Sep 11 00:31:41.057197 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 11 00:31:41.057239 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 11 00:31:41.064593 systemd[1]: Starting containerd.service - containerd container runtime... Sep 11 00:31:41.068571 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 11 00:31:41.072857 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 11 00:31:41.082824 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 11 00:31:41.087335 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 11 00:31:41.088688 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 11 00:31:41.092032 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 11 00:31:41.096211 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 11 00:31:41.103511 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 11 00:31:41.111452 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 11 00:31:41.116794 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 11 00:31:41.122318 jq[1527]: false Sep 11 00:31:41.129511 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 11 00:31:41.132472 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 11 00:31:41.133077 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 11 00:31:41.134257 extend-filesystems[1528]: Found /dev/vda6 Sep 11 00:31:41.134438 systemd[1]: Starting update-engine.service - Update Engine... Sep 11 00:31:41.136909 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 11 00:31:41.142409 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 11 00:31:41.143809 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Refreshing passwd entry cache Sep 11 00:31:41.143830 oslogin_cache_refresh[1529]: Refreshing passwd entry cache Sep 11 00:31:41.160956 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Failure getting users, quitting Sep 11 00:31:41.160956 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 11 00:31:41.160956 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Refreshing group entry cache Sep 11 00:31:41.160149 oslogin_cache_refresh[1529]: Failure getting users, quitting Sep 11 00:31:41.160185 oslogin_cache_refresh[1529]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Sep 11 00:31:41.161320 jq[1542]: true Sep 11 00:31:41.160278 oslogin_cache_refresh[1529]: Refreshing group entry cache Sep 11 00:31:41.171533 extend-filesystems[1528]: Found /dev/vda9 Sep 11 00:31:41.171533 extend-filesystems[1528]: Checking size of /dev/vda9 Sep 11 00:31:41.162560 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 11 00:31:41.174104 update_engine[1540]: I20250911 00:31:41.173486 1540 main.cc:92] Flatcar Update Engine starting Sep 11 00:31:41.165088 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 11 00:31:41.165435 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 11 00:31:41.167976 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 11 00:31:41.168234 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 11 00:31:41.177414 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Failure getting groups, quitting Sep 11 00:31:41.177414 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 11 00:31:41.176408 oslogin_cache_refresh[1529]: Failure getting groups, quitting Sep 11 00:31:41.176428 oslogin_cache_refresh[1529]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 11 00:31:41.184129 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 11 00:31:41.184442 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 11 00:31:41.197834 jq[1553]: true Sep 11 00:31:41.201353 systemd[1]: motdgen.service: Deactivated successfully. Sep 11 00:31:41.205622 extend-filesystems[1528]: Resized partition /dev/vda9 Sep 11 00:31:41.201697 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 11 00:31:41.208132 (ntainerd)[1555]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 11 00:31:41.223614 extend-filesystems[1575]: resize2fs 1.47.2 (1-Jan-2025) Sep 11 00:31:41.251271 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 11 00:31:41.251425 tar[1552]: linux-amd64/LICENSE Sep 11 00:31:41.251425 tar[1552]: linux-amd64/helm Sep 11 00:31:41.259105 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:31:41.267706 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 11 00:31:41.268058 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:31:41.271459 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:31:41.540281 systemd-logind[1539]: Watching system buttons on /dev/input/event2 (Power Button) Sep 11 00:31:41.547431 kernel: kvm_amd: TSC scaling supported Sep 11 00:31:41.547502 kernel: kvm_amd: Nested Virtualization enabled Sep 11 00:31:41.547540 kernel: kvm_amd: Nested Paging enabled Sep 11 00:31:41.547575 kernel: kvm_amd: LBR virtualization supported Sep 11 00:31:41.547618 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 11 00:31:41.547646 kernel: kvm_amd: Virtual GIF supported Sep 11 00:31:41.540359 systemd-logind[1539]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 11 00:31:41.557457 systemd-logind[1539]: New seat seat0. 
Sep 11 00:31:41.590517 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 11 00:31:41.576847 dbus-daemon[1521]: [system] SELinux support is enabled Sep 11 00:31:41.562904 systemd[1]: Started systemd-logind.service - User Login Management. Sep 11 00:31:41.577066 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 11 00:31:41.580627 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 11 00:31:41.580650 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 11 00:31:41.580740 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 11 00:31:41.580754 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 11 00:31:41.594853 bash[1591]: Updated "/home/core/.ssh/authorized_keys" Sep 11 00:31:41.596118 extend-filesystems[1575]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 11 00:31:41.596118 extend-filesystems[1575]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 11 00:31:41.596118 extend-filesystems[1575]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 11 00:31:41.596907 extend-filesystems[1528]: Resized filesystem in /dev/vda9 Sep 11 00:31:41.598695 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 11 00:31:41.599323 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 11 00:31:41.606587 update_engine[1540]: I20250911 00:31:41.606388 1540 update_check_scheduler.cc:74] Next update check in 2m13s Sep 11 00:31:41.613325 kernel: EDAC MC: Ver: 3.0.0 Sep 11 00:31:41.644761 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 11 00:31:41.646519 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:31:41.653003 systemd[1]: Started update-engine.service - Update Engine. Sep 11 00:31:41.656888 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 11 00:31:41.660700 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
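The resize2fs/extend-filesystems entries above grow /dev/vda9 from 553472 to 1864699 blocks of 4 KiB, i.e. from roughly 2.1 GiB to roughly 7.1 GiB. A one-off Python check of that arithmetic:

    BLOCK = 4096  # extend-filesystems reports the filesystem in 4k blocks
    old_blocks, new_blocks = 553_472, 1_864_699
    to_gib = lambda blocks: blocks * BLOCK / 2**30
    print(f"/dev/vda9: {to_gib(old_blocks):.2f} GiB -> {to_gib(new_blocks):.2f} GiB")  # 2.11 -> 7.11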
Sep 11 00:31:41.777267 locksmithd[1608]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 11 00:31:41.907336 containerd[1555]: time="2025-09-11T00:31:41Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 11 00:31:41.910769 containerd[1555]: time="2025-09-11T00:31:41.910705457Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 11 00:31:41.926876 containerd[1555]: time="2025-09-11T00:31:41.926798632Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.569µs" Sep 11 00:31:41.926876 containerd[1555]: time="2025-09-11T00:31:41.926874093Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 11 00:31:41.926971 containerd[1555]: time="2025-09-11T00:31:41.926915511Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 11 00:31:41.927496 containerd[1555]: time="2025-09-11T00:31:41.927342211Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 11 00:31:41.927496 containerd[1555]: time="2025-09-11T00:31:41.927382717Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 11 00:31:41.927496 containerd[1555]: time="2025-09-11T00:31:41.927424796Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 00:31:41.927596 containerd[1555]: time="2025-09-11T00:31:41.927524904Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 00:31:41.927596 containerd[1555]: time="2025-09-11T00:31:41.927540022Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 00:31:41.927955 containerd[1555]: time="2025-09-11T00:31:41.927913883Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 00:31:41.927955 containerd[1555]: time="2025-09-11T00:31:41.927940713Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 00:31:41.927997 containerd[1555]: time="2025-09-11T00:31:41.927956242Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 00:31:41.927997 containerd[1555]: time="2025-09-11T00:31:41.927968846Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 11 00:31:41.928135 containerd[1555]: time="2025-09-11T00:31:41.928097998Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 11 00:31:41.928547 containerd[1555]: time="2025-09-11T00:31:41.928498128Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 00:31:41.928587 containerd[1555]: time="2025-09-11T00:31:41.928564403Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 00:31:41.928587 containerd[1555]: time="2025-09-11T00:31:41.928579792Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 11 00:31:41.928645 containerd[1555]: time="2025-09-11T00:31:41.928625858Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 11 00:31:41.928900 containerd[1555]: time="2025-09-11T00:31:41.928870336Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 11 00:31:41.928970 containerd[1555]: time="2025-09-11T00:31:41.928949004Z" level=info msg="metadata content store policy set" policy=shared Sep 11 00:31:41.934345 containerd[1555]: time="2025-09-11T00:31:41.934310757Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 11 00:31:41.934496 containerd[1555]: time="2025-09-11T00:31:41.934471579Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 11 00:31:41.934675 containerd[1555]: time="2025-09-11T00:31:41.934656185Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 11 00:31:41.934758 containerd[1555]: time="2025-09-11T00:31:41.934733460Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 11 00:31:41.934842 containerd[1555]: time="2025-09-11T00:31:41.934823258Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 11 00:31:41.934903 containerd[1555]: time="2025-09-11T00:31:41.934890895Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 11 00:31:41.935834 containerd[1555]: time="2025-09-11T00:31:41.934973219Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 11 00:31:41.935834 containerd[1555]: time="2025-09-11T00:31:41.934994259Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 11 00:31:41.935834 containerd[1555]: time="2025-09-11T00:31:41.935009768Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 11 00:31:41.935834 containerd[1555]: time="2025-09-11T00:31:41.935020077Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 11 00:31:41.935834 containerd[1555]: time="2025-09-11T00:31:41.935029615Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 11 00:31:41.935834 containerd[1555]: time="2025-09-11T00:31:41.935042840Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 11 00:31:41.935834 containerd[1555]: time="2025-09-11T00:31:41.935347741Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 11 00:31:41.935834 containerd[1555]: time="2025-09-11T00:31:41.935374602Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 11 00:31:41.935834 containerd[1555]: time="2025-09-11T00:31:41.935406221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 11 
00:31:41.935834 containerd[1555]: time="2025-09-11T00:31:41.935427251Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 11 00:31:41.935834 containerd[1555]: time="2025-09-11T00:31:41.935441287Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 11 00:31:41.935834 containerd[1555]: time="2025-09-11T00:31:41.935462306Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 11 00:31:41.935834 containerd[1555]: time="2025-09-11T00:31:41.935475531Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 11 00:31:41.935834 containerd[1555]: time="2025-09-11T00:31:41.935514725Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 11 00:31:41.935834 containerd[1555]: time="2025-09-11T00:31:41.935545472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 11 00:31:41.936283 containerd[1555]: time="2025-09-11T00:31:41.935570960Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 11 00:31:41.936283 containerd[1555]: time="2025-09-11T00:31:41.935600696Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 11 00:31:41.936283 containerd[1555]: time="2025-09-11T00:31:41.935732773Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 11 00:31:41.936283 containerd[1555]: time="2025-09-11T00:31:41.935755817Z" level=info msg="Start snapshots syncer" Sep 11 00:31:41.936617 containerd[1555]: time="2025-09-11T00:31:41.936496335Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 11 00:31:41.937106 containerd[1555]: time="2025-09-11T00:31:41.937061344Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 11 00:31:41.938320 containerd[1555]: time="2025-09-11T00:31:41.937329838Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 11 00:31:41.938320 containerd[1555]: time="2025-09-11T00:31:41.937422592Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 11 00:31:41.938320 containerd[1555]: time="2025-09-11T00:31:41.937543238Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 11 00:31:41.938320 containerd[1555]: time="2025-09-11T00:31:41.937588132Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 11 00:31:41.938320 containerd[1555]: time="2025-09-11T00:31:41.937620162Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 11 00:31:41.938320 containerd[1555]: time="2025-09-11T00:31:41.937657412Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 11 00:31:41.938320 containerd[1555]: time="2025-09-11T00:31:41.937690915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 11 00:31:41.938320 containerd[1555]: time="2025-09-11T00:31:41.937711463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 11 00:31:41.938320 containerd[1555]: time="2025-09-11T00:31:41.937722374Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 11 00:31:41.938320 containerd[1555]: time="2025-09-11T00:31:41.937749555Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 11 00:31:41.938320 containerd[1555]: 
time="2025-09-11T00:31:41.937780353Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 11 00:31:41.938320 containerd[1555]: time="2025-09-11T00:31:41.937793277Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 11 00:31:41.938320 containerd[1555]: time="2025-09-11T00:31:41.937832771Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:31:41.938320 containerd[1555]: time="2025-09-11T00:31:41.937864621Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:31:41.938598 containerd[1555]: time="2025-09-11T00:31:41.937881633Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:31:41.938598 containerd[1555]: time="2025-09-11T00:31:41.937911569Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:31:41.938598 containerd[1555]: time="2025-09-11T00:31:41.937923812Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 11 00:31:41.938598 containerd[1555]: time="2025-09-11T00:31:41.937933890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 11 00:31:41.938598 containerd[1555]: time="2025-09-11T00:31:41.937972172Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 11 00:31:41.938598 containerd[1555]: time="2025-09-11T00:31:41.938024220Z" level=info msg="runtime interface created" Sep 11 00:31:41.938598 containerd[1555]: time="2025-09-11T00:31:41.938044267Z" level=info msg="created NRI interface" Sep 11 00:31:41.938598 containerd[1555]: time="2025-09-11T00:31:41.938060718Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 11 00:31:41.938598 containerd[1555]: time="2025-09-11T00:31:41.938076668Z" level=info msg="Connect containerd service" Sep 11 00:31:41.938598 containerd[1555]: time="2025-09-11T00:31:41.938109470Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 11 00:31:41.939744 containerd[1555]: time="2025-09-11T00:31:41.939720701Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 11 00:31:42.081602 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 11 00:31:42.085709 sshd_keygen[1574]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 11 00:31:42.115316 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 11 00:31:42.235742 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 11 00:31:42.237917 systemd[1]: Started sshd@0-10.0.0.151:22-10.0.0.1:47728.service - OpenSSH per-connection server daemon (10.0.0.1:47728). Sep 11 00:31:42.264702 systemd[1]: issuegen.service: Deactivated successfully. Sep 11 00:31:42.266939 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 11 00:31:42.273574 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Sep 11 00:31:42.316004 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 11 00:31:42.322530 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 11 00:31:42.325937 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 11 00:31:42.327515 systemd[1]: Reached target getty.target - Login Prompts. Sep 11 00:31:42.365218 tar[1552]: linux-amd64/README.md Sep 11 00:31:42.419786 containerd[1555]: time="2025-09-11T00:31:42.419413795Z" level=info msg="Start subscribing containerd event" Sep 11 00:31:42.419786 containerd[1555]: time="2025-09-11T00:31:42.419569317Z" level=info msg="Start recovering state" Sep 11 00:31:42.419920 sshd[1633]: Accepted publickey for core from 10.0.0.1 port 47728 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:31:42.420154 containerd[1555]: time="2025-09-11T00:31:42.419805710Z" level=info msg="Start event monitor" Sep 11 00:31:42.420154 containerd[1555]: time="2025-09-11T00:31:42.419816079Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 11 00:31:42.420154 containerd[1555]: time="2025-09-11T00:31:42.419839794Z" level=info msg="Start cni network conf syncer for default" Sep 11 00:31:42.420154 containerd[1555]: time="2025-09-11T00:31:42.419860463Z" level=info msg="Start streaming server" Sep 11 00:31:42.420154 containerd[1555]: time="2025-09-11T00:31:42.419877635Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 11 00:31:42.420154 containerd[1555]: time="2025-09-11T00:31:42.419885349Z" level=info msg="runtime interface starting up..." Sep 11 00:31:42.420154 containerd[1555]: time="2025-09-11T00:31:42.419890158Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 11 00:31:42.420154 containerd[1555]: time="2025-09-11T00:31:42.419891541Z" level=info msg="starting plugins..." Sep 11 00:31:42.420154 containerd[1555]: time="2025-09-11T00:31:42.419932909Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 11 00:31:42.420154 containerd[1555]: time="2025-09-11T00:31:42.420113577Z" level=info msg="containerd successfully booted in 0.513625s" Sep 11 00:31:42.420227 systemd[1]: Started containerd.service - containerd container runtime. Sep 11 00:31:42.424463 sshd-session[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:31:42.429649 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 11 00:31:42.434723 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 11 00:31:42.436985 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 11 00:31:42.445715 systemd-logind[1539]: New session 1 of user core. Sep 11 00:31:42.479046 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 11 00:31:42.484139 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 11 00:31:42.503831 (systemd)[1654]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 11 00:31:42.506388 systemd-logind[1539]: New session c1 of user core. Sep 11 00:31:42.667471 systemd[1654]: Queued start job for default target default.target. Sep 11 00:31:42.691109 systemd[1654]: Created slice app.slice - User Application Slice. Sep 11 00:31:42.691140 systemd[1654]: Reached target paths.target - Paths. Sep 11 00:31:42.691185 systemd[1654]: Reached target timers.target - Timers. Sep 11 00:31:42.693125 systemd[1654]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Sep 11 00:31:42.706002 systemd[1654]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 11 00:31:42.706127 systemd[1654]: Reached target sockets.target - Sockets. Sep 11 00:31:42.706171 systemd[1654]: Reached target basic.target - Basic System. Sep 11 00:31:42.706211 systemd[1654]: Reached target default.target - Main User Target. Sep 11 00:31:42.706243 systemd[1654]: Startup finished in 193ms. Sep 11 00:31:42.706821 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 11 00:31:42.709734 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 11 00:31:42.741527 systemd-networkd[1468]: eth0: Gained IPv6LL Sep 11 00:31:42.745337 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 11 00:31:42.747102 systemd[1]: Reached target network-online.target - Network is Online. Sep 11 00:31:42.749932 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 11 00:31:42.752380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:31:42.754779 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 11 00:31:42.779137 systemd[1]: Started sshd@1-10.0.0.151:22-10.0.0.1:47734.service - OpenSSH per-connection server daemon (10.0.0.1:47734). Sep 11 00:31:42.797874 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 11 00:31:42.807167 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 11 00:31:42.807600 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 11 00:31:42.809472 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 11 00:31:42.830215 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 47734 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:31:42.833853 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:31:42.839122 systemd-logind[1539]: New session 2 of user core. Sep 11 00:31:42.846540 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 11 00:31:42.905011 sshd[1685]: Connection closed by 10.0.0.1 port 47734 Sep 11 00:31:42.905393 sshd-session[1675]: pam_unix(sshd:session): session closed for user core Sep 11 00:31:42.916119 systemd[1]: sshd@1-10.0.0.151:22-10.0.0.1:47734.service: Deactivated successfully. Sep 11 00:31:42.918284 systemd[1]: session-2.scope: Deactivated successfully. Sep 11 00:31:42.919162 systemd-logind[1539]: Session 2 logged out. Waiting for processes to exit. Sep 11 00:31:42.922697 systemd[1]: Started sshd@2-10.0.0.151:22-10.0.0.1:47748.service - OpenSSH per-connection server daemon (10.0.0.1:47748). Sep 11 00:31:42.924768 systemd-logind[1539]: Removed session 2. Sep 11 00:31:42.981798 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 47748 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:31:42.983560 sshd-session[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:31:42.988764 systemd-logind[1539]: New session 3 of user core. Sep 11 00:31:43.091661 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 11 00:31:43.173323 sshd[1693]: Connection closed by 10.0.0.1 port 47748 Sep 11 00:31:43.173618 sshd-session[1691]: pam_unix(sshd:session): session closed for user core Sep 11 00:31:43.178240 systemd[1]: sshd@2-10.0.0.151:22-10.0.0.1:47748.service: Deactivated successfully. 
Sep 11 00:31:43.180109 systemd[1]: session-3.scope: Deactivated successfully. Sep 11 00:31:43.180900 systemd-logind[1539]: Session 3 logged out. Waiting for processes to exit. Sep 11 00:31:43.182598 systemd-logind[1539]: Removed session 3. Sep 11 00:31:44.217680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:31:44.219376 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 11 00:31:44.220776 systemd[1]: Startup finished in 4.119s (kernel) + 9.695s (initrd) + 6.000s (userspace) = 19.815s. Sep 11 00:31:44.221931 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:31:44.748193 kubelet[1703]: E0911 00:31:44.748083 1703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:31:44.752444 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:31:44.752669 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:31:44.753062 systemd[1]: kubelet.service: Consumed 1.739s CPU time, 265.7M memory peak. Sep 11 00:31:53.189256 systemd[1]: Started sshd@3-10.0.0.151:22-10.0.0.1:54774.service - OpenSSH per-connection server daemon (10.0.0.1:54774). Sep 11 00:31:53.249257 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 54774 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:31:53.250707 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:31:53.255035 systemd-logind[1539]: New session 4 of user core. Sep 11 00:31:53.270443 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 11 00:31:53.323909 sshd[1718]: Connection closed by 10.0.0.1 port 54774 Sep 11 00:31:53.324363 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Sep 11 00:31:53.337761 systemd[1]: sshd@3-10.0.0.151:22-10.0.0.1:54774.service: Deactivated successfully. Sep 11 00:31:53.339416 systemd[1]: session-4.scope: Deactivated successfully. Sep 11 00:31:53.340131 systemd-logind[1539]: Session 4 logged out. Waiting for processes to exit. Sep 11 00:31:53.342877 systemd[1]: Started sshd@4-10.0.0.151:22-10.0.0.1:54782.service - OpenSSH per-connection server daemon (10.0.0.1:54782). Sep 11 00:31:53.343482 systemd-logind[1539]: Removed session 4. Sep 11 00:31:53.388788 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 54782 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:31:53.390227 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:31:53.394588 systemd-logind[1539]: New session 5 of user core. Sep 11 00:31:53.405432 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 11 00:31:53.454950 sshd[1726]: Connection closed by 10.0.0.1 port 54782 Sep 11 00:31:53.455279 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Sep 11 00:31:53.469232 systemd[1]: sshd@4-10.0.0.151:22-10.0.0.1:54782.service: Deactivated successfully. Sep 11 00:31:53.471318 systemd[1]: session-5.scope: Deactivated successfully. Sep 11 00:31:53.472040 systemd-logind[1539]: Session 5 logged out. Waiting for processes to exit. 
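The "Startup finished" line above splits the 19.815 s boot into kernel, initrd and userspace phases; summing the three displayed figures gives 19.814 s, and the missing millisecond is just rounding of the individual components. The trivial check in Python:

    kernel, initrd, userspace = 4.119, 9.695, 6.000   # seconds, as printed by systemd
    print(f"{kernel + initrd + userspace:.3f} s vs. the logged total of 19.815 s")  # 19.814 s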
Sep 11 00:31:53.475039 systemd[1]: Started sshd@5-10.0.0.151:22-10.0.0.1:54794.service - OpenSSH per-connection server daemon (10.0.0.1:54794). Sep 11 00:31:53.475860 systemd-logind[1539]: Removed session 5. Sep 11 00:31:53.528663 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 54794 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:31:53.529918 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:31:53.534328 systemd-logind[1539]: New session 6 of user core. Sep 11 00:31:53.548434 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 11 00:31:53.601772 sshd[1734]: Connection closed by 10.0.0.1 port 54794 Sep 11 00:31:53.602152 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Sep 11 00:31:53.615003 systemd[1]: sshd@5-10.0.0.151:22-10.0.0.1:54794.service: Deactivated successfully. Sep 11 00:31:53.616891 systemd[1]: session-6.scope: Deactivated successfully. Sep 11 00:31:53.617611 systemd-logind[1539]: Session 6 logged out. Waiting for processes to exit. Sep 11 00:31:53.620703 systemd[1]: Started sshd@6-10.0.0.151:22-10.0.0.1:54810.service - OpenSSH per-connection server daemon (10.0.0.1:54810). Sep 11 00:31:53.621249 systemd-logind[1539]: Removed session 6. Sep 11 00:31:53.683624 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 54810 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:31:53.684999 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:31:53.689557 systemd-logind[1539]: New session 7 of user core. Sep 11 00:31:53.703427 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 11 00:31:53.762117 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 11 00:31:53.762490 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:31:53.780652 sudo[1743]: pam_unix(sudo:session): session closed for user root Sep 11 00:31:53.782316 sshd[1742]: Connection closed by 10.0.0.1 port 54810 Sep 11 00:31:53.782727 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Sep 11 00:31:53.796117 systemd[1]: sshd@6-10.0.0.151:22-10.0.0.1:54810.service: Deactivated successfully. Sep 11 00:31:53.798023 systemd[1]: session-7.scope: Deactivated successfully. Sep 11 00:31:53.798841 systemd-logind[1539]: Session 7 logged out. Waiting for processes to exit. Sep 11 00:31:53.801953 systemd[1]: Started sshd@7-10.0.0.151:22-10.0.0.1:54820.service - OpenSSH per-connection server daemon (10.0.0.1:54820). Sep 11 00:31:53.802703 systemd-logind[1539]: Removed session 7. Sep 11 00:31:53.841274 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 54820 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:31:53.842725 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:31:53.847533 systemd-logind[1539]: New session 8 of user core. Sep 11 00:31:53.862453 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 11 00:31:53.916273 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 11 00:31:53.916620 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:31:53.977630 sudo[1753]: pam_unix(sudo:session): session closed for user root Sep 11 00:31:53.984890 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 11 00:31:53.985334 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:31:53.996738 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 00:31:54.058382 augenrules[1775]: No rules Sep 11 00:31:54.060275 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 00:31:54.060591 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 00:31:54.062037 sudo[1752]: pam_unix(sudo:session): session closed for user root Sep 11 00:31:54.063882 sshd[1751]: Connection closed by 10.0.0.1 port 54820 Sep 11 00:31:54.064252 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Sep 11 00:31:54.082487 systemd[1]: sshd@7-10.0.0.151:22-10.0.0.1:54820.service: Deactivated successfully. Sep 11 00:31:54.084377 systemd[1]: session-8.scope: Deactivated successfully. Sep 11 00:31:54.085212 systemd-logind[1539]: Session 8 logged out. Waiting for processes to exit. Sep 11 00:31:54.088247 systemd[1]: Started sshd@8-10.0.0.151:22-10.0.0.1:54824.service - OpenSSH per-connection server daemon (10.0.0.1:54824). Sep 11 00:31:54.089079 systemd-logind[1539]: Removed session 8. Sep 11 00:31:54.141289 sshd[1784]: Accepted publickey for core from 10.0.0.1 port 54824 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:31:54.142890 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:31:54.148097 systemd-logind[1539]: New session 9 of user core. Sep 11 00:31:54.157446 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 11 00:31:54.213287 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 11 00:31:54.213738 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:31:54.797021 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 11 00:31:54.798626 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 11 00:31:54.799810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:31:54.813677 (dockerd)[1807]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 11 00:31:55.159215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 11 00:31:55.163735 (kubelet)[1821]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:31:55.251193 kubelet[1821]: E0911 00:31:55.251105 1821 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:31:55.258246 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:31:55.258489 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:31:55.258905 systemd[1]: kubelet.service: Consumed 335ms CPU time, 110.8M memory peak. Sep 11 00:31:55.266436 dockerd[1807]: time="2025-09-11T00:31:55.266354305Z" level=info msg="Starting up" Sep 11 00:31:55.267331 dockerd[1807]: time="2025-09-11T00:31:55.267291112Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 11 00:31:55.641976 dockerd[1807]: time="2025-09-11T00:31:55.641805566Z" level=info msg="Loading containers: start." Sep 11 00:31:55.653348 kernel: Initializing XFRM netlink socket Sep 11 00:31:56.057381 systemd-networkd[1468]: docker0: Link UP Sep 11 00:31:56.064592 dockerd[1807]: time="2025-09-11T00:31:56.064531117Z" level=info msg="Loading containers: done." Sep 11 00:31:56.084856 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2580933063-merged.mount: Deactivated successfully. Sep 11 00:31:56.087423 dockerd[1807]: time="2025-09-11T00:31:56.087363888Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 11 00:31:56.087512 dockerd[1807]: time="2025-09-11T00:31:56.087484103Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 11 00:31:56.087664 dockerd[1807]: time="2025-09-11T00:31:56.087639926Z" level=info msg="Initializing buildkit" Sep 11 00:31:56.124424 dockerd[1807]: time="2025-09-11T00:31:56.124338195Z" level=info msg="Completed buildkit initialization" Sep 11 00:31:56.131584 dockerd[1807]: time="2025-09-11T00:31:56.131518678Z" level=info msg="Daemon has completed initialization" Sep 11 00:31:56.131723 dockerd[1807]: time="2025-09-11T00:31:56.131619928Z" level=info msg="API listen on /run/docker.sock" Sep 11 00:31:56.131954 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 11 00:31:57.257939 containerd[1555]: time="2025-09-11T00:31:57.257865859Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 11 00:31:58.112506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount816796858.mount: Deactivated successfully. 
Sep 11 00:32:00.848275 containerd[1555]: time="2025-09-11T00:32:00.848198772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:00.910801 containerd[1555]: time="2025-09-11T00:32:00.910713517Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Sep 11 00:32:00.914969 containerd[1555]: time="2025-09-11T00:32:00.914932127Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:00.919570 containerd[1555]: time="2025-09-11T00:32:00.919529287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:00.920451 containerd[1555]: time="2025-09-11T00:32:00.920421650Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 3.662491491s" Sep 11 00:32:00.920492 containerd[1555]: time="2025-09-11T00:32:00.920451225Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 11 00:32:00.921499 containerd[1555]: time="2025-09-11T00:32:00.921475125Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 11 00:32:02.529129 containerd[1555]: time="2025-09-11T00:32:02.529043916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:02.529875 containerd[1555]: time="2025-09-11T00:32:02.529805765Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Sep 11 00:32:02.531187 containerd[1555]: time="2025-09-11T00:32:02.531148943Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:02.533900 containerd[1555]: time="2025-09-11T00:32:02.533815112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:02.534882 containerd[1555]: time="2025-09-11T00:32:02.534851265Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.613350392s" Sep 11 00:32:02.534882 containerd[1555]: time="2025-09-11T00:32:02.534882464Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 11 00:32:02.535463 
containerd[1555]: time="2025-09-11T00:32:02.535437094Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 11 00:32:04.453207 containerd[1555]: time="2025-09-11T00:32:04.453144645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:04.453959 containerd[1555]: time="2025-09-11T00:32:04.453927412Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Sep 11 00:32:04.455169 containerd[1555]: time="2025-09-11T00:32:04.455130919Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:04.457698 containerd[1555]: time="2025-09-11T00:32:04.457662666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:04.458587 containerd[1555]: time="2025-09-11T00:32:04.458519452Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.923044177s" Sep 11 00:32:04.458587 containerd[1555]: time="2025-09-11T00:32:04.458575708Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 11 00:32:04.459131 containerd[1555]: time="2025-09-11T00:32:04.459101143Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 11 00:32:05.282636 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 11 00:32:05.284669 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:32:05.499491 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:32:05.510596 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:32:05.570181 kubelet[2114]: E0911 00:32:05.570040 2114 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:32:05.574244 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:32:05.574526 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:32:05.574956 systemd[1]: kubelet.service: Consumed 233ms CPU time, 110.7M memory peak. Sep 11 00:32:05.680493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3367316804.mount: Deactivated successfully. 
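kubelet.service keeps exiting because /var/lib/kubelet/config.yaml does not exist yet, and systemd re-queues it roughly every ten seconds (failure at 00:31:44.75, restart counter 1 at 00:31:54.80; failure at 00:31:55.26, restart counter 2 at 00:32:05.28). A small Python sketch of that delay computed from the journal timestamps; the ~10 s spacing is consistent with a 10-second RestartSec on the unit, although the unit file itself is not shown in this log:

    # Seconds-of-minute taken from the journal timestamps quoted above.
    first_fail, first_restart   = 44.752669, 54.797021          # both within minute 00:31
    second_fail, second_restart = 55.258489, 60 + 5.282636      # restart lands in minute 00:32
    print(f"restart delays: {first_restart - first_fail:.2f} s, "
          f"{second_restart - second_fail:.2f} s")               # both come out near 10 s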
Sep 11 00:32:06.769971 containerd[1555]: time="2025-09-11T00:32:06.769895943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:06.770620 containerd[1555]: time="2025-09-11T00:32:06.770586468Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Sep 11 00:32:06.771794 containerd[1555]: time="2025-09-11T00:32:06.771760519Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:06.773810 containerd[1555]: time="2025-09-11T00:32:06.773762483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:06.774325 containerd[1555]: time="2025-09-11T00:32:06.774278220Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.315146809s" Sep 11 00:32:06.774362 containerd[1555]: time="2025-09-11T00:32:06.774329926Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 11 00:32:06.774917 containerd[1555]: time="2025-09-11T00:32:06.774867014Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 11 00:32:07.322089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2818013080.mount: Deactivated successfully. 
Sep 11 00:32:08.310142 containerd[1555]: time="2025-09-11T00:32:08.310077994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:08.310792 containerd[1555]: time="2025-09-11T00:32:08.310721671Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 11 00:32:08.312003 containerd[1555]: time="2025-09-11T00:32:08.311964681Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:08.314678 containerd[1555]: time="2025-09-11T00:32:08.314614940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:08.315826 containerd[1555]: time="2025-09-11T00:32:08.315782940Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.540873036s" Sep 11 00:32:08.315826 containerd[1555]: time="2025-09-11T00:32:08.315821623Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 11 00:32:08.316599 containerd[1555]: time="2025-09-11T00:32:08.316399036Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 11 00:32:09.429208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3095927672.mount: Deactivated successfully. 
Sep 11 00:32:09.436499 containerd[1555]: time="2025-09-11T00:32:09.436436796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:32:09.437409 containerd[1555]: time="2025-09-11T00:32:09.437345290Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 11 00:32:09.438730 containerd[1555]: time="2025-09-11T00:32:09.438689640Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:32:09.440740 containerd[1555]: time="2025-09-11T00:32:09.440693397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:32:09.441445 containerd[1555]: time="2025-09-11T00:32:09.441392799Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.124962805s" Sep 11 00:32:09.441445 containerd[1555]: time="2025-09-11T00:32:09.441442422Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 11 00:32:09.442062 containerd[1555]: time="2025-09-11T00:32:09.442037638Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 11 00:32:09.904648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1396760142.mount: Deactivated successfully. 
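Each "Pulled image ... in Ns" message above reports both a size and a wall-clock duration, which makes a rough effective pull rate easy to derive (rough because the duration includes registry round-trips and unpacking, not just network transfer). A Python sketch over the six pulls logged so far:

    pulls = {  # image: (size in bytes, duration in seconds), both copied from the log above
        "kube-apiserver:v1.32.9":          (28_834_515, 3.662491491),
        "kube-controller-manager:v1.32.9": (26_421_706, 1.613350392),
        "kube-scheduler:v1.32.9":          (20_810_986, 1.923044177),
        "kube-proxy:v1.32.9":              (30_923_225, 2.315146809),
        "coredns:v1.11.3":                 (18_562_039, 1.540873036),
        "pause:3.10":                      (320_368,    1.124962805),
    }
    for image, (size, seconds) in pulls.items():
        print(f"{image:34s} {size / seconds / 1e6:6.2f} MB/s effective")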
Sep 11 00:32:12.172046 containerd[1555]: time="2025-09-11T00:32:12.171964951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:12.173075 containerd[1555]: time="2025-09-11T00:32:12.173021071Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 11 00:32:12.174454 containerd[1555]: time="2025-09-11T00:32:12.174399986Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:12.177444 containerd[1555]: time="2025-09-11T00:32:12.177392888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:12.178650 containerd[1555]: time="2025-09-11T00:32:12.178601935Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.736533469s" Sep 11 00:32:12.178650 containerd[1555]: time="2025-09-11T00:32:12.178643483Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 11 00:32:14.861849 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:32:14.862021 systemd[1]: kubelet.service: Consumed 233ms CPU time, 110.7M memory peak. Sep 11 00:32:14.864455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:32:14.892538 systemd[1]: Reload requested from client PID 2265 ('systemctl') (unit session-9.scope)... Sep 11 00:32:14.892552 systemd[1]: Reloading... Sep 11 00:32:14.989344 zram_generator::config[2310]: No configuration found. Sep 11 00:32:15.210679 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:32:15.331166 systemd[1]: Reloading finished in 438 ms. Sep 11 00:32:15.397966 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 11 00:32:15.398069 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 11 00:32:15.398407 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:32:15.398453 systemd[1]: kubelet.service: Consumed 156ms CPU time, 98.3M memory peak. Sep 11 00:32:15.400097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:32:15.578947 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:32:15.583435 (kubelet)[2355]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:32:15.627541 kubelet[2355]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:32:15.627541 kubelet[2355]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Sep 11 00:32:15.627541 kubelet[2355]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:32:15.627985 kubelet[2355]: I0911 00:32:15.627598 2355 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 00:32:15.942462 kubelet[2355]: I0911 00:32:15.942400 2355 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 11 00:32:15.942462 kubelet[2355]: I0911 00:32:15.942431 2355 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 00:32:15.942699 kubelet[2355]: I0911 00:32:15.942679 2355 server.go:954] "Client rotation is on, will bootstrap in background" Sep 11 00:32:15.969721 kubelet[2355]: E0911 00:32:15.969675 2355 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.151:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:32:15.971747 kubelet[2355]: I0911 00:32:15.971550 2355 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:32:15.980522 kubelet[2355]: I0911 00:32:15.980479 2355 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 00:32:15.986806 kubelet[2355]: I0911 00:32:15.986762 2355 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 11 00:32:15.987120 kubelet[2355]: I0911 00:32:15.987062 2355 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 00:32:15.987381 kubelet[2355]: I0911 00:32:15.987109 2355 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 11 00:32:15.987561 kubelet[2355]: I0911 00:32:15.987394 2355 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 00:32:15.987561 kubelet[2355]: I0911 00:32:15.987406 2355 container_manager_linux.go:304] "Creating device plugin manager" Sep 11 00:32:15.987620 kubelet[2355]: I0911 00:32:15.987612 2355 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:32:15.990683 kubelet[2355]: I0911 00:32:15.990654 2355 kubelet.go:446] "Attempting to sync node with API server" Sep 11 00:32:15.990722 kubelet[2355]: I0911 00:32:15.990714 2355 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 00:32:15.990789 kubelet[2355]: I0911 00:32:15.990761 2355 kubelet.go:352] "Adding apiserver pod source" Sep 11 00:32:15.990823 kubelet[2355]: I0911 00:32:15.990790 2355 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 00:32:15.996019 kubelet[2355]: I0911 00:32:15.995980 2355 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 11 00:32:15.997224 kubelet[2355]: I0911 00:32:15.996479 2355 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 11 00:32:15.997224 kubelet[2355]: W0911 00:32:15.996563 2355 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
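The "Creating Container Manager object based on Node Config" line above embeds the kubelet's node configuration as a JSON blob, including the hard eviction thresholds that drive the eviction manager mentioned later in the log. A small sketch that pulls those thresholds back out and prints them in a readable form; the abridged JSON string below copies only a few fields from that blob and is not the full configuration:

```python
import json

# Abridged copy of the nodeConfig JSON from the kubelet line above
# (threshold values copied as-is, most other fields dropped).
node_config = json.loads("""
{
  "NodeName": "localhost",
  "CgroupDriver": "systemd",
  "HardEvictionThresholds": [
    {"Signal": "nodefs.inodesFree", "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.05}},
    {"Signal": "imagefs.available", "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.15}},
    {"Signal": "imagefs.inodesFree", "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.05}},
    {"Signal": "memory.available", "Operator": "LessThan", "Value": {"Quantity": "100Mi", "Percentage": 0}},
    {"Signal": "nodefs.available", "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.1}}
  ]
}
""")

for t in node_config["HardEvictionThresholds"]:
    limit = t["Value"]["Quantity"] or f'{t["Value"]["Percentage"]:.0%}'
    # e.g. "evict when memory.available LessThan 100Mi"
    print(f'evict when {t["Signal"]} {t["Operator"]} {limit}')
```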
Sep 11 00:32:15.997224 kubelet[2355]: W0911 00:32:15.996986 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Sep 11 00:32:15.997224 kubelet[2355]: W0911 00:32:15.997002 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Sep 11 00:32:15.997224 kubelet[2355]: E0911 00:32:15.997055 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:32:15.997224 kubelet[2355]: E0911 00:32:15.997060 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:32:15.999072 kubelet[2355]: I0911 00:32:15.999040 2355 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 11 00:32:15.999117 kubelet[2355]: I0911 00:32:15.999088 2355 server.go:1287] "Started kubelet" Sep 11 00:32:15.999179 kubelet[2355]: I0911 00:32:15.999151 2355 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 00:32:16.000250 kubelet[2355]: I0911 00:32:16.000222 2355 server.go:479] "Adding debug handlers to kubelet server" Sep 11 00:32:16.003480 kubelet[2355]: I0911 00:32:16.003447 2355 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 00:32:16.005277 kubelet[2355]: I0911 00:32:16.004988 2355 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 00:32:16.005354 kubelet[2355]: I0911 00:32:16.005291 2355 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 00:32:16.006454 kubelet[2355]: I0911 00:32:16.006428 2355 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 11 00:32:16.006743 kubelet[2355]: E0911 00:32:16.006659 2355 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:32:16.007745 kubelet[2355]: I0911 00:32:16.006791 2355 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 11 00:32:16.007745 kubelet[2355]: I0911 00:32:16.006887 2355 reconciler.go:26] "Reconciler: start to sync state" Sep 11 00:32:16.007745 kubelet[2355]: W0911 00:32:16.007238 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Sep 11 00:32:16.007745 kubelet[2355]: E0911 00:32:16.007277 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:32:16.007745 kubelet[2355]: I0911 00:32:16.007386 2355 factory.go:221] Registration of the systemd container factory successfully Sep 11 00:32:16.007745 kubelet[2355]: E0911 00:32:16.007456 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="200ms" Sep 11 00:32:16.007745 kubelet[2355]: I0911 00:32:16.007504 2355 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 00:32:16.008498 kubelet[2355]: I0911 00:32:16.008146 2355 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 00:32:16.010220 kubelet[2355]: I0911 00:32:16.010201 2355 factory.go:221] Registration of the containerd container factory successfully Sep 11 00:32:16.010477 kubelet[2355]: E0911 00:32:16.008501 2355 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.151:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.151:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186413151df35278 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-11 00:32:15.999062648 +0000 UTC m=+0.411726206,LastTimestamp:2025-09-11 00:32:15.999062648 +0000 UTC m=+0.411726206,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 11 00:32:16.010777 kubelet[2355]: E0911 00:32:16.010739 2355 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 00:32:16.025842 kubelet[2355]: I0911 00:32:16.025806 2355 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 11 00:32:16.025913 kubelet[2355]: I0911 00:32:16.025875 2355 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 11 00:32:16.025913 kubelet[2355]: I0911 00:32:16.025894 2355 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:32:16.032256 kubelet[2355]: I0911 00:32:16.032200 2355 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 11 00:32:16.034311 kubelet[2355]: I0911 00:32:16.034261 2355 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 11 00:32:16.034366 kubelet[2355]: I0911 00:32:16.034326 2355 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 11 00:32:16.034366 kubelet[2355]: I0911 00:32:16.034355 2355 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
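The failed crio factory registration above percent-encodes the socket path into the URL host, which is why it shows up as "%2Fvar%2Frun%2Fcrio%2Fcrio.sock"; decoding it gives the same /var/run/crio/crio.sock path named after "dial unix". A one-line check of that decoding (illustrative, not part of the log):

```python
from urllib.parse import unquote

# The percent-encoded host in the crio factory probe URL is just the socket path.
print(unquote("%2Fvar%2Frun%2Fcrio%2Fcrio.sock"))  # -> /var/run/crio/crio.sock
```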
Sep 11 00:32:16.034366 kubelet[2355]: I0911 00:32:16.034363 2355 kubelet.go:2382] "Starting kubelet main sync loop" Sep 11 00:32:16.034439 kubelet[2355]: E0911 00:32:16.034426 2355 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:32:16.107282 kubelet[2355]: E0911 00:32:16.107223 2355 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:32:16.134542 kubelet[2355]: E0911 00:32:16.134488 2355 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 11 00:32:16.207909 kubelet[2355]: E0911 00:32:16.207714 2355 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:32:16.208162 kubelet[2355]: E0911 00:32:16.208109 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="400ms" Sep 11 00:32:16.308752 kubelet[2355]: E0911 00:32:16.308672 2355 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:32:16.335058 kubelet[2355]: E0911 00:32:16.335008 2355 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 11 00:32:16.409407 kubelet[2355]: E0911 00:32:16.409352 2355 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:32:16.492412 kubelet[2355]: W0911 00:32:16.492229 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Sep 11 00:32:16.492412 kubelet[2355]: E0911 00:32:16.492350 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:32:16.492532 kubelet[2355]: I0911 00:32:16.492422 2355 policy_none.go:49] "None policy: Start" Sep 11 00:32:16.492532 kubelet[2355]: I0911 00:32:16.492452 2355 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 11 00:32:16.492532 kubelet[2355]: I0911 00:32:16.492473 2355 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:32:16.501266 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 11 00:32:16.510520 kubelet[2355]: E0911 00:32:16.510476 2355 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:32:16.516026 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 11 00:32:16.520027 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
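The kubepods.slice, kubepods-burstable.slice and kubepods-besteffort.slice units created here are the parent cgroups for the per-pod slices that show up just below (for example kubepods-burstable-podfb219b0102c73bf247cd6e4da1d6c11d.slice). A small sketch of that naming convention as it appears in this log; the guaranteed-QoS branch and the dash-to-underscore handling of pod UIDs are assumptions, since neither is exercised by the static pods here:

```python
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    """Build a kubelet-style systemd slice name for a pod, matching the
    kubepods-burstable-pod<uid>.slice units created in this log.

    Assumptions: guaranteed pods sit directly under kubepods.slice, and
    dashes in a pod UID are replaced with underscores (the static-pod UIDs
    in this log contain no dashes, so that path is untested here).
    """
    uid = pod_uid.replace("-", "_")
    if qos_class.lower() == "guaranteed":
        return f"kubepods-pod{uid}.slice"
    return f"kubepods-{qos_class.lower()}-pod{uid}.slice"


print(pod_slice_name("burstable", "fb219b0102c73bf247cd6e4da1d6c11d"))
# -> kubepods-burstable-podfb219b0102c73bf247cd6e4da1d6c11d.slice
```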
Sep 11 00:32:16.540786 kubelet[2355]: I0911 00:32:16.540682 2355 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 00:32:16.540990 kubelet[2355]: I0911 00:32:16.540978 2355 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 00:32:16.541020 kubelet[2355]: I0911 00:32:16.540990 2355 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 00:32:16.545336 kubelet[2355]: I0911 00:32:16.541222 2355 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 00:32:16.608020 kubelet[2355]: E0911 00:32:16.607982 2355 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 11 00:32:16.608178 kubelet[2355]: E0911 00:32:16.608047 2355 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 11 00:32:16.609499 kubelet[2355]: E0911 00:32:16.609460 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="800ms" Sep 11 00:32:16.643038 kubelet[2355]: I0911 00:32:16.642991 2355 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:32:16.643486 kubelet[2355]: E0911 00:32:16.643443 2355 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost" Sep 11 00:32:16.645197 kubelet[2355]: E0911 00:32:16.645055 2355 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.151:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.151:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186413151df35278 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-11 00:32:15.999062648 +0000 UTC m=+0.411726206,LastTimestamp:2025-09-11 00:32:15.999062648 +0000 UTC m=+0.411726206,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 11 00:32:16.744938 systemd[1]: Created slice kubepods-burstable-podfb219b0102c73bf247cd6e4da1d6c11d.slice - libcontainer container kubepods-burstable-podfb219b0102c73bf247cd6e4da1d6c11d.slice. Sep 11 00:32:16.765353 kubelet[2355]: E0911 00:32:16.765287 2355 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:32:16.768638 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. 
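The lease controller's "Failed to ensure lease exists, will retry" errors illustrate client-side exponential backoff: the retry interval doubles from 200ms to 400ms to 800ms while the API server at 10.0.0.151:6443 refuses connections, and reaches 1.6s further down. A minimal sketch of that doubling pattern; the helper name and the cap value are illustrative assumptions, not taken from the kubelet source:

```python
def backoff_intervals(base: float = 0.2, factor: float = 2.0, cap: float = 7.0):
    """Yield retry delays that double up to a cap, mirroring the
    200ms -> 400ms -> 800ms -> 1.6s progression in the lease errors."""
    delay = base
    while True:
        yield min(delay, cap)
        delay *= factor


gen = backoff_intervals()
print([next(gen) for _ in range(5)])  # [0.2, 0.4, 0.8, 1.6, 3.2]
```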
Sep 11 00:32:16.787906 kubelet[2355]: E0911 00:32:16.787853 2355 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:32:16.791338 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. Sep 11 00:32:16.793759 kubelet[2355]: E0911 00:32:16.793713 2355 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:32:16.812514 kubelet[2355]: I0911 00:32:16.812452 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:32:16.812514 kubelet[2355]: I0911 00:32:16.812504 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 11 00:32:16.812724 kubelet[2355]: I0911 00:32:16.812535 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fb219b0102c73bf247cd6e4da1d6c11d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fb219b0102c73bf247cd6e4da1d6c11d\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:32:16.812724 kubelet[2355]: I0911 00:32:16.812555 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb219b0102c73bf247cd6e4da1d6c11d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fb219b0102c73bf247cd6e4da1d6c11d\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:32:16.812724 kubelet[2355]: I0911 00:32:16.812573 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:32:16.812724 kubelet[2355]: I0911 00:32:16.812592 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:32:16.812724 kubelet[2355]: I0911 00:32:16.812613 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:32:16.812859 kubelet[2355]: I0911 00:32:16.812634 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:32:16.812859 kubelet[2355]: I0911 00:32:16.812654 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fb219b0102c73bf247cd6e4da1d6c11d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fb219b0102c73bf247cd6e4da1d6c11d\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:32:16.845988 kubelet[2355]: I0911 00:32:16.845942 2355 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:32:16.846479 kubelet[2355]: E0911 00:32:16.846444 2355 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost" Sep 11 00:32:17.067140 kubelet[2355]: E0911 00:32:17.066975 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:17.068161 containerd[1555]: time="2025-09-11T00:32:17.068060336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fb219b0102c73bf247cd6e4da1d6c11d,Namespace:kube-system,Attempt:0,}" Sep 11 00:32:17.089170 kubelet[2355]: E0911 00:32:17.089122 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:17.089721 containerd[1555]: time="2025-09-11T00:32:17.089667596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}" Sep 11 00:32:17.094884 kubelet[2355]: E0911 00:32:17.094861 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:17.098315 containerd[1555]: time="2025-09-11T00:32:17.098238560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}" Sep 11 00:32:17.222215 kubelet[2355]: W0911 00:32:17.222154 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Sep 11 00:32:17.222363 kubelet[2355]: E0911 00:32:17.222221 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:32:17.248640 kubelet[2355]: I0911 00:32:17.248602 2355 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:32:17.249087 kubelet[2355]: E0911 00:32:17.249042 2355 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 
10.0.0.151:6443: connect: connection refused" node="localhost" Sep 11 00:32:17.286188 containerd[1555]: time="2025-09-11T00:32:17.286071034Z" level=info msg="connecting to shim 515fae20524f65470acaa162ed4d7608003a3f7c616d11199727d19b715c016e" address="unix:///run/containerd/s/80677d1bd21b85ab6669921ac78753cffdf8277d8c3effd1e1a1faf17e1199fd" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:32:17.289363 containerd[1555]: time="2025-09-11T00:32:17.289324931Z" level=info msg="connecting to shim b16191da8b19ca5243a12e8a1a9ca48ba6f9363eac451524da7234df4bdd678e" address="unix:///run/containerd/s/ccff85493edcf26fbcbd085e6e7fec4fe6ee07e70f970e65ac76b6ffd24d66df" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:32:17.291210 containerd[1555]: time="2025-09-11T00:32:17.290714815Z" level=info msg="connecting to shim c85689822d3f74195bdd66a82ba6c84f1b56f288bd639ce2be57293a9a60b0d1" address="unix:///run/containerd/s/62bf6733ed53624acbe57bb3097ef1c88314120dce391299adbc33375db0328b" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:32:17.323593 systemd[1]: Started cri-containerd-c85689822d3f74195bdd66a82ba6c84f1b56f288bd639ce2be57293a9a60b0d1.scope - libcontainer container c85689822d3f74195bdd66a82ba6c84f1b56f288bd639ce2be57293a9a60b0d1. Sep 11 00:32:17.329510 systemd[1]: Started cri-containerd-515fae20524f65470acaa162ed4d7608003a3f7c616d11199727d19b715c016e.scope - libcontainer container 515fae20524f65470acaa162ed4d7608003a3f7c616d11199727d19b715c016e. Sep 11 00:32:17.332922 systemd[1]: Started cri-containerd-b16191da8b19ca5243a12e8a1a9ca48ba6f9363eac451524da7234df4bdd678e.scope - libcontainer container b16191da8b19ca5243a12e8a1a9ca48ba6f9363eac451524da7234df4bdd678e. Sep 11 00:32:17.395923 kubelet[2355]: W0911 00:32:17.395777 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Sep 11 00:32:17.395923 kubelet[2355]: E0911 00:32:17.395858 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:32:17.410856 kubelet[2355]: E0911 00:32:17.410795 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="1.6s" Sep 11 00:32:17.539175 containerd[1555]: time="2025-09-11T00:32:17.539082632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"c85689822d3f74195bdd66a82ba6c84f1b56f288bd639ce2be57293a9a60b0d1\"" Sep 11 00:32:17.540477 containerd[1555]: time="2025-09-11T00:32:17.540436478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"515fae20524f65470acaa162ed4d7608003a3f7c616d11199727d19b715c016e\"" Sep 11 00:32:17.540670 kubelet[2355]: E0911 00:32:17.540451 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:17.541122 kubelet[2355]: E0911 00:32:17.541071 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:17.542447 containerd[1555]: time="2025-09-11T00:32:17.542417883Z" level=info msg="CreateContainer within sandbox \"c85689822d3f74195bdd66a82ba6c84f1b56f288bd639ce2be57293a9a60b0d1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 11 00:32:17.543434 containerd[1555]: time="2025-09-11T00:32:17.543398026Z" level=info msg="CreateContainer within sandbox \"515fae20524f65470acaa162ed4d7608003a3f7c616d11199727d19b715c016e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 11 00:32:17.543659 containerd[1555]: time="2025-09-11T00:32:17.543634176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fb219b0102c73bf247cd6e4da1d6c11d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b16191da8b19ca5243a12e8a1a9ca48ba6f9363eac451524da7234df4bdd678e\"" Sep 11 00:32:17.544190 kubelet[2355]: E0911 00:32:17.544158 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:17.545395 containerd[1555]: time="2025-09-11T00:32:17.545365074Z" level=info msg="CreateContainer within sandbox \"b16191da8b19ca5243a12e8a1a9ca48ba6f9363eac451524da7234df4bdd678e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 11 00:32:17.559705 containerd[1555]: time="2025-09-11T00:32:17.559632501Z" level=info msg="Container febb71f6f3804f816286766006f040b3078eda0fa9b55928b9f281f2fff58500: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:32:17.563230 containerd[1555]: time="2025-09-11T00:32:17.563182543Z" level=info msg="Container be441cc658550fb5ef7da80da2d6c772fc5247edd9d02a6dad65034cd9158e40: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:32:17.569234 containerd[1555]: time="2025-09-11T00:32:17.569198517Z" level=info msg="Container 14006e1788357659b6e165264cd8539959f8d91e88132becf797352dc33549a7: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:32:17.575259 containerd[1555]: time="2025-09-11T00:32:17.575137292Z" level=info msg="CreateContainer within sandbox \"515fae20524f65470acaa162ed4d7608003a3f7c616d11199727d19b715c016e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"be441cc658550fb5ef7da80da2d6c772fc5247edd9d02a6dad65034cd9158e40\"" Sep 11 00:32:17.575839 containerd[1555]: time="2025-09-11T00:32:17.575809606Z" level=info msg="StartContainer for \"be441cc658550fb5ef7da80da2d6c772fc5247edd9d02a6dad65034cd9158e40\"" Sep 11 00:32:17.576405 containerd[1555]: time="2025-09-11T00:32:17.576364997Z" level=info msg="CreateContainer within sandbox \"c85689822d3f74195bdd66a82ba6c84f1b56f288bd639ce2be57293a9a60b0d1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"febb71f6f3804f816286766006f040b3078eda0fa9b55928b9f281f2fff58500\"" Sep 11 00:32:17.576733 containerd[1555]: time="2025-09-11T00:32:17.576695128Z" level=info msg="StartContainer for \"febb71f6f3804f816286766006f040b3078eda0fa9b55928b9f281f2fff58500\"" Sep 11 00:32:17.577561 containerd[1555]: time="2025-09-11T00:32:17.577532348Z" level=info msg="connecting to shim be441cc658550fb5ef7da80da2d6c772fc5247edd9d02a6dad65034cd9158e40" 
address="unix:///run/containerd/s/80677d1bd21b85ab6669921ac78753cffdf8277d8c3effd1e1a1faf17e1199fd" protocol=ttrpc version=3 Sep 11 00:32:17.580347 containerd[1555]: time="2025-09-11T00:32:17.580286589Z" level=info msg="CreateContainer within sandbox \"b16191da8b19ca5243a12e8a1a9ca48ba6f9363eac451524da7234df4bdd678e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"14006e1788357659b6e165264cd8539959f8d91e88132becf797352dc33549a7\"" Sep 11 00:32:17.580726 containerd[1555]: time="2025-09-11T00:32:17.580698236Z" level=info msg="StartContainer for \"14006e1788357659b6e165264cd8539959f8d91e88132becf797352dc33549a7\"" Sep 11 00:32:17.581502 containerd[1555]: time="2025-09-11T00:32:17.581462496Z" level=info msg="connecting to shim febb71f6f3804f816286766006f040b3078eda0fa9b55928b9f281f2fff58500" address="unix:///run/containerd/s/62bf6733ed53624acbe57bb3097ef1c88314120dce391299adbc33375db0328b" protocol=ttrpc version=3 Sep 11 00:32:17.581991 containerd[1555]: time="2025-09-11T00:32:17.581956099Z" level=info msg="connecting to shim 14006e1788357659b6e165264cd8539959f8d91e88132becf797352dc33549a7" address="unix:///run/containerd/s/ccff85493edcf26fbcbd085e6e7fec4fe6ee07e70f970e65ac76b6ffd24d66df" protocol=ttrpc version=3 Sep 11 00:32:17.595494 kubelet[2355]: W0911 00:32:17.595417 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Sep 11 00:32:17.595637 kubelet[2355]: E0911 00:32:17.595499 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:32:17.604574 systemd[1]: Started cri-containerd-febb71f6f3804f816286766006f040b3078eda0fa9b55928b9f281f2fff58500.scope - libcontainer container febb71f6f3804f816286766006f040b3078eda0fa9b55928b9f281f2fff58500. Sep 11 00:32:17.614466 systemd[1]: Started cri-containerd-14006e1788357659b6e165264cd8539959f8d91e88132becf797352dc33549a7.scope - libcontainer container 14006e1788357659b6e165264cd8539959f8d91e88132becf797352dc33549a7. Sep 11 00:32:17.616423 systemd[1]: Started cri-containerd-be441cc658550fb5ef7da80da2d6c772fc5247edd9d02a6dad65034cd9158e40.scope - libcontainer container be441cc658550fb5ef7da80da2d6c772fc5247edd9d02a6dad65034cd9158e40. 
Sep 11 00:32:17.684081 containerd[1555]: time="2025-09-11T00:32:17.684027703Z" level=info msg="StartContainer for \"febb71f6f3804f816286766006f040b3078eda0fa9b55928b9f281f2fff58500\" returns successfully" Sep 11 00:32:17.690736 containerd[1555]: time="2025-09-11T00:32:17.690660664Z" level=info msg="StartContainer for \"14006e1788357659b6e165264cd8539959f8d91e88132becf797352dc33549a7\" returns successfully" Sep 11 00:32:17.709251 containerd[1555]: time="2025-09-11T00:32:17.709181337Z" level=info msg="StartContainer for \"be441cc658550fb5ef7da80da2d6c772fc5247edd9d02a6dad65034cd9158e40\" returns successfully" Sep 11 00:32:18.045723 kubelet[2355]: E0911 00:32:18.045684 2355 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:32:18.046136 kubelet[2355]: E0911 00:32:18.045840 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:18.050486 kubelet[2355]: I0911 00:32:18.050462 2355 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:32:18.051191 kubelet[2355]: E0911 00:32:18.050473 2355 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:32:18.051191 kubelet[2355]: E0911 00:32:18.051115 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:18.054814 kubelet[2355]: E0911 00:32:18.054795 2355 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:32:18.055017 kubelet[2355]: E0911 00:32:18.055003 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:19.047864 kubelet[2355]: E0911 00:32:19.047799 2355 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 11 00:32:19.055561 kubelet[2355]: E0911 00:32:19.055515 2355 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:32:19.055729 kubelet[2355]: E0911 00:32:19.055668 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:19.055729 kubelet[2355]: E0911 00:32:19.055668 2355 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:32:19.055829 kubelet[2355]: E0911 00:32:19.055802 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:19.195320 kubelet[2355]: I0911 00:32:19.195238 2355 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 11 00:32:19.195320 kubelet[2355]: E0911 00:32:19.195276 2355 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not 
found" Sep 11 00:32:19.208519 kubelet[2355]: E0911 00:32:19.208183 2355 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:32:19.309426 kubelet[2355]: E0911 00:32:19.309244 2355 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:32:19.410081 kubelet[2355]: E0911 00:32:19.410013 2355 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:32:19.510606 kubelet[2355]: E0911 00:32:19.510526 2355 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:32:19.611947 kubelet[2355]: E0911 00:32:19.611687 2355 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:32:19.712849 kubelet[2355]: E0911 00:32:19.712760 2355 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:32:19.813812 kubelet[2355]: E0911 00:32:19.813700 2355 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:32:19.914594 kubelet[2355]: E0911 00:32:19.914541 2355 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:32:20.014958 kubelet[2355]: E0911 00:32:20.014883 2355 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:32:20.110111 kubelet[2355]: I0911 00:32:20.110053 2355 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 11 00:32:20.118784 kubelet[2355]: I0911 00:32:20.118734 2355 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:32:20.122920 kubelet[2355]: I0911 00:32:20.122866 2355 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 11 00:32:20.995362 kubelet[2355]: I0911 00:32:20.995292 2355 apiserver.go:52] "Watching apiserver" Sep 11 00:32:20.997103 kubelet[2355]: E0911 00:32:20.997081 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:20.997191 kubelet[2355]: E0911 00:32:20.997149 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:20.997337 kubelet[2355]: E0911 00:32:20.997281 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:21.007715 kubelet[2355]: I0911 00:32:21.007671 2355 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 11 00:32:21.524328 systemd[1]: Reload requested from client PID 2631 ('systemctl') (unit session-9.scope)... Sep 11 00:32:21.524345 systemd[1]: Reloading... Sep 11 00:32:21.605338 zram_generator::config[2674]: No configuration found. Sep 11 00:32:21.695714 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 11 00:32:21.833847 systemd[1]: Reloading finished in 309 ms. Sep 11 00:32:21.867192 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:32:21.892026 systemd[1]: kubelet.service: Deactivated successfully. Sep 11 00:32:21.892336 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:32:21.892392 systemd[1]: kubelet.service: Consumed 1.041s CPU time, 132M memory peak. Sep 11 00:32:21.894401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:32:22.100622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:32:22.105290 (kubelet)[2719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:32:22.147334 kubelet[2719]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:32:22.147334 kubelet[2719]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 11 00:32:22.147334 kubelet[2719]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:32:22.147334 kubelet[2719]: I0911 00:32:22.147067 2719 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 00:32:22.156949 kubelet[2719]: I0911 00:32:22.156890 2719 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 11 00:32:22.156949 kubelet[2719]: I0911 00:32:22.156933 2719 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 00:32:22.158320 kubelet[2719]: I0911 00:32:22.157228 2719 server.go:954] "Client rotation is on, will bootstrap in background" Sep 11 00:32:22.161112 kubelet[2719]: I0911 00:32:22.161070 2719 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 11 00:32:22.169782 kubelet[2719]: I0911 00:32:22.169703 2719 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:32:22.177508 kubelet[2719]: I0911 00:32:22.177427 2719 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 00:32:22.182829 kubelet[2719]: I0911 00:32:22.182784 2719 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 11 00:32:22.183120 kubelet[2719]: I0911 00:32:22.183061 2719 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 00:32:22.183286 kubelet[2719]: I0911 00:32:22.183107 2719 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 11 00:32:22.183286 kubelet[2719]: I0911 00:32:22.183288 2719 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 00:32:22.183466 kubelet[2719]: I0911 00:32:22.183314 2719 container_manager_linux.go:304] "Creating device plugin manager" Sep 11 00:32:22.183466 kubelet[2719]: I0911 00:32:22.183380 2719 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:32:22.183584 kubelet[2719]: I0911 00:32:22.183564 2719 kubelet.go:446] "Attempting to sync node with API server" Sep 11 00:32:22.183616 kubelet[2719]: I0911 00:32:22.183593 2719 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 00:32:22.183616 kubelet[2719]: I0911 00:32:22.183615 2719 kubelet.go:352] "Adding apiserver pod source" Sep 11 00:32:22.183660 kubelet[2719]: I0911 00:32:22.183626 2719 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 00:32:22.184669 kubelet[2719]: I0911 00:32:22.184594 2719 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 11 00:32:22.184945 kubelet[2719]: I0911 00:32:22.184923 2719 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 11 00:32:22.185375 kubelet[2719]: I0911 00:32:22.185346 2719 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 11 00:32:22.185375 kubelet[2719]: I0911 00:32:22.185379 2719 server.go:1287] "Started kubelet" Sep 11 00:32:22.188439 kubelet[2719]: I0911 00:32:22.188407 2719 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 00:32:22.192769 kubelet[2719]: E0911 00:32:22.192743 2719 kubelet.go:1555] "Image garbage 
collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 00:32:22.195318 kubelet[2719]: I0911 00:32:22.194428 2719 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 00:32:22.195370 kubelet[2719]: I0911 00:32:22.195271 2719 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 00:32:22.195639 kubelet[2719]: I0911 00:32:22.195611 2719 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 00:32:22.196722 kubelet[2719]: I0911 00:32:22.196697 2719 server.go:479] "Adding debug handlers to kubelet server" Sep 11 00:32:22.198495 kubelet[2719]: I0911 00:32:22.198472 2719 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 11 00:32:22.200670 kubelet[2719]: I0911 00:32:22.199715 2719 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 11 00:32:22.200670 kubelet[2719]: I0911 00:32:22.199901 2719 reconciler.go:26] "Reconciler: start to sync state" Sep 11 00:32:22.201400 kubelet[2719]: I0911 00:32:22.201374 2719 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 00:32:22.203565 kubelet[2719]: I0911 00:32:22.203530 2719 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 00:32:22.204937 kubelet[2719]: I0911 00:32:22.204916 2719 factory.go:221] Registration of the containerd container factory successfully Sep 11 00:32:22.204937 kubelet[2719]: I0911 00:32:22.204932 2719 factory.go:221] Registration of the systemd container factory successfully Sep 11 00:32:22.210955 kubelet[2719]: I0911 00:32:22.210800 2719 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 11 00:32:22.212101 kubelet[2719]: I0911 00:32:22.212073 2719 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 11 00:32:22.212189 kubelet[2719]: I0911 00:32:22.212177 2719 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 11 00:32:22.212278 kubelet[2719]: I0911 00:32:22.212266 2719 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 11 00:32:22.212367 kubelet[2719]: I0911 00:32:22.212354 2719 kubelet.go:2382] "Starting kubelet main sync loop" Sep 11 00:32:22.212504 kubelet[2719]: E0911 00:32:22.212481 2719 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:32:22.245661 kubelet[2719]: I0911 00:32:22.245622 2719 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 11 00:32:22.245661 kubelet[2719]: I0911 00:32:22.245667 2719 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 11 00:32:22.245992 kubelet[2719]: I0911 00:32:22.245689 2719 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:32:22.245992 kubelet[2719]: I0911 00:32:22.245970 2719 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 11 00:32:22.246047 kubelet[2719]: I0911 00:32:22.246016 2719 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 11 00:32:22.246047 kubelet[2719]: I0911 00:32:22.246037 2719 policy_none.go:49] "None policy: Start" Sep 11 00:32:22.246047 kubelet[2719]: I0911 00:32:22.246047 2719 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 11 00:32:22.246235 kubelet[2719]: I0911 00:32:22.246060 2719 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:32:22.246413 kubelet[2719]: I0911 00:32:22.246397 2719 state_mem.go:75] "Updated machine memory state" Sep 11 00:32:22.251084 kubelet[2719]: I0911 00:32:22.250657 2719 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 00:32:22.251084 kubelet[2719]: I0911 00:32:22.250868 2719 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 00:32:22.251084 kubelet[2719]: I0911 00:32:22.250879 2719 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 00:32:22.251084 kubelet[2719]: I0911 00:32:22.251034 2719 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 00:32:22.252056 kubelet[2719]: E0911 00:32:22.252032 2719 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 11 00:32:22.313842 kubelet[2719]: I0911 00:32:22.313744 2719 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 11 00:32:22.313842 kubelet[2719]: I0911 00:32:22.313833 2719 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 11 00:32:22.314035 kubelet[2719]: I0911 00:32:22.313755 2719 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:32:22.358812 kubelet[2719]: I0911 00:32:22.358706 2719 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:32:22.367078 kubelet[2719]: E0911 00:32:22.367036 2719 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 11 00:32:22.367253 kubelet[2719]: E0911 00:32:22.367126 2719 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 11 00:32:22.367253 kubelet[2719]: E0911 00:32:22.367163 2719 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:32:22.372589 kubelet[2719]: I0911 00:32:22.372535 2719 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 11 00:32:22.372691 kubelet[2719]: I0911 00:32:22.372630 2719 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 11 00:32:22.500679 kubelet[2719]: I0911 00:32:22.500630 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb219b0102c73bf247cd6e4da1d6c11d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fb219b0102c73bf247cd6e4da1d6c11d\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:32:22.500679 kubelet[2719]: I0911 00:32:22.500678 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:32:22.500896 kubelet[2719]: I0911 00:32:22.500703 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:32:22.500896 kubelet[2719]: I0911 00:32:22.500729 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:32:22.500896 kubelet[2719]: I0911 00:32:22.500757 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod 
\"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 11 00:32:22.500896 kubelet[2719]: I0911 00:32:22.500787 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fb219b0102c73bf247cd6e4da1d6c11d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fb219b0102c73bf247cd6e4da1d6c11d\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:32:22.500896 kubelet[2719]: I0911 00:32:22.500806 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:32:22.501009 kubelet[2719]: I0911 00:32:22.500824 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:32:22.501009 kubelet[2719]: I0911 00:32:22.500847 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fb219b0102c73bf247cd6e4da1d6c11d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fb219b0102c73bf247cd6e4da1d6c11d\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:32:22.526072 sudo[2756]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 11 00:32:22.526443 sudo[2756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 11 00:32:22.668407 kubelet[2719]: E0911 00:32:22.668206 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:22.668407 kubelet[2719]: E0911 00:32:22.668217 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:22.668853 kubelet[2719]: E0911 00:32:22.668483 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:23.101791 sudo[2756]: pam_unix(sudo:session): session closed for user root Sep 11 00:32:23.184856 kubelet[2719]: I0911 00:32:23.184802 2719 apiserver.go:52] "Watching apiserver" Sep 11 00:32:23.200886 kubelet[2719]: I0911 00:32:23.200833 2719 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 11 00:32:23.229030 kubelet[2719]: I0911 00:32:23.228289 2719 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 11 00:32:23.229030 kubelet[2719]: E0911 00:32:23.228366 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:23.229030 kubelet[2719]: E0911 00:32:23.228549 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:23.234518 kubelet[2719]: E0911 00:32:23.234481 2719 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 11 00:32:23.234747 kubelet[2719]: E0911 00:32:23.234597 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:23.251244 kubelet[2719]: I0911 00:32:23.251179 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.251151342 podStartE2EDuration="3.251151342s" podCreationTimestamp="2025-09-11 00:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:32:23.24527357 +0000 UTC m=+1.135194573" watchObservedRunningTime="2025-09-11 00:32:23.251151342 +0000 UTC m=+1.141072345" Sep 11 00:32:23.257523 kubelet[2719]: I0911 00:32:23.257467 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.257449303 podStartE2EDuration="3.257449303s" podCreationTimestamp="2025-09-11 00:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:32:23.257356487 +0000 UTC m=+1.147277510" watchObservedRunningTime="2025-09-11 00:32:23.257449303 +0000 UTC m=+1.147370306" Sep 11 00:32:23.257968 kubelet[2719]: I0911 00:32:23.257562 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.2575583 podStartE2EDuration="3.2575583s" podCreationTimestamp="2025-09-11 00:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:32:23.251383954 +0000 UTC m=+1.141304957" watchObservedRunningTime="2025-09-11 00:32:23.2575583 +0000 UTC m=+1.147479303" Sep 11 00:32:24.230486 kubelet[2719]: E0911 00:32:24.230439 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:24.231186 kubelet[2719]: E0911 00:32:24.230918 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:24.231186 kubelet[2719]: E0911 00:32:24.231006 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:24.513801 sudo[1787]: pam_unix(sudo:session): session closed for user root Sep 11 00:32:24.515610 sshd[1786]: Connection closed by 10.0.0.1 port 54824 Sep 11 00:32:24.516093 sshd-session[1784]: pam_unix(sshd:session): session closed for user core Sep 11 00:32:24.520470 systemd[1]: sshd@8-10.0.0.151:22-10.0.0.1:54824.service: Deactivated successfully. Sep 11 00:32:24.522740 systemd[1]: session-9.scope: Deactivated successfully. Sep 11 00:32:24.522980 systemd[1]: session-9.scope: Consumed 5.387s CPU time, 260.4M memory peak. Sep 11 00:32:24.524578 systemd-logind[1539]: Session 9 logged out. Waiting for processes to exit. 
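Note: the "m=+<seconds>" suffix on the kubelet timestamps above is Go's monotonic clock reading, whose zero here is effectively the kubelet's start; subtracting it from the wall-clock part of two different entries should land on (nearly) the same instant. A quick check against the two observedRunningTime values above (timestamps truncated to microseconds; the interpretation of "m=+" is inferred from Go's time formatting, not quoted from kubelet source):

```python
# Subtract the monotonic offset "m=+..." from the wall time of two entries;
# both land on the same instant, roughly the kubelet start time.
from datetime import datetime, timedelta, timezone

def wall(s: str) -> datetime:
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

samples = [
    ("2025-09-11 00:32:23.245273", 1.135194573),  # observedRunningTime, m=+1.135194573
    ("2025-09-11 00:32:23.257356", 1.147277510),  # observedRunningTime, m=+1.147277510
]
for ts, mono in samples:
    start = wall(ts) - timedelta(seconds=mono)
    print(start.isoformat(timespec="microseconds"))  # both print ≈ 2025-09-11T00:32:22.110078+00:00
```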
Sep 11 00:32:24.526118 systemd-logind[1539]: Removed session 9. Sep 11 00:32:26.406432 update_engine[1540]: I20250911 00:32:26.406358 1540 update_attempter.cc:509] Updating boot flags... Sep 11 00:32:26.697423 kubelet[2719]: I0911 00:32:26.697377 2719 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 11 00:32:26.697907 containerd[1555]: time="2025-09-11T00:32:26.697861345Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 11 00:32:26.698184 kubelet[2719]: I0911 00:32:26.698089 2719 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 11 00:32:27.526195 systemd[1]: Created slice kubepods-besteffort-podbb35ae83_0720_4b1f_9fff_76378ec555cd.slice - libcontainer container kubepods-besteffort-podbb35ae83_0720_4b1f_9fff_76378ec555cd.slice. Sep 11 00:32:27.538471 kubelet[2719]: I0911 00:32:27.538282 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbsll\" (UniqueName: \"kubernetes.io/projected/bb35ae83-0720-4b1f-9fff-76378ec555cd-kube-api-access-cbsll\") pod \"kube-proxy-f9wr5\" (UID: \"bb35ae83-0720-4b1f-9fff-76378ec555cd\") " pod="kube-system/kube-proxy-f9wr5" Sep 11 00:32:27.538471 kubelet[2719]: I0911 00:32:27.538355 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bb35ae83-0720-4b1f-9fff-76378ec555cd-kube-proxy\") pod \"kube-proxy-f9wr5\" (UID: \"bb35ae83-0720-4b1f-9fff-76378ec555cd\") " pod="kube-system/kube-proxy-f9wr5" Sep 11 00:32:27.538471 kubelet[2719]: I0911 00:32:27.538398 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb35ae83-0720-4b1f-9fff-76378ec555cd-xtables-lock\") pod \"kube-proxy-f9wr5\" (UID: \"bb35ae83-0720-4b1f-9fff-76378ec555cd\") " pod="kube-system/kube-proxy-f9wr5" Sep 11 00:32:27.538471 kubelet[2719]: I0911 00:32:27.538416 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb35ae83-0720-4b1f-9fff-76378ec555cd-lib-modules\") pod \"kube-proxy-f9wr5\" (UID: \"bb35ae83-0720-4b1f-9fff-76378ec555cd\") " pod="kube-system/kube-proxy-f9wr5" Sep 11 00:32:27.539885 systemd[1]: Created slice kubepods-burstable-pod2c32543b_f742_4168_a488_aa704f470137.slice - libcontainer container kubepods-burstable-pod2c32543b_f742_4168_a488_aa704f470137.slice. 
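Note: the "Created slice" lines show how the kubelet's systemd cgroup driver names pod slices: kubepods-<qos>-pod<uid>.slice, with the dashes in the pod UID replaced by underscores. A small sketch reproducing the mapping for the two pods above (derived only from the names visible in this log):

```python
# Reproduce the pod-UID -> systemd slice names seen in the "Created slice"
# lines above (dashes in the UID become underscores).
def pod_slice(qos: str, pod_uid: str) -> str:
    return f"kubepods-{qos}-pod{pod_uid.replace('-', '_')}.slice"

assert pod_slice("besteffort", "bb35ae83-0720-4b1f-9fff-76378ec555cd") == \
    "kubepods-besteffort-podbb35ae83_0720_4b1f_9fff_76378ec555cd.slice"   # kube-proxy-f9wr5
assert pod_slice("burstable", "2c32543b-f742-4168-a488-aa704f470137") == \
    "kubepods-burstable-pod2c32543b_f742_4168_a488_aa704f470137.slice"    # cilium-7zl7d
```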
Sep 11 00:32:27.639223 kubelet[2719]: I0911 00:32:27.639151 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c32543b-f742-4168-a488-aa704f470137-hubble-tls\") pod \"cilium-7zl7d\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " pod="kube-system/cilium-7zl7d" Sep 11 00:32:27.639223 kubelet[2719]: I0911 00:32:27.639198 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-etc-cni-netd\") pod \"cilium-7zl7d\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " pod="kube-system/cilium-7zl7d" Sep 11 00:32:27.639223 kubelet[2719]: I0911 00:32:27.639223 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-bpf-maps\") pod \"cilium-7zl7d\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " pod="kube-system/cilium-7zl7d" Sep 11 00:32:27.639223 kubelet[2719]: I0911 00:32:27.639236 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-hostproc\") pod \"cilium-7zl7d\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " pod="kube-system/cilium-7zl7d" Sep 11 00:32:27.639467 kubelet[2719]: I0911 00:32:27.639250 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-cni-path\") pod \"cilium-7zl7d\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " pod="kube-system/cilium-7zl7d" Sep 11 00:32:27.639467 kubelet[2719]: I0911 00:32:27.639268 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-cilium-cgroup\") pod \"cilium-7zl7d\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " pod="kube-system/cilium-7zl7d" Sep 11 00:32:27.639467 kubelet[2719]: I0911 00:32:27.639285 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-host-proc-sys-kernel\") pod \"cilium-7zl7d\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " pod="kube-system/cilium-7zl7d" Sep 11 00:32:27.639467 kubelet[2719]: I0911 00:32:27.639371 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-lib-modules\") pod \"cilium-7zl7d\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " pod="kube-system/cilium-7zl7d" Sep 11 00:32:27.639467 kubelet[2719]: I0911 00:32:27.639439 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-xtables-lock\") pod \"cilium-7zl7d\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " pod="kube-system/cilium-7zl7d" Sep 11 00:32:27.639467 kubelet[2719]: I0911 00:32:27.639464 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/2c32543b-f742-4168-a488-aa704f470137-clustermesh-secrets\") pod \"cilium-7zl7d\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " pod="kube-system/cilium-7zl7d" Sep 11 00:32:27.639617 kubelet[2719]: I0911 00:32:27.639527 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c32543b-f742-4168-a488-aa704f470137-cilium-config-path\") pod \"cilium-7zl7d\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " pod="kube-system/cilium-7zl7d" Sep 11 00:32:27.639617 kubelet[2719]: I0911 00:32:27.639557 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w748l\" (UniqueName: \"kubernetes.io/projected/2c32543b-f742-4168-a488-aa704f470137-kube-api-access-w748l\") pod \"cilium-7zl7d\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " pod="kube-system/cilium-7zl7d" Sep 11 00:32:27.639617 kubelet[2719]: I0911 00:32:27.639584 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-cilium-run\") pod \"cilium-7zl7d\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " pod="kube-system/cilium-7zl7d" Sep 11 00:32:27.639617 kubelet[2719]: I0911 00:32:27.639602 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-host-proc-sys-net\") pod \"cilium-7zl7d\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " pod="kube-system/cilium-7zl7d" Sep 11 00:32:27.722463 systemd[1]: Created slice kubepods-besteffort-pod66f39d01_e703_4de2_bdd6_245edf607477.slice - libcontainer container kubepods-besteffort-pod66f39d01_e703_4de2_bdd6_245edf607477.slice. 
Sep 11 00:32:27.740556 kubelet[2719]: I0911 00:32:27.740456 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms2s9\" (UniqueName: \"kubernetes.io/projected/66f39d01-e703-4de2-bdd6-245edf607477-kube-api-access-ms2s9\") pod \"cilium-operator-6c4d7847fc-kn5zf\" (UID: \"66f39d01-e703-4de2-bdd6-245edf607477\") " pod="kube-system/cilium-operator-6c4d7847fc-kn5zf" Sep 11 00:32:27.741135 kubelet[2719]: I0911 00:32:27.740583 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/66f39d01-e703-4de2-bdd6-245edf607477-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-kn5zf\" (UID: \"66f39d01-e703-4de2-bdd6-245edf607477\") " pod="kube-system/cilium-operator-6c4d7847fc-kn5zf" Sep 11 00:32:27.852000 kubelet[2719]: E0911 00:32:27.851873 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:27.852626 containerd[1555]: time="2025-09-11T00:32:27.852577534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f9wr5,Uid:bb35ae83-0720-4b1f-9fff-76378ec555cd,Namespace:kube-system,Attempt:0,}" Sep 11 00:32:27.927240 containerd[1555]: time="2025-09-11T00:32:27.927163870Z" level=info msg="connecting to shim cbed7c18680f63e7cc22b7f06f410886eeb9570ebbd252b7966932dabad4caad" address="unix:///run/containerd/s/d605899bb672fe83e0a10ce5490068553af1b4840f6078d18ff90326017bf6a8" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:32:27.996486 systemd[1]: Started cri-containerd-cbed7c18680f63e7cc22b7f06f410886eeb9570ebbd252b7966932dabad4caad.scope - libcontainer container cbed7c18680f63e7cc22b7f06f410886eeb9570ebbd252b7966932dabad4caad. 
Sep 11 00:32:28.014927 kubelet[2719]: E0911 00:32:28.014904 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:28.027452 kubelet[2719]: E0911 00:32:28.026672 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:28.028477 containerd[1555]: time="2025-09-11T00:32:28.028437690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kn5zf,Uid:66f39d01-e703-4de2-bdd6-245edf607477,Namespace:kube-system,Attempt:0,}" Sep 11 00:32:28.030278 containerd[1555]: time="2025-09-11T00:32:28.030238608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f9wr5,Uid:bb35ae83-0720-4b1f-9fff-76378ec555cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbed7c18680f63e7cc22b7f06f410886eeb9570ebbd252b7966932dabad4caad\"" Sep 11 00:32:28.031092 kubelet[2719]: E0911 00:32:28.031064 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:28.033662 containerd[1555]: time="2025-09-11T00:32:28.033632170Z" level=info msg="CreateContainer within sandbox \"cbed7c18680f63e7cc22b7f06f410886eeb9570ebbd252b7966932dabad4caad\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 11 00:32:28.047911 containerd[1555]: time="2025-09-11T00:32:28.047883480Z" level=info msg="Container 8c52ab141a4633c6323034b87e556989a2827a1791249a8d94d53ec3c50f75c8: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:32:28.053955 containerd[1555]: time="2025-09-11T00:32:28.053930252Z" level=info msg="connecting to shim 499e579d7b7871cb376887b623198e9fae1e217f42e3f4f8ccfadc3162dbfff7" address="unix:///run/containerd/s/845023e1dddfaec4c98e9bddd74df5e34ecddeb79443c0454010edf243abe793" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:32:28.058131 containerd[1555]: time="2025-09-11T00:32:28.058087731Z" level=info msg="CreateContainer within sandbox \"cbed7c18680f63e7cc22b7f06f410886eeb9570ebbd252b7966932dabad4caad\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8c52ab141a4633c6323034b87e556989a2827a1791249a8d94d53ec3c50f75c8\"" Sep 11 00:32:28.059784 containerd[1555]: time="2025-09-11T00:32:28.058698377Z" level=info msg="StartContainer for \"8c52ab141a4633c6323034b87e556989a2827a1791249a8d94d53ec3c50f75c8\"" Sep 11 00:32:28.060014 containerd[1555]: time="2025-09-11T00:32:28.059994679Z" level=info msg="connecting to shim 8c52ab141a4633c6323034b87e556989a2827a1791249a8d94d53ec3c50f75c8" address="unix:///run/containerd/s/d605899bb672fe83e0a10ce5490068553af1b4840f6078d18ff90326017bf6a8" protocol=ttrpc version=3 Sep 11 00:32:28.080454 systemd[1]: Started cri-containerd-499e579d7b7871cb376887b623198e9fae1e217f42e3f4f8ccfadc3162dbfff7.scope - libcontainer container 499e579d7b7871cb376887b623198e9fae1e217f42e3f4f8ccfadc3162dbfff7. Sep 11 00:32:28.084619 systemd[1]: Started cri-containerd-8c52ab141a4633c6323034b87e556989a2827a1791249a8d94d53ec3c50f75c8.scope - libcontainer container 8c52ab141a4633c6323034b87e556989a2827a1791249a8d94d53ec3c50f75c8. 
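Note: each "connecting to shim" line carries a container ID and the shim's ttrpc socket under /run/containerd/s/; containers reuse the socket of their pod's sandbox, which is why the kube-proxy container 8c52ab… dials the same d605899b… address as its sandbox cbed7c18…. A small grouping sketch over the (ID, address) pairs visible so far, with IDs abbreviated to their first 12 hex characters:

```python
# Group the "connecting to shim" pairs seen above by shim socket: containers
# share the socket of their pod sandbox. IDs are abbreviated to 12 characters.
from collections import defaultdict

SHIM_CONNECTIONS = [
    ("cbed7c18680f", "d605899bb672"),  # kube-proxy-f9wr5 sandbox
    ("8c52ab141a46", "d605899bb672"),  # kube-proxy container, same shim
    ("499e579d7b78", "845023e1dddf"),  # cilium-operator-6c4d7847fc-kn5zf sandbox
]

by_socket = defaultdict(list)
for container_id, socket_id in SHIM_CONNECTIONS:
    by_socket[socket_id].append(container_id)

for socket_id, containers in by_socket.items():
    print(f"/run/containerd/s/{socket_id}…: {', '.join(containers)}")
```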
Sep 11 00:32:28.132734 containerd[1555]: time="2025-09-11T00:32:28.132610220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kn5zf,Uid:66f39d01-e703-4de2-bdd6-245edf607477,Namespace:kube-system,Attempt:0,} returns sandbox id \"499e579d7b7871cb376887b623198e9fae1e217f42e3f4f8ccfadc3162dbfff7\"" Sep 11 00:32:28.134642 containerd[1555]: time="2025-09-11T00:32:28.134596097Z" level=info msg="StartContainer for \"8c52ab141a4633c6323034b87e556989a2827a1791249a8d94d53ec3c50f75c8\" returns successfully" Sep 11 00:32:28.135088 kubelet[2719]: E0911 00:32:28.134876 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:28.136710 containerd[1555]: time="2025-09-11T00:32:28.136649433Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 11 00:32:28.144505 kubelet[2719]: E0911 00:32:28.144481 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:28.145026 containerd[1555]: time="2025-09-11T00:32:28.144989076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7zl7d,Uid:2c32543b-f742-4168-a488-aa704f470137,Namespace:kube-system,Attempt:0,}" Sep 11 00:32:28.165214 containerd[1555]: time="2025-09-11T00:32:28.165101056Z" level=info msg="connecting to shim e25ecb1e5ea7cd6bbfc7b73d60c0e00febafdaa6b6982a4a344e91def9d57ab8" address="unix:///run/containerd/s/52c9c9234dffeabc9f56b5f92f2a9ea121ad8781b18068464f6f0dd1651a9e94" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:32:28.197464 systemd[1]: Started cri-containerd-e25ecb1e5ea7cd6bbfc7b73d60c0e00febafdaa6b6982a4a344e91def9d57ab8.scope - libcontainer container e25ecb1e5ea7cd6bbfc7b73d60c0e00febafdaa6b6982a4a344e91def9d57ab8. 
Sep 11 00:32:28.226832 containerd[1555]: time="2025-09-11T00:32:28.226789737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7zl7d,Uid:2c32543b-f742-4168-a488-aa704f470137,Namespace:kube-system,Attempt:0,} returns sandbox id \"e25ecb1e5ea7cd6bbfc7b73d60c0e00febafdaa6b6982a4a344e91def9d57ab8\"" Sep 11 00:32:28.227714 kubelet[2719]: E0911 00:32:28.227682 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:28.242017 kubelet[2719]: E0911 00:32:28.241969 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:28.243138 kubelet[2719]: E0911 00:32:28.243109 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:28.252187 kubelet[2719]: I0911 00:32:28.252126 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f9wr5" podStartSLOduration=1.252110125 podStartE2EDuration="1.252110125s" podCreationTimestamp="2025-09-11 00:32:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:32:28.252089666 +0000 UTC m=+6.142010669" watchObservedRunningTime="2025-09-11 00:32:28.252110125 +0000 UTC m=+6.142031128" Sep 11 00:32:29.243828 kubelet[2719]: E0911 00:32:29.243763 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:29.530190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3432294365.mount: Deactivated successfully. 
Sep 11 00:32:29.940529 containerd[1555]: time="2025-09-11T00:32:29.940471060Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:29.941183 containerd[1555]: time="2025-09-11T00:32:29.941131630Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 11 00:32:29.942433 containerd[1555]: time="2025-09-11T00:32:29.942384039Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:29.943700 containerd[1555]: time="2025-09-11T00:32:29.943627621Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.806948731s" Sep 11 00:32:29.943700 containerd[1555]: time="2025-09-11T00:32:29.943696821Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 11 00:32:29.944875 containerd[1555]: time="2025-09-11T00:32:29.944833792Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 11 00:32:29.946192 containerd[1555]: time="2025-09-11T00:32:29.946162365Z" level=info msg="CreateContainer within sandbox \"499e579d7b7871cb376887b623198e9fae1e217f42e3f4f8ccfadc3162dbfff7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 11 00:32:29.956080 containerd[1555]: time="2025-09-11T00:32:29.956025469Z" level=info msg="Container c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:32:29.963562 containerd[1555]: time="2025-09-11T00:32:29.963524392Z" level=info msg="CreateContainer within sandbox \"499e579d7b7871cb376887b623198e9fae1e217f42e3f4f8ccfadc3162dbfff7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb\"" Sep 11 00:32:29.964012 containerd[1555]: time="2025-09-11T00:32:29.963977980Z" level=info msg="StartContainer for \"c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb\"" Sep 11 00:32:29.964934 containerd[1555]: time="2025-09-11T00:32:29.964902308Z" level=info msg="connecting to shim c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb" address="unix:///run/containerd/s/845023e1dddfaec4c98e9bddd74df5e34ecddeb79443c0454010edf243abe793" protocol=ttrpc version=3 Sep 11 00:32:29.986428 systemd[1]: Started cri-containerd-c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb.scope - libcontainer container c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb. 
Sep 11 00:32:30.088548 containerd[1555]: time="2025-09-11T00:32:30.088489014Z" level=info msg="StartContainer for \"c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb\" returns successfully" Sep 11 00:32:30.247145 kubelet[2719]: E0911 00:32:30.247005 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:31.249148 kubelet[2719]: E0911 00:32:31.249095 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:32.034121 kubelet[2719]: E0911 00:32:32.034057 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:32.047521 kubelet[2719]: I0911 00:32:32.047459 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-kn5zf" podStartSLOduration=3.239009493 podStartE2EDuration="5.047439286s" podCreationTimestamp="2025-09-11 00:32:27 +0000 UTC" firstStartedPulling="2025-09-11 00:32:28.136126694 +0000 UTC m=+6.026047697" lastFinishedPulling="2025-09-11 00:32:29.944556487 +0000 UTC m=+7.834477490" observedRunningTime="2025-09-11 00:32:30.456127668 +0000 UTC m=+8.346048671" watchObservedRunningTime="2025-09-11 00:32:32.047439286 +0000 UTC m=+9.937360289" Sep 11 00:32:32.251511 kubelet[2719]: E0911 00:32:32.251464 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:33.436359 kubelet[2719]: E0911 00:32:33.436294 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:38.730847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount775312047.mount: Deactivated successfully. 
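Note: the pod_startup_latency_tracker entry for cilium-operator-6c4d7847fc-kn5zf above reports both an SLO duration and an E2E duration; the values are consistent with E2E = watchObservedRunningTime − podCreationTimestamp and SLO = E2E − (lastFinishedPulling − firstStartedPulling), i.e. the SLO figure excludes image-pull time. A check against the logged timestamps (arithmetic only; the relationship is inferred from these values, not quoted from kubelet source):

```python
# Verify the cilium-operator startup-latency figures above from the logged
# timestamps (truncated to microsecond precision).
from datetime import datetime, timezone

def ts(s: str) -> datetime:
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

created   = datetime(2025, 9, 11, 0, 32, 27, tzinfo=timezone.utc)  # podCreationTimestamp
pull_from = ts("2025-09-11 00:32:28.136126694"[:26])               # firstStartedPulling
pull_to   = ts("2025-09-11 00:32:29.944556487"[:26])               # lastFinishedPulling
observed  = ts("2025-09-11 00:32:32.047439286"[:26])               # watchObservedRunningTime

e2e = (observed - created).total_seconds()
slo = e2e - (pull_to - pull_from).total_seconds()
print(f"E2E ≈ {e2e:.6f}s (logged 5.047439286s), SLO ≈ {slo:.6f}s (logged 3.239009493s)")
```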
Sep 11 00:32:40.905017 containerd[1555]: time="2025-09-11T00:32:40.904935811Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:40.905732 containerd[1555]: time="2025-09-11T00:32:40.905694329Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 11 00:32:40.906887 containerd[1555]: time="2025-09-11T00:32:40.906849585Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:40.908311 containerd[1555]: time="2025-09-11T00:32:40.908244051Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.963382708s" Sep 11 00:32:40.908356 containerd[1555]: time="2025-09-11T00:32:40.908313712Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 11 00:32:40.911729 containerd[1555]: time="2025-09-11T00:32:40.911682396Z" level=info msg="CreateContainer within sandbox \"e25ecb1e5ea7cd6bbfc7b73d60c0e00febafdaa6b6982a4a344e91def9d57ab8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 11 00:32:40.918464 containerd[1555]: time="2025-09-11T00:32:40.918396629Z" level=info msg="Container cd7bbf54a85ca7b4b6b16b2db97fc77a9a6930bde33a2b8210fc28749bdca4d7: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:32:40.922131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3146669882.mount: Deactivated successfully. Sep 11 00:32:40.924287 containerd[1555]: time="2025-09-11T00:32:40.924243729Z" level=info msg="CreateContainer within sandbox \"e25ecb1e5ea7cd6bbfc7b73d60c0e00febafdaa6b6982a4a344e91def9d57ab8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cd7bbf54a85ca7b4b6b16b2db97fc77a9a6930bde33a2b8210fc28749bdca4d7\"" Sep 11 00:32:40.924856 containerd[1555]: time="2025-09-11T00:32:40.924823651Z" level=info msg="StartContainer for \"cd7bbf54a85ca7b4b6b16b2db97fc77a9a6930bde33a2b8210fc28749bdca4d7\"" Sep 11 00:32:40.925662 containerd[1555]: time="2025-09-11T00:32:40.925621123Z" level=info msg="connecting to shim cd7bbf54a85ca7b4b6b16b2db97fc77a9a6930bde33a2b8210fc28749bdca4d7" address="unix:///run/containerd/s/52c9c9234dffeabc9f56b5f92f2a9ea121ad8781b18068464f6f0dd1651a9e94" protocol=ttrpc version=3 Sep 11 00:32:40.956451 systemd[1]: Started cri-containerd-cd7bbf54a85ca7b4b6b16b2db97fc77a9a6930bde33a2b8210fc28749bdca4d7.scope - libcontainer container cd7bbf54a85ca7b4b6b16b2db97fc77a9a6930bde33a2b8210fc28749bdca4d7. Sep 11 00:32:41.009388 systemd[1]: cri-containerd-cd7bbf54a85ca7b4b6b16b2db97fc77a9a6930bde33a2b8210fc28749bdca4d7.scope: Deactivated successfully. 
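Note: the two "Pulled image … in <duration>" results above, together with the "bytes read" figures from the matching "stop pulling image" lines, give a rough transfer rate for the registry pulls, roughly 10 MB/s for operator-generic and 15 MB/s for cilium. A quick computation (the byte counts are the transferred sizes reported by containerd, not the unpacked image sizes, and the durations include unpacking, so this is only an estimate):

```python
# Rough pull throughput for the two image pulls logged above.
PULLS = {
    "quay.io/cilium/operator-generic:v1.12.5": (18904197, 1.806948731),
    "quay.io/cilium/cilium:v1.12.5":           (166730503, 10.963382708),
}

for image, (bytes_read, seconds) in PULLS.items():
    rate_mb_s = bytes_read / seconds / 1e6
    print(f"{image}: {bytes_read} bytes in {seconds}s ≈ {rate_mb_s:.1f} MB/s")
```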
Sep 11 00:32:41.152078 containerd[1555]: time="2025-09-11T00:32:41.152022559Z" level=info msg="StartContainer for \"cd7bbf54a85ca7b4b6b16b2db97fc77a9a6930bde33a2b8210fc28749bdca4d7\" returns successfully" Sep 11 00:32:41.175293 containerd[1555]: time="2025-09-11T00:32:41.175224087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cd7bbf54a85ca7b4b6b16b2db97fc77a9a6930bde33a2b8210fc28749bdca4d7\" id:\"cd7bbf54a85ca7b4b6b16b2db97fc77a9a6930bde33a2b8210fc28749bdca4d7\" pid:3207 exited_at:{seconds:1757550761 nanos:12022965}" Sep 11 00:32:41.176086 containerd[1555]: time="2025-09-11T00:32:41.176039722Z" level=info msg="received exit event container_id:\"cd7bbf54a85ca7b4b6b16b2db97fc77a9a6930bde33a2b8210fc28749bdca4d7\" id:\"cd7bbf54a85ca7b4b6b16b2db97fc77a9a6930bde33a2b8210fc28749bdca4d7\" pid:3207 exited_at:{seconds:1757550761 nanos:12022965}" Sep 11 00:32:41.214254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd7bbf54a85ca7b4b6b16b2db97fc77a9a6930bde33a2b8210fc28749bdca4d7-rootfs.mount: Deactivated successfully. Sep 11 00:32:41.268089 kubelet[2719]: E0911 00:32:41.268046 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:42.270825 kubelet[2719]: E0911 00:32:42.270788 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:42.272920 containerd[1555]: time="2025-09-11T00:32:42.272880393Z" level=info msg="CreateContainer within sandbox \"e25ecb1e5ea7cd6bbfc7b73d60c0e00febafdaa6b6982a4a344e91def9d57ab8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 11 00:32:42.285620 containerd[1555]: time="2025-09-11T00:32:42.285564274Z" level=info msg="Container 9926c60ebcd1e72034c8ef9eb4d7f9ebb1fa3ed5a2805d630db14f17dd28c6db: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:32:42.310922 containerd[1555]: time="2025-09-11T00:32:42.310873748Z" level=info msg="CreateContainer within sandbox \"e25ecb1e5ea7cd6bbfc7b73d60c0e00febafdaa6b6982a4a344e91def9d57ab8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9926c60ebcd1e72034c8ef9eb4d7f9ebb1fa3ed5a2805d630db14f17dd28c6db\"" Sep 11 00:32:42.311356 containerd[1555]: time="2025-09-11T00:32:42.311318806Z" level=info msg="StartContainer for \"9926c60ebcd1e72034c8ef9eb4d7f9ebb1fa3ed5a2805d630db14f17dd28c6db\"" Sep 11 00:32:42.312224 containerd[1555]: time="2025-09-11T00:32:42.312198091Z" level=info msg="connecting to shim 9926c60ebcd1e72034c8ef9eb4d7f9ebb1fa3ed5a2805d630db14f17dd28c6db" address="unix:///run/containerd/s/52c9c9234dffeabc9f56b5f92f2a9ea121ad8781b18068464f6f0dd1651a9e94" protocol=ttrpc version=3 Sep 11 00:32:42.337432 systemd[1]: Started cri-containerd-9926c60ebcd1e72034c8ef9eb4d7f9ebb1fa3ed5a2805d630db14f17dd28c6db.scope - libcontainer container 9926c60ebcd1e72034c8ef9eb4d7f9ebb1fa3ed5a2805d630db14f17dd28c6db. Sep 11 00:32:42.369879 containerd[1555]: time="2025-09-11T00:32:42.369835428Z" level=info msg="StartContainer for \"9926c60ebcd1e72034c8ef9eb4d7f9ebb1fa3ed5a2805d630db14f17dd28c6db\" returns successfully" Sep 11 00:32:42.384121 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 11 00:32:42.384384 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
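Note: the exited_at fields in the TaskExit events above are plain Unix epoch seconds plus nanoseconds; converting the value lines it back up with the journal timestamps around 00:32:41. A one-liner check:

```python
# Convert the exited_at value from the TaskExit event above back to UTC;
# it matches the surrounding journal timestamps (2025-09-11 00:32:41.012 UTC).
from datetime import datetime, timezone

seconds, nanos = 1757550761, 12022965
exited_at = datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)
print(exited_at.isoformat(timespec="milliseconds"))  # 2025-09-11T00:32:41.012+00:00
```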
Sep 11 00:32:42.384971 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 11 00:32:42.387166 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 11 00:32:42.389811 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 11 00:32:42.390224 systemd[1]: cri-containerd-9926c60ebcd1e72034c8ef9eb4d7f9ebb1fa3ed5a2805d630db14f17dd28c6db.scope: Deactivated successfully. Sep 11 00:32:42.401841 containerd[1555]: time="2025-09-11T00:32:42.401787434Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9926c60ebcd1e72034c8ef9eb4d7f9ebb1fa3ed5a2805d630db14f17dd28c6db\" id:\"9926c60ebcd1e72034c8ef9eb4d7f9ebb1fa3ed5a2805d630db14f17dd28c6db\" pid:3255 exited_at:{seconds:1757550762 nanos:391097174}" Sep 11 00:32:42.403053 containerd[1555]: time="2025-09-11T00:32:42.403009603Z" level=info msg="received exit event container_id:\"9926c60ebcd1e72034c8ef9eb4d7f9ebb1fa3ed5a2805d630db14f17dd28c6db\" id:\"9926c60ebcd1e72034c8ef9eb4d7f9ebb1fa3ed5a2805d630db14f17dd28c6db\" pid:3255 exited_at:{seconds:1757550762 nanos:391097174}" Sep 11 00:32:42.422627 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:32:43.284238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9926c60ebcd1e72034c8ef9eb4d7f9ebb1fa3ed5a2805d630db14f17dd28c6db-rootfs.mount: Deactivated successfully. Sep 11 00:32:43.285268 kubelet[2719]: E0911 00:32:43.284253 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:43.288813 containerd[1555]: time="2025-09-11T00:32:43.288738343Z" level=info msg="CreateContainer within sandbox \"e25ecb1e5ea7cd6bbfc7b73d60c0e00febafdaa6b6982a4a344e91def9d57ab8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 11 00:32:43.302632 containerd[1555]: time="2025-09-11T00:32:43.302578745Z" level=info msg="Container c00b63910d2a7bdf80a48f15e2502bd099434a4ffacf814972c5a1832f2629dd: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:32:43.306353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2687703970.mount: Deactivated successfully. Sep 11 00:32:43.330197 containerd[1555]: time="2025-09-11T00:32:43.330144424Z" level=info msg="CreateContainer within sandbox \"e25ecb1e5ea7cd6bbfc7b73d60c0e00febafdaa6b6982a4a344e91def9d57ab8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c00b63910d2a7bdf80a48f15e2502bd099434a4ffacf814972c5a1832f2629dd\"" Sep 11 00:32:43.330716 containerd[1555]: time="2025-09-11T00:32:43.330670925Z" level=info msg="StartContainer for \"c00b63910d2a7bdf80a48f15e2502bd099434a4ffacf814972c5a1832f2629dd\"" Sep 11 00:32:43.332158 containerd[1555]: time="2025-09-11T00:32:43.332135350Z" level=info msg="connecting to shim c00b63910d2a7bdf80a48f15e2502bd099434a4ffacf814972c5a1832f2629dd" address="unix:///run/containerd/s/52c9c9234dffeabc9f56b5f92f2a9ea121ad8781b18068464f6f0dd1651a9e94" protocol=ttrpc version=3 Sep 11 00:32:43.354473 systemd[1]: Started cri-containerd-c00b63910d2a7bdf80a48f15e2502bd099434a4ffacf814972c5a1832f2629dd.scope - libcontainer container c00b63910d2a7bdf80a48f15e2502bd099434a4ffacf814972c5a1832f2629dd. Sep 11 00:32:43.399788 systemd[1]: cri-containerd-c00b63910d2a7bdf80a48f15e2502bd099434a4ffacf814972c5a1832f2629dd.scope: Deactivated successfully. 
Sep 11 00:32:43.403051 containerd[1555]: time="2025-09-11T00:32:43.403000601Z" level=info msg="StartContainer for \"c00b63910d2a7bdf80a48f15e2502bd099434a4ffacf814972c5a1832f2629dd\" returns successfully" Sep 11 00:32:43.403666 containerd[1555]: time="2025-09-11T00:32:43.403620387Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c00b63910d2a7bdf80a48f15e2502bd099434a4ffacf814972c5a1832f2629dd\" id:\"c00b63910d2a7bdf80a48f15e2502bd099434a4ffacf814972c5a1832f2629dd\" pid:3304 exited_at:{seconds:1757550763 nanos:403207521}" Sep 11 00:32:43.403810 containerd[1555]: time="2025-09-11T00:32:43.403733540Z" level=info msg="received exit event container_id:\"c00b63910d2a7bdf80a48f15e2502bd099434a4ffacf814972c5a1832f2629dd\" id:\"c00b63910d2a7bdf80a48f15e2502bd099434a4ffacf814972c5a1832f2629dd\" pid:3304 exited_at:{seconds:1757550763 nanos:403207521}" Sep 11 00:32:43.428935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c00b63910d2a7bdf80a48f15e2502bd099434a4ffacf814972c5a1832f2629dd-rootfs.mount: Deactivated successfully. Sep 11 00:32:44.288723 kubelet[2719]: E0911 00:32:44.288651 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:44.297335 containerd[1555]: time="2025-09-11T00:32:44.293701141Z" level=info msg="CreateContainer within sandbox \"e25ecb1e5ea7cd6bbfc7b73d60c0e00febafdaa6b6982a4a344e91def9d57ab8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 11 00:32:44.305724 containerd[1555]: time="2025-09-11T00:32:44.305656732Z" level=info msg="Container e1eba03d55cfdc31f7a5cd1a4815bb10ca3d6ab0d8ff8d81e5be1e21e83eec90: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:32:44.311993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1835835246.mount: Deactivated successfully. Sep 11 00:32:44.317540 containerd[1555]: time="2025-09-11T00:32:44.317474964Z" level=info msg="CreateContainer within sandbox \"e25ecb1e5ea7cd6bbfc7b73d60c0e00febafdaa6b6982a4a344e91def9d57ab8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e1eba03d55cfdc31f7a5cd1a4815bb10ca3d6ab0d8ff8d81e5be1e21e83eec90\"" Sep 11 00:32:44.318046 containerd[1555]: time="2025-09-11T00:32:44.318012145Z" level=info msg="StartContainer for \"e1eba03d55cfdc31f7a5cd1a4815bb10ca3d6ab0d8ff8d81e5be1e21e83eec90\"" Sep 11 00:32:44.318977 containerd[1555]: time="2025-09-11T00:32:44.318856143Z" level=info msg="connecting to shim e1eba03d55cfdc31f7a5cd1a4815bb10ca3d6ab0d8ff8d81e5be1e21e83eec90" address="unix:///run/containerd/s/52c9c9234dffeabc9f56b5f92f2a9ea121ad8781b18068464f6f0dd1651a9e94" protocol=ttrpc version=3 Sep 11 00:32:44.339587 systemd[1]: Started cri-containerd-e1eba03d55cfdc31f7a5cd1a4815bb10ca3d6ab0d8ff8d81e5be1e21e83eec90.scope - libcontainer container e1eba03d55cfdc31f7a5cd1a4815bb10ca3d6ab0d8ff8d81e5be1e21e83eec90. Sep 11 00:32:44.371163 systemd[1]: cri-containerd-e1eba03d55cfdc31f7a5cd1a4815bb10ca3d6ab0d8ff8d81e5be1e21e83eec90.scope: Deactivated successfully. 
Sep 11 00:32:44.372169 containerd[1555]: time="2025-09-11T00:32:44.372113661Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e1eba03d55cfdc31f7a5cd1a4815bb10ca3d6ab0d8ff8d81e5be1e21e83eec90\" id:\"e1eba03d55cfdc31f7a5cd1a4815bb10ca3d6ab0d8ff8d81e5be1e21e83eec90\" pid:3343 exited_at:{seconds:1757550764 nanos:371720362}" Sep 11 00:32:44.373635 containerd[1555]: time="2025-09-11T00:32:44.373603143Z" level=info msg="received exit event container_id:\"e1eba03d55cfdc31f7a5cd1a4815bb10ca3d6ab0d8ff8d81e5be1e21e83eec90\" id:\"e1eba03d55cfdc31f7a5cd1a4815bb10ca3d6ab0d8ff8d81e5be1e21e83eec90\" pid:3343 exited_at:{seconds:1757550764 nanos:371720362}" Sep 11 00:32:44.381234 containerd[1555]: time="2025-09-11T00:32:44.381191428Z" level=info msg="StartContainer for \"e1eba03d55cfdc31f7a5cd1a4815bb10ca3d6ab0d8ff8d81e5be1e21e83eec90\" returns successfully" Sep 11 00:32:44.395096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1eba03d55cfdc31f7a5cd1a4815bb10ca3d6ab0d8ff8d81e5be1e21e83eec90-rootfs.mount: Deactivated successfully. Sep 11 00:32:45.293719 kubelet[2719]: E0911 00:32:45.293678 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:45.295718 containerd[1555]: time="2025-09-11T00:32:45.295395532Z" level=info msg="CreateContainer within sandbox \"e25ecb1e5ea7cd6bbfc7b73d60c0e00febafdaa6b6982a4a344e91def9d57ab8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 11 00:32:45.310827 containerd[1555]: time="2025-09-11T00:32:45.310776420Z" level=info msg="Container 509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:32:45.318871 containerd[1555]: time="2025-09-11T00:32:45.318834345Z" level=info msg="CreateContainer within sandbox \"e25ecb1e5ea7cd6bbfc7b73d60c0e00febafdaa6b6982a4a344e91def9d57ab8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9\"" Sep 11 00:32:45.319375 containerd[1555]: time="2025-09-11T00:32:45.319347571Z" level=info msg="StartContainer for \"509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9\"" Sep 11 00:32:45.320111 containerd[1555]: time="2025-09-11T00:32:45.320078555Z" level=info msg="connecting to shim 509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9" address="unix:///run/containerd/s/52c9c9234dffeabc9f56b5f92f2a9ea121ad8781b18068464f6f0dd1651a9e94" protocol=ttrpc version=3 Sep 11 00:32:45.350435 systemd[1]: Started cri-containerd-509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9.scope - libcontainer container 509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9. 
Sep 11 00:32:45.389587 containerd[1555]: time="2025-09-11T00:32:45.389537625Z" level=info msg="StartContainer for \"509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9\" returns successfully" Sep 11 00:32:45.485276 containerd[1555]: time="2025-09-11T00:32:45.485233241Z" level=info msg="TaskExit event in podsandbox handler container_id:\"509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9\" id:\"312174475706ce1908976b9c57ac74e998d1ed963a4fdf1afa52f67601f6eaba\" pid:3412 exited_at:{seconds:1757550765 nanos:484640567}" Sep 11 00:32:45.563753 kubelet[2719]: I0911 00:32:45.563657 2719 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 11 00:32:45.593973 systemd[1]: Created slice kubepods-burstable-pod06a71062_6af8_4cb2_a351_2ade5ce411e8.slice - libcontainer container kubepods-burstable-pod06a71062_6af8_4cb2_a351_2ade5ce411e8.slice. Sep 11 00:32:45.603443 systemd[1]: Created slice kubepods-burstable-pod5165e813_e851_4edd_9821_2d169d5446fa.slice - libcontainer container kubepods-burstable-pod5165e813_e851_4edd_9821_2d169d5446fa.slice. Sep 11 00:32:45.659672 kubelet[2719]: I0911 00:32:45.659625 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06a71062-6af8-4cb2-a351-2ade5ce411e8-config-volume\") pod \"coredns-668d6bf9bc-znpmx\" (UID: \"06a71062-6af8-4cb2-a351-2ade5ce411e8\") " pod="kube-system/coredns-668d6bf9bc-znpmx" Sep 11 00:32:45.659672 kubelet[2719]: I0911 00:32:45.659665 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppg4t\" (UniqueName: \"kubernetes.io/projected/06a71062-6af8-4cb2-a351-2ade5ce411e8-kube-api-access-ppg4t\") pod \"coredns-668d6bf9bc-znpmx\" (UID: \"06a71062-6af8-4cb2-a351-2ade5ce411e8\") " pod="kube-system/coredns-668d6bf9bc-znpmx" Sep 11 00:32:45.659672 kubelet[2719]: I0911 00:32:45.659685 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5165e813-e851-4edd-9821-2d169d5446fa-config-volume\") pod \"coredns-668d6bf9bc-bzxnf\" (UID: \"5165e813-e851-4edd-9821-2d169d5446fa\") " pod="kube-system/coredns-668d6bf9bc-bzxnf" Sep 11 00:32:45.659672 kubelet[2719]: I0911 00:32:45.659705 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvxw8\" (UniqueName: \"kubernetes.io/projected/5165e813-e851-4edd-9821-2d169d5446fa-kube-api-access-qvxw8\") pod \"coredns-668d6bf9bc-bzxnf\" (UID: \"5165e813-e851-4edd-9821-2d169d5446fa\") " pod="kube-system/coredns-668d6bf9bc-bzxnf" Sep 11 00:32:45.899039 kubelet[2719]: E0911 00:32:45.898686 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:45.899722 containerd[1555]: time="2025-09-11T00:32:45.899686423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-znpmx,Uid:06a71062-6af8-4cb2-a351-2ade5ce411e8,Namespace:kube-system,Attempt:0,}" Sep 11 00:32:45.906908 kubelet[2719]: E0911 00:32:45.906868 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:45.907520 containerd[1555]: time="2025-09-11T00:32:45.907460765Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-bzxnf,Uid:5165e813-e851-4edd-9821-2d169d5446fa,Namespace:kube-system,Attempt:0,}" Sep 11 00:32:46.299407 kubelet[2719]: E0911 00:32:46.299370 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:46.314884 kubelet[2719]: I0911 00:32:46.314827 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7zl7d" podStartSLOduration=6.634589311 podStartE2EDuration="19.314806806s" podCreationTimestamp="2025-09-11 00:32:27 +0000 UTC" firstStartedPulling="2025-09-11 00:32:28.228899169 +0000 UTC m=+6.118820172" lastFinishedPulling="2025-09-11 00:32:40.909116664 +0000 UTC m=+18.799037667" observedRunningTime="2025-09-11 00:32:46.313530556 +0000 UTC m=+24.203451559" watchObservedRunningTime="2025-09-11 00:32:46.314806806 +0000 UTC m=+24.204727809" Sep 11 00:32:47.301702 kubelet[2719]: E0911 00:32:47.301640 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:47.578229 systemd-networkd[1468]: cilium_host: Link UP Sep 11 00:32:47.578711 systemd-networkd[1468]: cilium_net: Link UP Sep 11 00:32:47.578904 systemd-networkd[1468]: cilium_net: Gained carrier Sep 11 00:32:47.579076 systemd-networkd[1468]: cilium_host: Gained carrier Sep 11 00:32:47.684992 systemd-networkd[1468]: cilium_vxlan: Link UP Sep 11 00:32:47.685005 systemd-networkd[1468]: cilium_vxlan: Gained carrier Sep 11 00:32:47.765447 systemd-networkd[1468]: cilium_net: Gained IPv6LL Sep 11 00:32:47.904341 kernel: NET: Registered PF_ALG protocol family Sep 11 00:32:48.302874 kubelet[2719]: E0911 00:32:48.302836 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:48.533508 systemd-networkd[1468]: cilium_host: Gained IPv6LL Sep 11 00:32:48.577437 systemd-networkd[1468]: lxc_health: Link UP Sep 11 00:32:48.579055 systemd-networkd[1468]: lxc_health: Gained carrier Sep 11 00:32:48.993756 kernel: eth0: renamed from tmp08336 Sep 11 00:32:48.994947 systemd-networkd[1468]: lxc73fccead4767: Link UP Sep 11 00:32:48.995962 systemd-networkd[1468]: lxc73fccead4767: Gained carrier Sep 11 00:32:48.998410 systemd-networkd[1468]: lxcee7984bdfeeb: Link UP Sep 11 00:32:49.005376 kernel: eth0: renamed from tmp1cbc2 Sep 11 00:32:49.006453 systemd-networkd[1468]: lxcee7984bdfeeb: Gained carrier Sep 11 00:32:49.621689 systemd-networkd[1468]: cilium_vxlan: Gained IPv6LL Sep 11 00:32:50.069506 systemd-networkd[1468]: lxc_health: Gained IPv6LL Sep 11 00:32:50.147320 kubelet[2719]: E0911 00:32:50.147266 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:50.198477 systemd-networkd[1468]: lxc73fccead4767: Gained IPv6LL Sep 11 00:32:50.306566 kubelet[2719]: E0911 00:32:50.306526 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:50.709573 systemd-networkd[1468]: lxcee7984bdfeeb: Gained IPv6LL Sep 11 00:32:51.269733 systemd[1]: Started sshd@9-10.0.0.151:22-10.0.0.1:46434.service - OpenSSH per-connection server daemon 
(10.0.0.1:46434). Sep 11 00:32:51.308084 kubelet[2719]: E0911 00:32:51.308054 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:51.327428 sshd[3883]: Accepted publickey for core from 10.0.0.1 port 46434 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:32:51.329121 sshd-session[3883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:32:51.333936 systemd-logind[1539]: New session 10 of user core. Sep 11 00:32:51.341459 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 11 00:32:51.538679 sshd[3885]: Connection closed by 10.0.0.1 port 46434 Sep 11 00:32:51.539530 sshd-session[3883]: pam_unix(sshd:session): session closed for user core Sep 11 00:32:51.544367 systemd[1]: sshd@9-10.0.0.151:22-10.0.0.1:46434.service: Deactivated successfully. Sep 11 00:32:51.546634 systemd[1]: session-10.scope: Deactivated successfully. Sep 11 00:32:51.547527 systemd-logind[1539]: Session 10 logged out. Waiting for processes to exit. Sep 11 00:32:51.549214 systemd-logind[1539]: Removed session 10. Sep 11 00:32:52.319926 containerd[1555]: time="2025-09-11T00:32:52.319868209Z" level=info msg="connecting to shim 08336a30898478ce26ae7ba8efc954d8fc9347078f639aac62a338d9231aa3b0" address="unix:///run/containerd/s/c842612adde4753b8a00e3ddec45b36d76ed42fccb9f6f1b437e907409fa0b19" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:32:52.326200 containerd[1555]: time="2025-09-11T00:32:52.325997694Z" level=info msg="connecting to shim 1cbc2cabfddcc0dd4b63afab724968b34870368c83bca603f17b8300334ef681" address="unix:///run/containerd/s/88777b025ab844f15cf9af024f8b72666a1adcabfdb5e82941d0a944fc559ca1" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:32:52.360473 systemd[1]: Started cri-containerd-08336a30898478ce26ae7ba8efc954d8fc9347078f639aac62a338d9231aa3b0.scope - libcontainer container 08336a30898478ce26ae7ba8efc954d8fc9347078f639aac62a338d9231aa3b0. Sep 11 00:32:52.362464 systemd[1]: Started cri-containerd-1cbc2cabfddcc0dd4b63afab724968b34870368c83bca603f17b8300334ef681.scope - libcontainer container 1cbc2cabfddcc0dd4b63afab724968b34870368c83bca603f17b8300334ef681. 
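Note: the "Accepted publickey … RSA SHA256:2FKl…" line identifies the client key by the standard OpenSSH fingerprint format: the base64-encoded SHA-256 digest of the raw public-key blob with trailing "=" padding stripped. A small sketch that reproduces the format for one's own public key (the key behind the fingerprint in this log is, of course, not recoverable from it; the file path in the usage comment is hypothetical):

```python
# Reproduce sshd's "SHA256:<base64>" fingerprint format for a public key.
# Pass the base64 blob (second field) of an authorized_keys / *.pub line.
import base64
import hashlib

def ssh_fingerprint_sha256(key_b64: str) -> str:
    blob = base64.b64decode(key_b64)
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode("ascii").rstrip("=")

# Usage (hypothetical path):
#   print(ssh_fingerprint_sha256(open("/home/core/.ssh/id_rsa.pub").read().split()[1]))
```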
Sep 11 00:32:52.376935 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:32:52.379508 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:32:52.411579 containerd[1555]: time="2025-09-11T00:32:52.411538880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-znpmx,Uid:06a71062-6af8-4cb2-a351-2ade5ce411e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"1cbc2cabfddcc0dd4b63afab724968b34870368c83bca603f17b8300334ef681\"" Sep 11 00:32:52.412825 kubelet[2719]: E0911 00:32:52.412579 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:52.414225 containerd[1555]: time="2025-09-11T00:32:52.414138294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bzxnf,Uid:5165e813-e851-4edd-9821-2d169d5446fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"08336a30898478ce26ae7ba8efc954d8fc9347078f639aac62a338d9231aa3b0\"" Sep 11 00:32:52.414955 containerd[1555]: time="2025-09-11T00:32:52.414899986Z" level=info msg="CreateContainer within sandbox \"1cbc2cabfddcc0dd4b63afab724968b34870368c83bca603f17b8300334ef681\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:32:52.415427 kubelet[2719]: E0911 00:32:52.415286 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:52.417850 containerd[1555]: time="2025-09-11T00:32:52.417825240Z" level=info msg="CreateContainer within sandbox \"08336a30898478ce26ae7ba8efc954d8fc9347078f639aac62a338d9231aa3b0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:32:52.429817 containerd[1555]: time="2025-09-11T00:32:52.429786392Z" level=info msg="Container 31c256aa2e0373c86e7328fa97def093a5f2747354a5036f1ced02af76317217: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:32:52.432204 containerd[1555]: time="2025-09-11T00:32:52.432156726Z" level=info msg="Container e05a845a2c099dde103b8334ad332d2c1de977560ecc396ef556c601df04666a: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:32:52.439486 containerd[1555]: time="2025-09-11T00:32:52.439442353Z" level=info msg="CreateContainer within sandbox \"08336a30898478ce26ae7ba8efc954d8fc9347078f639aac62a338d9231aa3b0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e05a845a2c099dde103b8334ad332d2c1de977560ecc396ef556c601df04666a\"" Sep 11 00:32:52.440964 containerd[1555]: time="2025-09-11T00:32:52.440800535Z" level=info msg="StartContainer for \"e05a845a2c099dde103b8334ad332d2c1de977560ecc396ef556c601df04666a\"" Sep 11 00:32:52.441292 containerd[1555]: time="2025-09-11T00:32:52.441260729Z" level=info msg="CreateContainer within sandbox \"1cbc2cabfddcc0dd4b63afab724968b34870368c83bca603f17b8300334ef681\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"31c256aa2e0373c86e7328fa97def093a5f2747354a5036f1ced02af76317217\"" Sep 11 00:32:52.441778 containerd[1555]: time="2025-09-11T00:32:52.441750479Z" level=info msg="StartContainer for \"31c256aa2e0373c86e7328fa97def093a5f2747354a5036f1ced02af76317217\"" Sep 11 00:32:52.441891 containerd[1555]: time="2025-09-11T00:32:52.441750830Z" level=info msg="connecting to shim 
e05a845a2c099dde103b8334ad332d2c1de977560ecc396ef556c601df04666a" address="unix:///run/containerd/s/c842612adde4753b8a00e3ddec45b36d76ed42fccb9f6f1b437e907409fa0b19" protocol=ttrpc version=3 Sep 11 00:32:52.442589 containerd[1555]: time="2025-09-11T00:32:52.442564398Z" level=info msg="connecting to shim 31c256aa2e0373c86e7328fa97def093a5f2747354a5036f1ced02af76317217" address="unix:///run/containerd/s/88777b025ab844f15cf9af024f8b72666a1adcabfdb5e82941d0a944fc559ca1" protocol=ttrpc version=3 Sep 11 00:32:52.477464 systemd[1]: Started cri-containerd-31c256aa2e0373c86e7328fa97def093a5f2747354a5036f1ced02af76317217.scope - libcontainer container 31c256aa2e0373c86e7328fa97def093a5f2747354a5036f1ced02af76317217. Sep 11 00:32:52.478736 systemd[1]: Started cri-containerd-e05a845a2c099dde103b8334ad332d2c1de977560ecc396ef556c601df04666a.scope - libcontainer container e05a845a2c099dde103b8334ad332d2c1de977560ecc396ef556c601df04666a. Sep 11 00:32:52.515226 containerd[1555]: time="2025-09-11T00:32:52.515176315Z" level=info msg="StartContainer for \"e05a845a2c099dde103b8334ad332d2c1de977560ecc396ef556c601df04666a\" returns successfully" Sep 11 00:32:52.515481 containerd[1555]: time="2025-09-11T00:32:52.515446963Z" level=info msg="StartContainer for \"31c256aa2e0373c86e7328fa97def093a5f2747354a5036f1ced02af76317217\" returns successfully" Sep 11 00:32:53.305714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount460636956.mount: Deactivated successfully. Sep 11 00:32:53.322055 kubelet[2719]: E0911 00:32:53.321769 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:53.324320 kubelet[2719]: E0911 00:32:53.324161 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:53.334918 kubelet[2719]: I0911 00:32:53.334842 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-znpmx" podStartSLOduration=26.334824824000002 podStartE2EDuration="26.334824824s" podCreationTimestamp="2025-09-11 00:32:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:32:53.333581309 +0000 UTC m=+31.223502322" watchObservedRunningTime="2025-09-11 00:32:53.334824824 +0000 UTC m=+31.224745827" Sep 11 00:32:54.326062 kubelet[2719]: E0911 00:32:54.326020 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:54.326644 kubelet[2719]: E0911 00:32:54.326619 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:55.328087 kubelet[2719]: E0911 00:32:55.328051 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:55.328574 kubelet[2719]: E0911 00:32:55.328210 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:32:56.555713 systemd[1]: Started sshd@10-10.0.0.151:22-10.0.0.1:46436.service - 
OpenSSH per-connection server daemon (10.0.0.1:46436). Sep 11 00:32:56.596569 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 46436 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:32:56.598110 sshd-session[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:32:56.603039 systemd-logind[1539]: New session 11 of user core. Sep 11 00:32:56.613434 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 11 00:32:56.745464 sshd[4071]: Connection closed by 10.0.0.1 port 46436 Sep 11 00:32:56.745812 sshd-session[4069]: pam_unix(sshd:session): session closed for user core Sep 11 00:32:56.750836 systemd[1]: sshd@10-10.0.0.151:22-10.0.0.1:46436.service: Deactivated successfully. Sep 11 00:32:56.753053 systemd[1]: session-11.scope: Deactivated successfully. Sep 11 00:32:56.753869 systemd-logind[1539]: Session 11 logged out. Waiting for processes to exit. Sep 11 00:32:56.755204 systemd-logind[1539]: Removed session 11. Sep 11 00:33:01.763239 systemd[1]: Started sshd@11-10.0.0.151:22-10.0.0.1:47112.service - OpenSSH per-connection server daemon (10.0.0.1:47112). Sep 11 00:33:01.811705 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 47112 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:33:01.813144 sshd-session[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:01.817491 systemd-logind[1539]: New session 12 of user core. Sep 11 00:33:01.832442 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 11 00:33:01.945641 sshd[4089]: Connection closed by 10.0.0.1 port 47112 Sep 11 00:33:01.945967 sshd-session[4087]: pam_unix(sshd:session): session closed for user core Sep 11 00:33:01.950736 systemd[1]: sshd@11-10.0.0.151:22-10.0.0.1:47112.service: Deactivated successfully. Sep 11 00:33:01.952954 systemd[1]: session-12.scope: Deactivated successfully. Sep 11 00:33:01.953857 systemd-logind[1539]: Session 12 logged out. Waiting for processes to exit. Sep 11 00:33:01.955176 systemd-logind[1539]: Removed session 12. Sep 11 00:33:06.964388 systemd[1]: Started sshd@12-10.0.0.151:22-10.0.0.1:47120.service - OpenSSH per-connection server daemon (10.0.0.1:47120). Sep 11 00:33:07.002705 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 47120 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:33:07.004221 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:07.008584 systemd-logind[1539]: New session 13 of user core. Sep 11 00:33:07.013455 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 11 00:33:07.128119 sshd[4107]: Connection closed by 10.0.0.1 port 47120 Sep 11 00:33:07.128574 sshd-session[4105]: pam_unix(sshd:session): session closed for user core Sep 11 00:33:07.137149 systemd[1]: sshd@12-10.0.0.151:22-10.0.0.1:47120.service: Deactivated successfully. Sep 11 00:33:07.139331 systemd[1]: session-13.scope: Deactivated successfully. Sep 11 00:33:07.140275 systemd-logind[1539]: Session 13 logged out. Waiting for processes to exit. Sep 11 00:33:07.143904 systemd[1]: Started sshd@13-10.0.0.151:22-10.0.0.1:47132.service - OpenSSH per-connection server daemon (10.0.0.1:47132). Sep 11 00:33:07.144629 systemd-logind[1539]: Removed session 13. 
Sep 11 00:33:07.195008 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 47132 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:33:07.196542 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:07.200867 systemd-logind[1539]: New session 14 of user core. Sep 11 00:33:07.216416 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 11 00:33:07.371459 sshd[4124]: Connection closed by 10.0.0.1 port 47132 Sep 11 00:33:07.371982 sshd-session[4122]: pam_unix(sshd:session): session closed for user core Sep 11 00:33:07.386250 systemd[1]: sshd@13-10.0.0.151:22-10.0.0.1:47132.service: Deactivated successfully. Sep 11 00:33:07.392545 systemd[1]: session-14.scope: Deactivated successfully. Sep 11 00:33:07.394616 systemd-logind[1539]: Session 14 logged out. Waiting for processes to exit. Sep 11 00:33:07.398571 systemd[1]: Started sshd@14-10.0.0.151:22-10.0.0.1:47136.service - OpenSSH per-connection server daemon (10.0.0.1:47136). Sep 11 00:33:07.400723 systemd-logind[1539]: Removed session 14. Sep 11 00:33:07.450079 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 47136 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:33:07.451562 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:07.456001 systemd-logind[1539]: New session 15 of user core. Sep 11 00:33:07.463415 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 11 00:33:07.577005 sshd[4137]: Connection closed by 10.0.0.1 port 47136 Sep 11 00:33:07.577271 sshd-session[4135]: pam_unix(sshd:session): session closed for user core Sep 11 00:33:07.582564 systemd[1]: sshd@14-10.0.0.151:22-10.0.0.1:47136.service: Deactivated successfully. Sep 11 00:33:07.584785 systemd[1]: session-15.scope: Deactivated successfully. Sep 11 00:33:07.585631 systemd-logind[1539]: Session 15 logged out. Waiting for processes to exit. Sep 11 00:33:07.587004 systemd-logind[1539]: Removed session 15. Sep 11 00:33:12.593563 systemd[1]: Started sshd@15-10.0.0.151:22-10.0.0.1:59894.service - OpenSSH per-connection server daemon (10.0.0.1:59894). Sep 11 00:33:12.649287 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 59894 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:33:12.650787 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:12.655467 systemd-logind[1539]: New session 16 of user core. Sep 11 00:33:12.667445 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 11 00:33:12.784671 sshd[4152]: Connection closed by 10.0.0.1 port 59894 Sep 11 00:33:12.785014 sshd-session[4150]: pam_unix(sshd:session): session closed for user core Sep 11 00:33:12.789787 systemd[1]: sshd@15-10.0.0.151:22-10.0.0.1:59894.service: Deactivated successfully. Sep 11 00:33:12.792037 systemd[1]: session-16.scope: Deactivated successfully. Sep 11 00:33:12.793125 systemd-logind[1539]: Session 16 logged out. Waiting for processes to exit. Sep 11 00:33:12.794402 systemd-logind[1539]: Removed session 16. Sep 11 00:33:17.797404 systemd[1]: Started sshd@16-10.0.0.151:22-10.0.0.1:59900.service - OpenSSH per-connection server daemon (10.0.0.1:59900). 
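The per-connection OpenSSH units that dominate this stretch of the log encode their endpoints directly in the instance name, for example sshd@13-10.0.0.151:22-10.0.0.1:47132.service. A small Go sketch that parses that naming pattern as it appears in these lines (the format is read off the log itself, not taken from sshd documentation):

```go
package main

import (
	"fmt"
	"regexp"
)

// unitRE matches the per-connection unit names seen in the log:
// a connection counter, the local address:port, then the remote address:port.
var unitRE = regexp.MustCompile(`^sshd@(\d+)-([\d.]+):(\d+)-([\d.]+):(\d+)\.service$`)

func main() {
	unit := "sshd@13-10.0.0.151:22-10.0.0.1:47132.service" // taken from the log above
	m := unitRE.FindStringSubmatch(unit)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("connection #%s: local %s:%s <- remote %s:%s\n", m[1], m[2], m[3], m[4], m[5])
}
```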
Sep 11 00:33:17.841954 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 59900 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:33:17.843419 sshd-session[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:17.847711 systemd-logind[1539]: New session 17 of user core. Sep 11 00:33:17.860446 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 11 00:33:17.975131 sshd[4167]: Connection closed by 10.0.0.1 port 59900 Sep 11 00:33:17.975497 sshd-session[4165]: pam_unix(sshd:session): session closed for user core Sep 11 00:33:17.988103 systemd[1]: sshd@16-10.0.0.151:22-10.0.0.1:59900.service: Deactivated successfully. Sep 11 00:33:17.990236 systemd[1]: session-17.scope: Deactivated successfully. Sep 11 00:33:17.991088 systemd-logind[1539]: Session 17 logged out. Waiting for processes to exit. Sep 11 00:33:17.994386 systemd[1]: Started sshd@17-10.0.0.151:22-10.0.0.1:59906.service - OpenSSH per-connection server daemon (10.0.0.1:59906). Sep 11 00:33:17.994984 systemd-logind[1539]: Removed session 17. Sep 11 00:33:18.047193 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 59906 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:33:18.048890 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:18.054084 systemd-logind[1539]: New session 18 of user core. Sep 11 00:33:18.065456 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 11 00:33:18.247907 sshd[4183]: Connection closed by 10.0.0.1 port 59906 Sep 11 00:33:18.248371 sshd-session[4181]: pam_unix(sshd:session): session closed for user core Sep 11 00:33:18.264119 systemd[1]: sshd@17-10.0.0.151:22-10.0.0.1:59906.service: Deactivated successfully. Sep 11 00:33:18.266156 systemd[1]: session-18.scope: Deactivated successfully. Sep 11 00:33:18.266966 systemd-logind[1539]: Session 18 logged out. Waiting for processes to exit. Sep 11 00:33:18.270201 systemd[1]: Started sshd@18-10.0.0.151:22-10.0.0.1:59920.service - OpenSSH per-connection server daemon (10.0.0.1:59920). Sep 11 00:33:18.270956 systemd-logind[1539]: Removed session 18. Sep 11 00:33:18.323311 sshd[4194]: Accepted publickey for core from 10.0.0.1 port 59920 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:33:18.324646 sshd-session[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:18.329195 systemd-logind[1539]: New session 19 of user core. Sep 11 00:33:18.340445 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 11 00:33:18.816773 sshd[4196]: Connection closed by 10.0.0.1 port 59920 Sep 11 00:33:18.817523 sshd-session[4194]: pam_unix(sshd:session): session closed for user core Sep 11 00:33:18.832252 systemd[1]: sshd@18-10.0.0.151:22-10.0.0.1:59920.service: Deactivated successfully. Sep 11 00:33:18.836240 systemd[1]: session-19.scope: Deactivated successfully. Sep 11 00:33:18.839046 systemd-logind[1539]: Session 19 logged out. Waiting for processes to exit. Sep 11 00:33:18.841488 systemd[1]: Started sshd@19-10.0.0.151:22-10.0.0.1:59936.service - OpenSSH per-connection server daemon (10.0.0.1:59936). Sep 11 00:33:18.842713 systemd-logind[1539]: Removed session 19. 
Sep 11 00:33:18.885422 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 59936 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:33:18.886845 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:18.891232 systemd-logind[1539]: New session 20 of user core. Sep 11 00:33:18.906436 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 11 00:33:19.165980 sshd[4216]: Connection closed by 10.0.0.1 port 59936 Sep 11 00:33:19.166337 sshd-session[4214]: pam_unix(sshd:session): session closed for user core Sep 11 00:33:19.177229 systemd[1]: sshd@19-10.0.0.151:22-10.0.0.1:59936.service: Deactivated successfully. Sep 11 00:33:19.179288 systemd[1]: session-20.scope: Deactivated successfully. Sep 11 00:33:19.182609 systemd-logind[1539]: Session 20 logged out. Waiting for processes to exit. Sep 11 00:33:19.185929 systemd[1]: Started sshd@20-10.0.0.151:22-10.0.0.1:59942.service - OpenSSH per-connection server daemon (10.0.0.1:59942). Sep 11 00:33:19.186572 systemd-logind[1539]: Removed session 20. Sep 11 00:33:19.236324 sshd[4227]: Accepted publickey for core from 10.0.0.1 port 59942 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:33:19.237845 sshd-session[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:19.242427 systemd-logind[1539]: New session 21 of user core. Sep 11 00:33:19.249459 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 11 00:33:19.362731 sshd[4229]: Connection closed by 10.0.0.1 port 59942 Sep 11 00:33:19.363121 sshd-session[4227]: pam_unix(sshd:session): session closed for user core Sep 11 00:33:19.367535 systemd[1]: sshd@20-10.0.0.151:22-10.0.0.1:59942.service: Deactivated successfully. Sep 11 00:33:19.370344 systemd[1]: session-21.scope: Deactivated successfully. Sep 11 00:33:19.371162 systemd-logind[1539]: Session 21 logged out. Waiting for processes to exit. Sep 11 00:33:19.374110 systemd-logind[1539]: Removed session 21. Sep 11 00:33:24.375927 systemd[1]: Started sshd@21-10.0.0.151:22-10.0.0.1:33494.service - OpenSSH per-connection server daemon (10.0.0.1:33494). Sep 11 00:33:24.432865 sshd[4248]: Accepted publickey for core from 10.0.0.1 port 33494 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:33:24.434275 sshd-session[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:24.438613 systemd-logind[1539]: New session 22 of user core. Sep 11 00:33:24.451443 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 11 00:33:24.559668 sshd[4250]: Connection closed by 10.0.0.1 port 33494 Sep 11 00:33:24.559987 sshd-session[4248]: pam_unix(sshd:session): session closed for user core Sep 11 00:33:24.564544 systemd[1]: sshd@21-10.0.0.151:22-10.0.0.1:33494.service: Deactivated successfully. Sep 11 00:33:24.566652 systemd[1]: session-22.scope: Deactivated successfully. Sep 11 00:33:24.567415 systemd-logind[1539]: Session 22 logged out. Waiting for processes to exit. Sep 11 00:33:24.568692 systemd-logind[1539]: Removed session 22. Sep 11 00:33:29.572281 systemd[1]: Started sshd@22-10.0.0.151:22-10.0.0.1:33506.service - OpenSSH per-connection server daemon (10.0.0.1:33506). 
Sep 11 00:33:29.620223 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 33506 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:33:29.621810 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:29.626226 systemd-logind[1539]: New session 23 of user core. Sep 11 00:33:29.634431 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 11 00:33:29.743760 sshd[4267]: Connection closed by 10.0.0.1 port 33506 Sep 11 00:33:29.744082 sshd-session[4265]: pam_unix(sshd:session): session closed for user core Sep 11 00:33:29.748591 systemd[1]: sshd@22-10.0.0.151:22-10.0.0.1:33506.service: Deactivated successfully. Sep 11 00:33:29.750681 systemd[1]: session-23.scope: Deactivated successfully. Sep 11 00:33:29.751503 systemd-logind[1539]: Session 23 logged out. Waiting for processes to exit. Sep 11 00:33:29.752802 systemd-logind[1539]: Removed session 23. Sep 11 00:33:33.213445 kubelet[2719]: E0911 00:33:33.213394 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:33.213903 kubelet[2719]: E0911 00:33:33.213468 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:34.756469 systemd[1]: Started sshd@23-10.0.0.151:22-10.0.0.1:41064.service - OpenSSH per-connection server daemon (10.0.0.1:41064). Sep 11 00:33:34.803562 sshd[4281]: Accepted publickey for core from 10.0.0.1 port 41064 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:33:34.804792 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:34.808837 systemd-logind[1539]: New session 24 of user core. Sep 11 00:33:34.818428 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 11 00:33:34.925985 sshd[4283]: Connection closed by 10.0.0.1 port 41064 Sep 11 00:33:34.926400 sshd-session[4281]: pam_unix(sshd:session): session closed for user core Sep 11 00:33:34.941116 systemd[1]: sshd@23-10.0.0.151:22-10.0.0.1:41064.service: Deactivated successfully. Sep 11 00:33:34.943124 systemd[1]: session-24.scope: Deactivated successfully. Sep 11 00:33:34.943852 systemd-logind[1539]: Session 24 logged out. Waiting for processes to exit. Sep 11 00:33:34.947085 systemd[1]: Started sshd@24-10.0.0.151:22-10.0.0.1:41078.service - OpenSSH per-connection server daemon (10.0.0.1:41078). Sep 11 00:33:34.947967 systemd-logind[1539]: Removed session 24. Sep 11 00:33:35.001940 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 41078 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:33:35.003486 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:35.007892 systemd-logind[1539]: New session 25 of user core. Sep 11 00:33:35.024452 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 11 00:33:36.340337 kubelet[2719]: I0911 00:33:36.340057 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bzxnf" podStartSLOduration=69.340035564 podStartE2EDuration="1m9.340035564s" podCreationTimestamp="2025-09-11 00:32:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:32:53.361189369 +0000 UTC m=+31.251110372" watchObservedRunningTime="2025-09-11 00:33:36.340035564 +0000 UTC m=+74.229956567" Sep 11 00:33:36.350746 containerd[1555]: time="2025-09-11T00:33:36.350639967Z" level=info msg="StopContainer for \"c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb\" with timeout 30 (s)" Sep 11 00:33:36.359737 containerd[1555]: time="2025-09-11T00:33:36.359699969Z" level=info msg="Stop container \"c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb\" with signal terminated" Sep 11 00:33:36.371856 systemd[1]: cri-containerd-c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb.scope: Deactivated successfully. Sep 11 00:33:36.373102 containerd[1555]: time="2025-09-11T00:33:36.373052880Z" level=info msg="received exit event container_id:\"c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb\" id:\"c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb\" pid:3144 exited_at:{seconds:1757550816 nanos:372674647}" Sep 11 00:33:36.373895 containerd[1555]: time="2025-09-11T00:33:36.373752465Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb\" id:\"c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb\" pid:3144 exited_at:{seconds:1757550816 nanos:372674647}" Sep 11 00:33:36.384236 containerd[1555]: time="2025-09-11T00:33:36.384182203Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 11 00:33:36.385987 containerd[1555]: time="2025-09-11T00:33:36.385954238Z" level=info msg="TaskExit event in podsandbox handler container_id:\"509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9\" id:\"d36e6539c49ce2431d53cb25df2e9a1ca54eda865f38510a945cbf520ca9e5b1\" pid:4320 exited_at:{seconds:1757550816 nanos:385649756}" Sep 11 00:33:36.393603 containerd[1555]: time="2025-09-11T00:33:36.393561145Z" level=info msg="StopContainer for \"509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9\" with timeout 2 (s)" Sep 11 00:33:36.393957 containerd[1555]: time="2025-09-11T00:33:36.393917556Z" level=info msg="Stop container \"509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9\" with signal terminated" Sep 11 00:33:36.402279 systemd-networkd[1468]: lxc_health: Link DOWN Sep 11 00:33:36.402289 systemd-networkd[1468]: lxc_health: Lost carrier Sep 11 00:33:36.402950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb-rootfs.mount: Deactivated successfully. 
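The exited_at fields in the containerd TaskExit events are plain {seconds, nanos} pairs since the Unix epoch; converting one shows it matches the surrounding journal timestamps (1757550816 is 2025-09-11 00:33:36 UTC). A tiny Go sketch of the conversion:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the TaskExit event for container c19070631fc0ec... above.
	const seconds, nanos = 1757550816, 372674647
	t := time.Unix(seconds, nanos).UTC()
	fmt.Println(t.Format(time.RFC3339Nano)) // 2025-09-11T00:33:36.372674647Z
}
```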
Sep 11 00:33:36.422253 containerd[1555]: time="2025-09-11T00:33:36.422208174Z" level=info msg="StopContainer for \"c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb\" returns successfully" Sep 11 00:33:36.423056 containerd[1555]: time="2025-09-11T00:33:36.423021857Z" level=info msg="StopPodSandbox for \"499e579d7b7871cb376887b623198e9fae1e217f42e3f4f8ccfadc3162dbfff7\"" Sep 11 00:33:36.423174 containerd[1555]: time="2025-09-11T00:33:36.423139632Z" level=info msg="Container to stop \"c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:33:36.423701 systemd[1]: cri-containerd-509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9.scope: Deactivated successfully. Sep 11 00:33:36.424059 systemd[1]: cri-containerd-509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9.scope: Consumed 6.387s CPU time, 124M memory peak, 144K read from disk, 13.3M written to disk. Sep 11 00:33:36.425023 containerd[1555]: time="2025-09-11T00:33:36.424927556Z" level=info msg="received exit event container_id:\"509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9\" id:\"509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9\" pid:3380 exited_at:{seconds:1757550816 nanos:424529556}" Sep 11 00:33:36.425323 containerd[1555]: time="2025-09-11T00:33:36.425279088Z" level=info msg="TaskExit event in podsandbox handler container_id:\"509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9\" id:\"509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9\" pid:3380 exited_at:{seconds:1757550816 nanos:424529556}" Sep 11 00:33:36.432797 systemd[1]: cri-containerd-499e579d7b7871cb376887b623198e9fae1e217f42e3f4f8ccfadc3162dbfff7.scope: Deactivated successfully. Sep 11 00:33:36.433580 containerd[1555]: time="2025-09-11T00:33:36.433502172Z" level=info msg="TaskExit event in podsandbox handler container_id:\"499e579d7b7871cb376887b623198e9fae1e217f42e3f4f8ccfadc3162dbfff7\" id:\"499e579d7b7871cb376887b623198e9fae1e217f42e3f4f8ccfadc3162dbfff7\" pid:2911 exit_status:137 exited_at:{seconds:1757550816 nanos:433246874}" Sep 11 00:33:36.450167 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9-rootfs.mount: Deactivated successfully. 
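The scope summary "Consumed 6.387s CPU time, 124M memory peak, 144K read from disk, 13.3M written to disk" is systemd reporting the container's cgroup accounting as it deactivates the scope. On a cgroup v2 host such as this one, a comparable CPU figure can be read back from the scope's cpu.stat file; the sketch below assumes a hypothetical scope path and uses only stdlib file parsing:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// cpuUsage returns total CPU time in seconds from a cgroup v2 cpu.stat file.
func cpuUsage(cgroupDir string) (float64, error) {
	data, err := os.ReadFile(cgroupDir + "/cpu.stat")
	if err != nil {
		return 0, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "usage_usec" {
			usec, err := strconv.ParseFloat(fields[1], 64)
			return usec / 1e6, err
		}
	}
	return 0, fmt.Errorf("usage_usec not found in %s/cpu.stat", cgroupDir)
}

func main() {
	// Hypothetical scope directory; the real path is derived from the unit name.
	dir := "/sys/fs/cgroup/system.slice/example.scope"
	if secs, err := cpuUsage(dir); err == nil {
		fmt.Printf("CPU time consumed: %.3fs\n", secs)
	} else {
		fmt.Fprintln(os.Stderr, err)
	}
}
```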
Sep 11 00:33:36.459137 containerd[1555]: time="2025-09-11T00:33:36.459092894Z" level=info msg="StopContainer for \"509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9\" returns successfully" Sep 11 00:33:36.459986 containerd[1555]: time="2025-09-11T00:33:36.459933038Z" level=info msg="StopPodSandbox for \"e25ecb1e5ea7cd6bbfc7b73d60c0e00febafdaa6b6982a4a344e91def9d57ab8\"" Sep 11 00:33:36.460073 containerd[1555]: time="2025-09-11T00:33:36.459999986Z" level=info msg="Container to stop \"e1eba03d55cfdc31f7a5cd1a4815bb10ca3d6ab0d8ff8d81e5be1e21e83eec90\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:33:36.460073 containerd[1555]: time="2025-09-11T00:33:36.460010797Z" level=info msg="Container to stop \"509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:33:36.460073 containerd[1555]: time="2025-09-11T00:33:36.460019443Z" level=info msg="Container to stop \"cd7bbf54a85ca7b4b6b16b2db97fc77a9a6930bde33a2b8210fc28749bdca4d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:33:36.460073 containerd[1555]: time="2025-09-11T00:33:36.460027358Z" level=info msg="Container to stop \"9926c60ebcd1e72034c8ef9eb4d7f9ebb1fa3ed5a2805d630db14f17dd28c6db\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:33:36.460073 containerd[1555]: time="2025-09-11T00:33:36.460035173Z" level=info msg="Container to stop \"c00b63910d2a7bdf80a48f15e2502bd099434a4ffacf814972c5a1832f2629dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:33:36.465092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-499e579d7b7871cb376887b623198e9fae1e217f42e3f4f8ccfadc3162dbfff7-rootfs.mount: Deactivated successfully. Sep 11 00:33:36.466778 systemd[1]: cri-containerd-e25ecb1e5ea7cd6bbfc7b73d60c0e00febafdaa6b6982a4a344e91def9d57ab8.scope: Deactivated successfully. Sep 11 00:33:36.469795 containerd[1555]: time="2025-09-11T00:33:36.469751319Z" level=info msg="shim disconnected" id=499e579d7b7871cb376887b623198e9fae1e217f42e3f4f8ccfadc3162dbfff7 namespace=k8s.io Sep 11 00:33:36.469795 containerd[1555]: time="2025-09-11T00:33:36.469791836Z" level=warning msg="cleaning up after shim disconnected" id=499e579d7b7871cb376887b623198e9fae1e217f42e3f4f8ccfadc3162dbfff7 namespace=k8s.io Sep 11 00:33:36.479021 containerd[1555]: time="2025-09-11T00:33:36.469800101Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 11 00:33:36.491290 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e25ecb1e5ea7cd6bbfc7b73d60c0e00febafdaa6b6982a4a344e91def9d57ab8-rootfs.mount: Deactivated successfully. 
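The sandbox TaskExit events in this teardown report exit_status:137, which is the usual 128-plus-signal encoding, so the pause processes were ended by SIGKILL (signal 9). A two-line Go illustration of the decoding:

```go
package main

import "fmt"

func main() {
	const exitStatus = 137 // from the TaskExit events in this teardown
	if exitStatus > 128 {
		fmt.Printf("terminated by signal %d\n", exitStatus-128) // 9 = SIGKILL
	}
}
```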
Sep 11 00:33:36.496959 containerd[1555]: time="2025-09-11T00:33:36.496889134Z" level=info msg="shim disconnected" id=e25ecb1e5ea7cd6bbfc7b73d60c0e00febafdaa6b6982a4a344e91def9d57ab8 namespace=k8s.io Sep 11 00:33:36.496959 containerd[1555]: time="2025-09-11T00:33:36.496924552Z" level=warning msg="cleaning up after shim disconnected" id=e25ecb1e5ea7cd6bbfc7b73d60c0e00febafdaa6b6982a4a344e91def9d57ab8 namespace=k8s.io Sep 11 00:33:36.496959 containerd[1555]: time="2025-09-11T00:33:36.496932698Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 11 00:33:36.499241 containerd[1555]: time="2025-09-11T00:33:36.499185400Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e25ecb1e5ea7cd6bbfc7b73d60c0e00febafdaa6b6982a4a344e91def9d57ab8\" id:\"e25ecb1e5ea7cd6bbfc7b73d60c0e00febafdaa6b6982a4a344e91def9d57ab8\" pid:2978 exit_status:137 exited_at:{seconds:1757550816 nanos:467435506}" Sep 11 00:33:36.502092 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e25ecb1e5ea7cd6bbfc7b73d60c0e00febafdaa6b6982a4a344e91def9d57ab8-shm.mount: Deactivated successfully. Sep 11 00:33:36.509632 containerd[1555]: time="2025-09-11T00:33:36.509582025Z" level=info msg="received exit event sandbox_id:\"499e579d7b7871cb376887b623198e9fae1e217f42e3f4f8ccfadc3162dbfff7\" exit_status:137 exited_at:{seconds:1757550816 nanos:433246874}" Sep 11 00:33:36.511243 containerd[1555]: time="2025-09-11T00:33:36.510017637Z" level=info msg="received exit event sandbox_id:\"e25ecb1e5ea7cd6bbfc7b73d60c0e00febafdaa6b6982a4a344e91def9d57ab8\" exit_status:137 exited_at:{seconds:1757550816 nanos:467435506}" Sep 11 00:33:36.511243 containerd[1555]: time="2025-09-11T00:33:36.510708746Z" level=info msg="TearDown network for sandbox \"e25ecb1e5ea7cd6bbfc7b73d60c0e00febafdaa6b6982a4a344e91def9d57ab8\" successfully" Sep 11 00:33:36.511243 containerd[1555]: time="2025-09-11T00:33:36.510756147Z" level=info msg="StopPodSandbox for \"e25ecb1e5ea7cd6bbfc7b73d60c0e00febafdaa6b6982a4a344e91def9d57ab8\" returns successfully" Sep 11 00:33:36.513585 containerd[1555]: time="2025-09-11T00:33:36.513390699Z" level=info msg="TearDown network for sandbox \"499e579d7b7871cb376887b623198e9fae1e217f42e3f4f8ccfadc3162dbfff7\" successfully" Sep 11 00:33:36.514118 containerd[1555]: time="2025-09-11T00:33:36.513724335Z" level=info msg="StopPodSandbox for \"499e579d7b7871cb376887b623198e9fae1e217f42e3f4f8ccfadc3162dbfff7\" returns successfully" Sep 11 00:33:36.564269 kubelet[2719]: I0911 00:33:36.564189 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-lib-modules\") pod \"2c32543b-f742-4168-a488-aa704f470137\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " Sep 11 00:33:36.564269 kubelet[2719]: I0911 00:33:36.564258 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms2s9\" (UniqueName: \"kubernetes.io/projected/66f39d01-e703-4de2-bdd6-245edf607477-kube-api-access-ms2s9\") pod \"66f39d01-e703-4de2-bdd6-245edf607477\" (UID: \"66f39d01-e703-4de2-bdd6-245edf607477\") " Sep 11 00:33:36.564269 kubelet[2719]: I0911 00:33:36.564278 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-xtables-lock\") pod \"2c32543b-f742-4168-a488-aa704f470137\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " Sep 11 00:33:36.564508 kubelet[2719]: I0911 
00:33:36.564309 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-etc-cni-netd\") pod \"2c32543b-f742-4168-a488-aa704f470137\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " Sep 11 00:33:36.564508 kubelet[2719]: I0911 00:33:36.564328 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/66f39d01-e703-4de2-bdd6-245edf607477-cilium-config-path\") pod \"66f39d01-e703-4de2-bdd6-245edf607477\" (UID: \"66f39d01-e703-4de2-bdd6-245edf607477\") " Sep 11 00:33:36.564508 kubelet[2719]: I0911 00:33:36.564347 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-host-proc-sys-kernel\") pod \"2c32543b-f742-4168-a488-aa704f470137\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " Sep 11 00:33:36.564508 kubelet[2719]: I0911 00:33:36.564364 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c32543b-f742-4168-a488-aa704f470137-clustermesh-secrets\") pod \"2c32543b-f742-4168-a488-aa704f470137\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " Sep 11 00:33:36.564508 kubelet[2719]: I0911 00:33:36.564354 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2c32543b-f742-4168-a488-aa704f470137" (UID: "2c32543b-f742-4168-a488-aa704f470137"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 00:33:36.564508 kubelet[2719]: I0911 00:33:36.564385 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w748l\" (UniqueName: \"kubernetes.io/projected/2c32543b-f742-4168-a488-aa704f470137-kube-api-access-w748l\") pod \"2c32543b-f742-4168-a488-aa704f470137\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " Sep 11 00:33:36.564653 kubelet[2719]: I0911 00:33:36.564402 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-hostproc\") pod \"2c32543b-f742-4168-a488-aa704f470137\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " Sep 11 00:33:36.564653 kubelet[2719]: I0911 00:33:36.564418 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-cni-path\") pod \"2c32543b-f742-4168-a488-aa704f470137\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " Sep 11 00:33:36.564653 kubelet[2719]: I0911 00:33:36.564435 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c32543b-f742-4168-a488-aa704f470137-hubble-tls\") pod \"2c32543b-f742-4168-a488-aa704f470137\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " Sep 11 00:33:36.564653 kubelet[2719]: I0911 00:33:36.564448 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-host-proc-sys-net\") pod \"2c32543b-f742-4168-a488-aa704f470137\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") 
" Sep 11 00:33:36.564653 kubelet[2719]: I0911 00:33:36.564470 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c32543b-f742-4168-a488-aa704f470137-cilium-config-path\") pod \"2c32543b-f742-4168-a488-aa704f470137\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " Sep 11 00:33:36.564653 kubelet[2719]: I0911 00:33:36.564487 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-cilium-cgroup\") pod \"2c32543b-f742-4168-a488-aa704f470137\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " Sep 11 00:33:36.564862 kubelet[2719]: I0911 00:33:36.564501 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-bpf-maps\") pod \"2c32543b-f742-4168-a488-aa704f470137\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " Sep 11 00:33:36.564862 kubelet[2719]: I0911 00:33:36.564517 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-cilium-run\") pod \"2c32543b-f742-4168-a488-aa704f470137\" (UID: \"2c32543b-f742-4168-a488-aa704f470137\") " Sep 11 00:33:36.564862 kubelet[2719]: I0911 00:33:36.564570 2719 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 11 00:33:36.564862 kubelet[2719]: I0911 00:33:36.564616 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2c32543b-f742-4168-a488-aa704f470137" (UID: "2c32543b-f742-4168-a488-aa704f470137"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 00:33:36.564862 kubelet[2719]: I0911 00:33:36.564646 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2c32543b-f742-4168-a488-aa704f470137" (UID: "2c32543b-f742-4168-a488-aa704f470137"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 00:33:36.564862 kubelet[2719]: I0911 00:33:36.564649 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2c32543b-f742-4168-a488-aa704f470137" (UID: "2c32543b-f742-4168-a488-aa704f470137"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 00:33:36.565007 kubelet[2719]: I0911 00:33:36.564676 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2c32543b-f742-4168-a488-aa704f470137" (UID: "2c32543b-f742-4168-a488-aa704f470137"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 00:33:36.566952 kubelet[2719]: I0911 00:33:36.566900 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2c32543b-f742-4168-a488-aa704f470137" (UID: "2c32543b-f742-4168-a488-aa704f470137"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 00:33:36.567887 kubelet[2719]: I0911 00:33:36.567779 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-hostproc" (OuterVolumeSpecName: "hostproc") pod "2c32543b-f742-4168-a488-aa704f470137" (UID: "2c32543b-f742-4168-a488-aa704f470137"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 00:33:36.568928 kubelet[2719]: I0911 00:33:36.568890 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66f39d01-e703-4de2-bdd6-245edf607477-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "66f39d01-e703-4de2-bdd6-245edf607477" (UID: "66f39d01-e703-4de2-bdd6-245edf607477"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 11 00:33:36.569260 kubelet[2719]: I0911 00:33:36.569243 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-cni-path" (OuterVolumeSpecName: "cni-path") pod "2c32543b-f742-4168-a488-aa704f470137" (UID: "2c32543b-f742-4168-a488-aa704f470137"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 00:33:36.569369 kubelet[2719]: I0911 00:33:36.569355 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2c32543b-f742-4168-a488-aa704f470137" (UID: "2c32543b-f742-4168-a488-aa704f470137"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 00:33:36.569449 kubelet[2719]: I0911 00:33:36.569435 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2c32543b-f742-4168-a488-aa704f470137" (UID: "2c32543b-f742-4168-a488-aa704f470137"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 00:33:36.569980 kubelet[2719]: I0911 00:33:36.569944 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c32543b-f742-4168-a488-aa704f470137-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2c32543b-f742-4168-a488-aa704f470137" (UID: "2c32543b-f742-4168-a488-aa704f470137"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 11 00:33:36.571353 kubelet[2719]: I0911 00:33:36.571318 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c32543b-f742-4168-a488-aa704f470137-kube-api-access-w748l" (OuterVolumeSpecName: "kube-api-access-w748l") pod "2c32543b-f742-4168-a488-aa704f470137" (UID: "2c32543b-f742-4168-a488-aa704f470137"). InnerVolumeSpecName "kube-api-access-w748l". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 11 00:33:36.571973 kubelet[2719]: I0911 00:33:36.571931 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66f39d01-e703-4de2-bdd6-245edf607477-kube-api-access-ms2s9" (OuterVolumeSpecName: "kube-api-access-ms2s9") pod "66f39d01-e703-4de2-bdd6-245edf607477" (UID: "66f39d01-e703-4de2-bdd6-245edf607477"). InnerVolumeSpecName "kube-api-access-ms2s9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 11 00:33:36.572113 kubelet[2719]: I0911 00:33:36.571987 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c32543b-f742-4168-a488-aa704f470137-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2c32543b-f742-4168-a488-aa704f470137" (UID: "2c32543b-f742-4168-a488-aa704f470137"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 11 00:33:36.573092 kubelet[2719]: I0911 00:33:36.573056 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c32543b-f742-4168-a488-aa704f470137-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2c32543b-f742-4168-a488-aa704f470137" (UID: "2c32543b-f742-4168-a488-aa704f470137"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 11 00:33:36.665307 kubelet[2719]: I0911 00:33:36.665268 2719 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 11 00:33:36.665383 kubelet[2719]: I0911 00:33:36.665294 2719 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 11 00:33:36.665383 kubelet[2719]: I0911 00:33:36.665322 2719 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c32543b-f742-4168-a488-aa704f470137-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 11 00:33:36.665383 kubelet[2719]: I0911 00:33:36.665332 2719 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 11 00:33:36.665383 kubelet[2719]: I0911 00:33:36.665341 2719 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c32543b-f742-4168-a488-aa704f470137-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 11 00:33:36.665383 kubelet[2719]: I0911 00:33:36.665350 2719 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w748l\" (UniqueName: \"kubernetes.io/projected/2c32543b-f742-4168-a488-aa704f470137-kube-api-access-w748l\") on node \"localhost\" DevicePath \"\"" Sep 11 00:33:36.665383 kubelet[2719]: I0911 00:33:36.665358 2719 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 11 00:33:36.665383 kubelet[2719]: I0911 00:33:36.665366 2719 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-bpf-maps\") on node \"localhost\" 
DevicePath \"\"" Sep 11 00:33:36.665383 kubelet[2719]: I0911 00:33:36.665373 2719 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 11 00:33:36.665583 kubelet[2719]: I0911 00:33:36.665381 2719 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ms2s9\" (UniqueName: \"kubernetes.io/projected/66f39d01-e703-4de2-bdd6-245edf607477-kube-api-access-ms2s9\") on node \"localhost\" DevicePath \"\"" Sep 11 00:33:36.665583 kubelet[2719]: I0911 00:33:36.665391 2719 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 11 00:33:36.665583 kubelet[2719]: I0911 00:33:36.665399 2719 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 11 00:33:36.665583 kubelet[2719]: I0911 00:33:36.665407 2719 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/66f39d01-e703-4de2-bdd6-245edf607477-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 11 00:33:36.665583 kubelet[2719]: I0911 00:33:36.665417 2719 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c32543b-f742-4168-a488-aa704f470137-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 11 00:33:36.665583 kubelet[2719]: I0911 00:33:36.665425 2719 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c32543b-f742-4168-a488-aa704f470137-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 11 00:33:37.267816 kubelet[2719]: E0911 00:33:37.267772 2719 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 11 00:33:37.402487 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-499e579d7b7871cb376887b623198e9fae1e217f42e3f4f8ccfadc3162dbfff7-shm.mount: Deactivated successfully. Sep 11 00:33:37.404141 kubelet[2719]: I0911 00:33:37.403466 2719 scope.go:117] "RemoveContainer" containerID="509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9" Sep 11 00:33:37.402618 systemd[1]: var-lib-kubelet-pods-66f39d01\x2de703\x2d4de2\x2dbdd6\x2d245edf607477-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dms2s9.mount: Deactivated successfully. Sep 11 00:33:37.402697 systemd[1]: var-lib-kubelet-pods-2c32543b\x2df742\x2d4168\x2da488\x2daa704f470137-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw748l.mount: Deactivated successfully. Sep 11 00:33:37.402773 systemd[1]: var-lib-kubelet-pods-2c32543b\x2df742\x2d4168\x2da488\x2daa704f470137-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 11 00:33:37.402854 systemd[1]: var-lib-kubelet-pods-2c32543b\x2df742\x2d4168\x2da488\x2daa704f470137-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 11 00:33:37.405400 containerd[1555]: time="2025-09-11T00:33:37.405351404Z" level=info msg="RemoveContainer for \"509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9\"" Sep 11 00:33:37.411812 containerd[1555]: time="2025-09-11T00:33:37.411706736Z" level=info msg="RemoveContainer for \"509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9\" returns successfully" Sep 11 00:33:37.413879 systemd[1]: Removed slice kubepods-besteffort-pod66f39d01_e703_4de2_bdd6_245edf607477.slice - libcontainer container kubepods-besteffort-pod66f39d01_e703_4de2_bdd6_245edf607477.slice. Sep 11 00:33:37.416872 kubelet[2719]: I0911 00:33:37.416841 2719 scope.go:117] "RemoveContainer" containerID="e1eba03d55cfdc31f7a5cd1a4815bb10ca3d6ab0d8ff8d81e5be1e21e83eec90" Sep 11 00:33:37.416976 systemd[1]: Removed slice kubepods-burstable-pod2c32543b_f742_4168_a488_aa704f470137.slice - libcontainer container kubepods-burstable-pod2c32543b_f742_4168_a488_aa704f470137.slice. Sep 11 00:33:37.417080 systemd[1]: kubepods-burstable-pod2c32543b_f742_4168_a488_aa704f470137.slice: Consumed 6.502s CPU time, 124.3M memory peak, 164K read from disk, 13.3M written to disk. Sep 11 00:33:37.418625 containerd[1555]: time="2025-09-11T00:33:37.418591898Z" level=info msg="RemoveContainer for \"e1eba03d55cfdc31f7a5cd1a4815bb10ca3d6ab0d8ff8d81e5be1e21e83eec90\"" Sep 11 00:33:37.423089 containerd[1555]: time="2025-09-11T00:33:37.423054878Z" level=info msg="RemoveContainer for \"e1eba03d55cfdc31f7a5cd1a4815bb10ca3d6ab0d8ff8d81e5be1e21e83eec90\" returns successfully" Sep 11 00:33:37.423252 kubelet[2719]: I0911 00:33:37.423216 2719 scope.go:117] "RemoveContainer" containerID="c00b63910d2a7bdf80a48f15e2502bd099434a4ffacf814972c5a1832f2629dd" Sep 11 00:33:37.425548 containerd[1555]: time="2025-09-11T00:33:37.425475519Z" level=info msg="RemoveContainer for \"c00b63910d2a7bdf80a48f15e2502bd099434a4ffacf814972c5a1832f2629dd\"" Sep 11 00:33:37.430317 containerd[1555]: time="2025-09-11T00:33:37.430268588Z" level=info msg="RemoveContainer for \"c00b63910d2a7bdf80a48f15e2502bd099434a4ffacf814972c5a1832f2629dd\" returns successfully" Sep 11 00:33:37.430582 kubelet[2719]: I0911 00:33:37.430513 2719 scope.go:117] "RemoveContainer" containerID="9926c60ebcd1e72034c8ef9eb4d7f9ebb1fa3ed5a2805d630db14f17dd28c6db" Sep 11 00:33:37.431888 containerd[1555]: time="2025-09-11T00:33:37.431849967Z" level=info msg="RemoveContainer for \"9926c60ebcd1e72034c8ef9eb4d7f9ebb1fa3ed5a2805d630db14f17dd28c6db\"" Sep 11 00:33:37.451725 containerd[1555]: time="2025-09-11T00:33:37.451680571Z" level=info msg="RemoveContainer for \"9926c60ebcd1e72034c8ef9eb4d7f9ebb1fa3ed5a2805d630db14f17dd28c6db\" returns successfully" Sep 11 00:33:37.451942 kubelet[2719]: I0911 00:33:37.451907 2719 scope.go:117] "RemoveContainer" containerID="cd7bbf54a85ca7b4b6b16b2db97fc77a9a6930bde33a2b8210fc28749bdca4d7" Sep 11 00:33:37.459623 containerd[1555]: time="2025-09-11T00:33:37.459575500Z" level=info msg="RemoveContainer for \"cd7bbf54a85ca7b4b6b16b2db97fc77a9a6930bde33a2b8210fc28749bdca4d7\"" Sep 11 00:33:37.470792 containerd[1555]: time="2025-09-11T00:33:37.470749580Z" level=info msg="RemoveContainer for \"cd7bbf54a85ca7b4b6b16b2db97fc77a9a6930bde33a2b8210fc28749bdca4d7\" returns successfully" Sep 11 00:33:37.471014 kubelet[2719]: I0911 00:33:37.470978 2719 scope.go:117] "RemoveContainer" containerID="509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9" Sep 11 00:33:37.471754 containerd[1555]: time="2025-09-11T00:33:37.471142621Z" level=error msg="ContainerStatus for 
\"509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9\": not found" Sep 11 00:33:37.471901 kubelet[2719]: E0911 00:33:37.471865 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9\": not found" containerID="509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9" Sep 11 00:33:37.471973 kubelet[2719]: I0911 00:33:37.471897 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9"} err="failed to get container status \"509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"509658e873df135b24c9f51e9be327ffe5361208867f44a7fd4be81a80cf47b9\": not found" Sep 11 00:33:37.471973 kubelet[2719]: I0911 00:33:37.471967 2719 scope.go:117] "RemoveContainer" containerID="e1eba03d55cfdc31f7a5cd1a4815bb10ca3d6ab0d8ff8d81e5be1e21e83eec90" Sep 11 00:33:37.472242 containerd[1555]: time="2025-09-11T00:33:37.472198577Z" level=error msg="ContainerStatus for \"e1eba03d55cfdc31f7a5cd1a4815bb10ca3d6ab0d8ff8d81e5be1e21e83eec90\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e1eba03d55cfdc31f7a5cd1a4815bb10ca3d6ab0d8ff8d81e5be1e21e83eec90\": not found" Sep 11 00:33:37.472380 kubelet[2719]: E0911 00:33:37.472351 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e1eba03d55cfdc31f7a5cd1a4815bb10ca3d6ab0d8ff8d81e5be1e21e83eec90\": not found" containerID="e1eba03d55cfdc31f7a5cd1a4815bb10ca3d6ab0d8ff8d81e5be1e21e83eec90" Sep 11 00:33:37.472380 kubelet[2719]: I0911 00:33:37.472372 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e1eba03d55cfdc31f7a5cd1a4815bb10ca3d6ab0d8ff8d81e5be1e21e83eec90"} err="failed to get container status \"e1eba03d55cfdc31f7a5cd1a4815bb10ca3d6ab0d8ff8d81e5be1e21e83eec90\": rpc error: code = NotFound desc = an error occurred when try to find container \"e1eba03d55cfdc31f7a5cd1a4815bb10ca3d6ab0d8ff8d81e5be1e21e83eec90\": not found" Sep 11 00:33:37.472448 kubelet[2719]: I0911 00:33:37.472387 2719 scope.go:117] "RemoveContainer" containerID="c00b63910d2a7bdf80a48f15e2502bd099434a4ffacf814972c5a1832f2629dd" Sep 11 00:33:37.472602 containerd[1555]: time="2025-09-11T00:33:37.472555408Z" level=error msg="ContainerStatus for \"c00b63910d2a7bdf80a48f15e2502bd099434a4ffacf814972c5a1832f2629dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c00b63910d2a7bdf80a48f15e2502bd099434a4ffacf814972c5a1832f2629dd\": not found" Sep 11 00:33:37.472749 kubelet[2719]: E0911 00:33:37.472719 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c00b63910d2a7bdf80a48f15e2502bd099434a4ffacf814972c5a1832f2629dd\": not found" containerID="c00b63910d2a7bdf80a48f15e2502bd099434a4ffacf814972c5a1832f2629dd" Sep 11 00:33:37.472785 kubelet[2719]: I0911 00:33:37.472754 2719 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"c00b63910d2a7bdf80a48f15e2502bd099434a4ffacf814972c5a1832f2629dd"} err="failed to get container status \"c00b63910d2a7bdf80a48f15e2502bd099434a4ffacf814972c5a1832f2629dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"c00b63910d2a7bdf80a48f15e2502bd099434a4ffacf814972c5a1832f2629dd\": not found" Sep 11 00:33:37.472812 kubelet[2719]: I0911 00:33:37.472784 2719 scope.go:117] "RemoveContainer" containerID="9926c60ebcd1e72034c8ef9eb4d7f9ebb1fa3ed5a2805d630db14f17dd28c6db" Sep 11 00:33:37.472991 containerd[1555]: time="2025-09-11T00:33:37.472955681Z" level=error msg="ContainerStatus for \"9926c60ebcd1e72034c8ef9eb4d7f9ebb1fa3ed5a2805d630db14f17dd28c6db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9926c60ebcd1e72034c8ef9eb4d7f9ebb1fa3ed5a2805d630db14f17dd28c6db\": not found" Sep 11 00:33:37.473109 kubelet[2719]: E0911 00:33:37.473081 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9926c60ebcd1e72034c8ef9eb4d7f9ebb1fa3ed5a2805d630db14f17dd28c6db\": not found" containerID="9926c60ebcd1e72034c8ef9eb4d7f9ebb1fa3ed5a2805d630db14f17dd28c6db" Sep 11 00:33:37.473186 kubelet[2719]: I0911 00:33:37.473141 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9926c60ebcd1e72034c8ef9eb4d7f9ebb1fa3ed5a2805d630db14f17dd28c6db"} err="failed to get container status \"9926c60ebcd1e72034c8ef9eb4d7f9ebb1fa3ed5a2805d630db14f17dd28c6db\": rpc error: code = NotFound desc = an error occurred when try to find container \"9926c60ebcd1e72034c8ef9eb4d7f9ebb1fa3ed5a2805d630db14f17dd28c6db\": not found" Sep 11 00:33:37.473186 kubelet[2719]: I0911 00:33:37.473182 2719 scope.go:117] "RemoveContainer" containerID="cd7bbf54a85ca7b4b6b16b2db97fc77a9a6930bde33a2b8210fc28749bdca4d7" Sep 11 00:33:37.473432 containerd[1555]: time="2025-09-11T00:33:37.473393667Z" level=error msg="ContainerStatus for \"cd7bbf54a85ca7b4b6b16b2db97fc77a9a6930bde33a2b8210fc28749bdca4d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cd7bbf54a85ca7b4b6b16b2db97fc77a9a6930bde33a2b8210fc28749bdca4d7\": not found" Sep 11 00:33:37.473683 kubelet[2719]: E0911 00:33:37.473562 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cd7bbf54a85ca7b4b6b16b2db97fc77a9a6930bde33a2b8210fc28749bdca4d7\": not found" containerID="cd7bbf54a85ca7b4b6b16b2db97fc77a9a6930bde33a2b8210fc28749bdca4d7" Sep 11 00:33:37.473683 kubelet[2719]: I0911 00:33:37.473601 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cd7bbf54a85ca7b4b6b16b2db97fc77a9a6930bde33a2b8210fc28749bdca4d7"} err="failed to get container status \"cd7bbf54a85ca7b4b6b16b2db97fc77a9a6930bde33a2b8210fc28749bdca4d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"cd7bbf54a85ca7b4b6b16b2db97fc77a9a6930bde33a2b8210fc28749bdca4d7\": not found" Sep 11 00:33:37.473683 kubelet[2719]: I0911 00:33:37.473618 2719 scope.go:117] "RemoveContainer" containerID="c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb" Sep 11 00:33:37.474957 containerd[1555]: time="2025-09-11T00:33:37.474935480Z" level=info msg="RemoveContainer for \"c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb\"" Sep 11 
00:33:37.478253 containerd[1555]: time="2025-09-11T00:33:37.478214199Z" level=info msg="RemoveContainer for \"c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb\" returns successfully" Sep 11 00:33:37.478391 kubelet[2719]: I0911 00:33:37.478367 2719 scope.go:117] "RemoveContainer" containerID="c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb" Sep 11 00:33:37.478563 containerd[1555]: time="2025-09-11T00:33:37.478534090Z" level=error msg="ContainerStatus for \"c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb\": not found" Sep 11 00:33:37.478763 kubelet[2719]: E0911 00:33:37.478724 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb\": not found" containerID="c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb" Sep 11 00:33:37.478867 kubelet[2719]: I0911 00:33:37.478769 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb"} err="failed to get container status \"c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb\": rpc error: code = NotFound desc = an error occurred when try to find container \"c19070631fc0ec772a43bb850c9aa67bcbe1dddb9be66090a7f34b94bb0549bb\": not found" Sep 11 00:33:38.215838 kubelet[2719]: I0911 00:33:38.215784 2719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c32543b-f742-4168-a488-aa704f470137" path="/var/lib/kubelet/pods/2c32543b-f742-4168-a488-aa704f470137/volumes" Sep 11 00:33:38.216651 kubelet[2719]: I0911 00:33:38.216619 2719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66f39d01-e703-4de2-bdd6-245edf607477" path="/var/lib/kubelet/pods/66f39d01-e703-4de2-bdd6-245edf607477/volumes" Sep 11 00:33:38.316756 sshd[4298]: Connection closed by 10.0.0.1 port 41078 Sep 11 00:33:38.317260 sshd-session[4296]: pam_unix(sshd:session): session closed for user core Sep 11 00:33:38.326209 systemd[1]: sshd@24-10.0.0.151:22-10.0.0.1:41078.service: Deactivated successfully. Sep 11 00:33:38.328292 systemd[1]: session-25.scope: Deactivated successfully. Sep 11 00:33:38.329188 systemd-logind[1539]: Session 25 logged out. Waiting for processes to exit. Sep 11 00:33:38.332403 systemd[1]: Started sshd@25-10.0.0.151:22-10.0.0.1:41082.service - OpenSSH per-connection server daemon (10.0.0.1:41082). Sep 11 00:33:38.332992 systemd-logind[1539]: Removed session 25. Sep 11 00:33:38.382841 sshd[4452]: Accepted publickey for core from 10.0.0.1 port 41082 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:33:38.384532 sshd-session[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:38.389053 systemd-logind[1539]: New session 26 of user core. Sep 11 00:33:38.400438 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 11 00:33:39.005333 sshd[4454]: Connection closed by 10.0.0.1 port 41082 Sep 11 00:33:39.006786 sshd-session[4452]: pam_unix(sshd:session): session closed for user core Sep 11 00:33:39.017031 systemd[1]: sshd@25-10.0.0.151:22-10.0.0.1:41082.service: Deactivated successfully. Sep 11 00:33:39.021452 systemd[1]: session-26.scope: Deactivated successfully. 
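Editor's note: the block above shows the kubelet re-querying ContainerStatus for IDs it has just removed and getting gRPC NotFound back from the runtime; after a successful RemoveContainer that is the expected outcome, and a CRI client normally treats it as benign rather than as a failure. A minimal sketch of that pattern, assuming the standard grpc-go status/codes packages (the helper and the fake status function are hypothetical illustrations, not kubelet code):

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeIfPresent treats gRPC NotFound as success so that container removal stays
// idempotent. statusFn stands in for a CRI ContainerStatus call in this sketch.
func removeIfPresent(id string, statusFn func(id string) error) error {
	err := statusFn(id)
	if err == nil {
		return nil // runtime still knows the container
	}
	if status.Code(err) == codes.NotFound {
		// Mirrors the log above: the ID is already gone, nothing left to do.
		return nil
	}
	return fmt.Errorf("container status for %s: %w", id, err)
}

func main() {
	notFound := func(string) error {
		return status.Error(codes.NotFound, "container not found")
	}
	fmt.Println(removeIfPresent("509658e873df", notFound)) // <nil>

	broken := func(string) error { return errors.New("runtime unavailable") }
	fmt.Println(removeIfPresent("509658e873df", broken)) // real error is surfaced
}
```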
Sep 11 00:33:39.022935 kubelet[2719]: I0911 00:33:39.022892 2719 memory_manager.go:355] "RemoveStaleState removing state" podUID="2c32543b-f742-4168-a488-aa704f470137" containerName="cilium-agent" Sep 11 00:33:39.022935 kubelet[2719]: I0911 00:33:39.022924 2719 memory_manager.go:355] "RemoveStaleState removing state" podUID="66f39d01-e703-4de2-bdd6-245edf607477" containerName="cilium-operator" Sep 11 00:33:39.024392 systemd-logind[1539]: Session 26 logged out. Waiting for processes to exit. Sep 11 00:33:39.027538 systemd[1]: Started sshd@26-10.0.0.151:22-10.0.0.1:41092.service - OpenSSH per-connection server daemon (10.0.0.1:41092). Sep 11 00:33:39.032362 systemd-logind[1539]: Removed session 26. Sep 11 00:33:39.042346 systemd[1]: Created slice kubepods-burstable-podb382527a_9bd8_4db8_82b5_8c6af5e685c1.slice - libcontainer container kubepods-burstable-podb382527a_9bd8_4db8_82b5_8c6af5e685c1.slice. Sep 11 00:33:39.079382 kubelet[2719]: I0911 00:33:39.079334 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b382527a-9bd8-4db8-82b5-8c6af5e685c1-cilium-run\") pod \"cilium-clctp\" (UID: \"b382527a-9bd8-4db8-82b5-8c6af5e685c1\") " pod="kube-system/cilium-clctp" Sep 11 00:33:39.079382 kubelet[2719]: I0911 00:33:39.079375 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b382527a-9bd8-4db8-82b5-8c6af5e685c1-bpf-maps\") pod \"cilium-clctp\" (UID: \"b382527a-9bd8-4db8-82b5-8c6af5e685c1\") " pod="kube-system/cilium-clctp" Sep 11 00:33:39.079541 kubelet[2719]: I0911 00:33:39.079396 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b382527a-9bd8-4db8-82b5-8c6af5e685c1-hostproc\") pod \"cilium-clctp\" (UID: \"b382527a-9bd8-4db8-82b5-8c6af5e685c1\") " pod="kube-system/cilium-clctp" Sep 11 00:33:39.079541 kubelet[2719]: I0911 00:33:39.079414 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b382527a-9bd8-4db8-82b5-8c6af5e685c1-etc-cni-netd\") pod \"cilium-clctp\" (UID: \"b382527a-9bd8-4db8-82b5-8c6af5e685c1\") " pod="kube-system/cilium-clctp" Sep 11 00:33:39.079541 kubelet[2719]: I0911 00:33:39.079430 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b382527a-9bd8-4db8-82b5-8c6af5e685c1-xtables-lock\") pod \"cilium-clctp\" (UID: \"b382527a-9bd8-4db8-82b5-8c6af5e685c1\") " pod="kube-system/cilium-clctp" Sep 11 00:33:39.079541 kubelet[2719]: I0911 00:33:39.079445 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b382527a-9bd8-4db8-82b5-8c6af5e685c1-host-proc-sys-kernel\") pod \"cilium-clctp\" (UID: \"b382527a-9bd8-4db8-82b5-8c6af5e685c1\") " pod="kube-system/cilium-clctp" Sep 11 00:33:39.079541 kubelet[2719]: I0911 00:33:39.079463 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b382527a-9bd8-4db8-82b5-8c6af5e685c1-hubble-tls\") pod \"cilium-clctp\" (UID: \"b382527a-9bd8-4db8-82b5-8c6af5e685c1\") " pod="kube-system/cilium-clctp" Sep 11 00:33:39.079710 kubelet[2719]: I0911 
00:33:39.079559 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b382527a-9bd8-4db8-82b5-8c6af5e685c1-cni-path\") pod \"cilium-clctp\" (UID: \"b382527a-9bd8-4db8-82b5-8c6af5e685c1\") " pod="kube-system/cilium-clctp" Sep 11 00:33:39.079710 kubelet[2719]: I0911 00:33:39.079597 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b382527a-9bd8-4db8-82b5-8c6af5e685c1-lib-modules\") pod \"cilium-clctp\" (UID: \"b382527a-9bd8-4db8-82b5-8c6af5e685c1\") " pod="kube-system/cilium-clctp" Sep 11 00:33:39.079710 kubelet[2719]: I0911 00:33:39.079646 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6gzz\" (UniqueName: \"kubernetes.io/projected/b382527a-9bd8-4db8-82b5-8c6af5e685c1-kube-api-access-f6gzz\") pod \"cilium-clctp\" (UID: \"b382527a-9bd8-4db8-82b5-8c6af5e685c1\") " pod="kube-system/cilium-clctp" Sep 11 00:33:39.079710 kubelet[2719]: I0911 00:33:39.079665 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b382527a-9bd8-4db8-82b5-8c6af5e685c1-cilium-cgroup\") pod \"cilium-clctp\" (UID: \"b382527a-9bd8-4db8-82b5-8c6af5e685c1\") " pod="kube-system/cilium-clctp" Sep 11 00:33:39.079710 kubelet[2719]: I0911 00:33:39.079689 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b382527a-9bd8-4db8-82b5-8c6af5e685c1-host-proc-sys-net\") pod \"cilium-clctp\" (UID: \"b382527a-9bd8-4db8-82b5-8c6af5e685c1\") " pod="kube-system/cilium-clctp" Sep 11 00:33:39.079710 kubelet[2719]: I0911 00:33:39.079712 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b382527a-9bd8-4db8-82b5-8c6af5e685c1-clustermesh-secrets\") pod \"cilium-clctp\" (UID: \"b382527a-9bd8-4db8-82b5-8c6af5e685c1\") " pod="kube-system/cilium-clctp" Sep 11 00:33:39.079838 kubelet[2719]: I0911 00:33:39.079729 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b382527a-9bd8-4db8-82b5-8c6af5e685c1-cilium-config-path\") pod \"cilium-clctp\" (UID: \"b382527a-9bd8-4db8-82b5-8c6af5e685c1\") " pod="kube-system/cilium-clctp" Sep 11 00:33:39.079838 kubelet[2719]: I0911 00:33:39.079748 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b382527a-9bd8-4db8-82b5-8c6af5e685c1-cilium-ipsec-secrets\") pod \"cilium-clctp\" (UID: \"b382527a-9bd8-4db8-82b5-8c6af5e685c1\") " pod="kube-system/cilium-clctp" Sep 11 00:33:39.079884 sshd[4466]: Accepted publickey for core from 10.0.0.1 port 41092 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:33:39.081334 sshd-session[4466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:39.085998 systemd-logind[1539]: New session 27 of user core. Sep 11 00:33:39.096455 systemd[1]: Started session-27.scope - Session 27 of User core. 
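Editor's note: the reconciler lines above attach each of the new cilium-clctp pod's volumes under a UniqueName of the form plugin/podUID-volumeName, with host-path, projected, secret and configmap plugins all appearing. A small sketch of that naming convention using names taken directly from the log (the helper itself is illustrative, not kubelet code):

```go
package main

import "fmt"

// uniqueVolumeName mirrors the UniqueName strings in the reconciler log lines above:
// "<plugin>/<podUID>-<volumeName>".
func uniqueVolumeName(plugin, podUID, volume string) string {
	return fmt.Sprintf("%s/%s-%s", plugin, podUID, volume)
}

func main() {
	podUID := "b382527a-9bd8-4db8-82b5-8c6af5e685c1"
	for _, v := range []struct{ plugin, name string }{
		{"kubernetes.io/host-path", "cilium-run"},
		{"kubernetes.io/host-path", "bpf-maps"},
		{"kubernetes.io/projected", "hubble-tls"},
		{"kubernetes.io/secret", "clustermesh-secrets"},
		{"kubernetes.io/configmap", "cilium-config-path"},
	} {
		fmt.Println(uniqueVolumeName(v.plugin, podUID, v.name))
	}
}
```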
Sep 11 00:33:39.147077 sshd[4468]: Connection closed by 10.0.0.1 port 41092 Sep 11 00:33:39.147432 sshd-session[4466]: pam_unix(sshd:session): session closed for user core Sep 11 00:33:39.156938 systemd[1]: sshd@26-10.0.0.151:22-10.0.0.1:41092.service: Deactivated successfully. Sep 11 00:33:39.159135 systemd[1]: session-27.scope: Deactivated successfully. Sep 11 00:33:39.159912 systemd-logind[1539]: Session 27 logged out. Waiting for processes to exit. Sep 11 00:33:39.163146 systemd[1]: Started sshd@27-10.0.0.151:22-10.0.0.1:41102.service - OpenSSH per-connection server daemon (10.0.0.1:41102). Sep 11 00:33:39.163947 systemd-logind[1539]: Removed session 27. Sep 11 00:33:39.221522 sshd[4476]: Accepted publickey for core from 10.0.0.1 port 41102 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:33:39.222953 sshd-session[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:39.227392 systemd-logind[1539]: New session 28 of user core. Sep 11 00:33:39.243430 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 11 00:33:39.346277 kubelet[2719]: E0911 00:33:39.346124 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:39.347352 containerd[1555]: time="2025-09-11T00:33:39.347279944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-clctp,Uid:b382527a-9bd8-4db8-82b5-8c6af5e685c1,Namespace:kube-system,Attempt:0,}" Sep 11 00:33:39.364686 containerd[1555]: time="2025-09-11T00:33:39.364626858Z" level=info msg="connecting to shim 6c594bd91d042bcc408dbfeb986f3eba40c58230877fed0cce0995cebcaad40d" address="unix:///run/containerd/s/ad19122d2ac35bf88ce45d82bf1fe39065f4a39d6baaa882097e986abb9b5617" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:33:39.394465 systemd[1]: Started cri-containerd-6c594bd91d042bcc408dbfeb986f3eba40c58230877fed0cce0995cebcaad40d.scope - libcontainer container 6c594bd91d042bcc408dbfeb986f3eba40c58230877fed0cce0995cebcaad40d. 
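Editor's note: the RunPodSandbox entry above is the kubelet's CRI call into containerd, which then launches a shim reachable over the unix address shown and a matching cri-containerd-&lt;sandboxID&gt;.scope unit on the systemd side. A minimal sketch of issuing the same call directly against the CRI v1 API; the socket path and the stripped-down sandbox config are assumptions for illustration, and error handling is trimmed:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default containerd CRI endpoint; adjust for the host in question.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	// Metadata mirrors the RunPodSandbox log line above; all other fields are left at defaults.
	resp, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "cilium-clctp",
				Uid:       "b382527a-9bd8-4db8-82b5-8c6af5e685c1",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId)
}
```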
Sep 11 00:33:39.420633 containerd[1555]: time="2025-09-11T00:33:39.420564329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-clctp,Uid:b382527a-9bd8-4db8-82b5-8c6af5e685c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c594bd91d042bcc408dbfeb986f3eba40c58230877fed0cce0995cebcaad40d\"" Sep 11 00:33:39.421289 kubelet[2719]: E0911 00:33:39.421243 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:39.423377 containerd[1555]: time="2025-09-11T00:33:39.423349050Z" level=info msg="CreateContainer within sandbox \"6c594bd91d042bcc408dbfeb986f3eba40c58230877fed0cce0995cebcaad40d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 11 00:33:39.431750 containerd[1555]: time="2025-09-11T00:33:39.431691089Z" level=info msg="Container 36e31be5e6da3e4b95a75ca2f352cfa81ba81cd2a4fe259bd81e0142ce62cd84: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:39.440052 containerd[1555]: time="2025-09-11T00:33:39.440011829Z" level=info msg="CreateContainer within sandbox \"6c594bd91d042bcc408dbfeb986f3eba40c58230877fed0cce0995cebcaad40d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"36e31be5e6da3e4b95a75ca2f352cfa81ba81cd2a4fe259bd81e0142ce62cd84\"" Sep 11 00:33:39.440517 containerd[1555]: time="2025-09-11T00:33:39.440486223Z" level=info msg="StartContainer for \"36e31be5e6da3e4b95a75ca2f352cfa81ba81cd2a4fe259bd81e0142ce62cd84\"" Sep 11 00:33:39.441459 containerd[1555]: time="2025-09-11T00:33:39.441436024Z" level=info msg="connecting to shim 36e31be5e6da3e4b95a75ca2f352cfa81ba81cd2a4fe259bd81e0142ce62cd84" address="unix:///run/containerd/s/ad19122d2ac35bf88ce45d82bf1fe39065f4a39d6baaa882097e986abb9b5617" protocol=ttrpc version=3 Sep 11 00:33:39.463437 systemd[1]: Started cri-containerd-36e31be5e6da3e4b95a75ca2f352cfa81ba81cd2a4fe259bd81e0142ce62cd84.scope - libcontainer container 36e31be5e6da3e4b95a75ca2f352cfa81ba81cd2a4fe259bd81e0142ce62cd84. Sep 11 00:33:39.495389 containerd[1555]: time="2025-09-11T00:33:39.495343023Z" level=info msg="StartContainer for \"36e31be5e6da3e4b95a75ca2f352cfa81ba81cd2a4fe259bd81e0142ce62cd84\" returns successfully" Sep 11 00:33:39.504056 systemd[1]: cri-containerd-36e31be5e6da3e4b95a75ca2f352cfa81ba81cd2a4fe259bd81e0142ce62cd84.scope: Deactivated successfully. 
Sep 11 00:33:39.505499 containerd[1555]: time="2025-09-11T00:33:39.505455179Z" level=info msg="received exit event container_id:\"36e31be5e6da3e4b95a75ca2f352cfa81ba81cd2a4fe259bd81e0142ce62cd84\" id:\"36e31be5e6da3e4b95a75ca2f352cfa81ba81cd2a4fe259bd81e0142ce62cd84\" pid:4547 exited_at:{seconds:1757550819 nanos:505047752}" Sep 11 00:33:39.505609 containerd[1555]: time="2025-09-11T00:33:39.505471130Z" level=info msg="TaskExit event in podsandbox handler container_id:\"36e31be5e6da3e4b95a75ca2f352cfa81ba81cd2a4fe259bd81e0142ce62cd84\" id:\"36e31be5e6da3e4b95a75ca2f352cfa81ba81cd2a4fe259bd81e0142ce62cd84\" pid:4547 exited_at:{seconds:1757550819 nanos:505047752}" Sep 11 00:33:40.416820 kubelet[2719]: E0911 00:33:40.416781 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:40.419428 containerd[1555]: time="2025-09-11T00:33:40.419382595Z" level=info msg="CreateContainer within sandbox \"6c594bd91d042bcc408dbfeb986f3eba40c58230877fed0cce0995cebcaad40d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 11 00:33:40.431218 containerd[1555]: time="2025-09-11T00:33:40.430922094Z" level=info msg="Container 99e23f65b708432f551cfedd1a8f4b58ea37f1c5383ad0a86ec306f11ee210cf: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:40.437564 containerd[1555]: time="2025-09-11T00:33:40.437522396Z" level=info msg="CreateContainer within sandbox \"6c594bd91d042bcc408dbfeb986f3eba40c58230877fed0cce0995cebcaad40d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"99e23f65b708432f551cfedd1a8f4b58ea37f1c5383ad0a86ec306f11ee210cf\"" Sep 11 00:33:40.437941 containerd[1555]: time="2025-09-11T00:33:40.437911788Z" level=info msg="StartContainer for \"99e23f65b708432f551cfedd1a8f4b58ea37f1c5383ad0a86ec306f11ee210cf\"" Sep 11 00:33:40.440314 containerd[1555]: time="2025-09-11T00:33:40.439379336Z" level=info msg="connecting to shim 99e23f65b708432f551cfedd1a8f4b58ea37f1c5383ad0a86ec306f11ee210cf" address="unix:///run/containerd/s/ad19122d2ac35bf88ce45d82bf1fe39065f4a39d6baaa882097e986abb9b5617" protocol=ttrpc version=3 Sep 11 00:33:40.466432 systemd[1]: Started cri-containerd-99e23f65b708432f551cfedd1a8f4b58ea37f1c5383ad0a86ec306f11ee210cf.scope - libcontainer container 99e23f65b708432f551cfedd1a8f4b58ea37f1c5383ad0a86ec306f11ee210cf. Sep 11 00:33:40.495734 containerd[1555]: time="2025-09-11T00:33:40.495688151Z" level=info msg="StartContainer for \"99e23f65b708432f551cfedd1a8f4b58ea37f1c5383ad0a86ec306f11ee210cf\" returns successfully" Sep 11 00:33:40.502414 systemd[1]: cri-containerd-99e23f65b708432f551cfedd1a8f4b58ea37f1c5383ad0a86ec306f11ee210cf.scope: Deactivated successfully. 
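Editor's note: the TaskExit events above carry the exit time as raw seconds/nanos fields; converting them shows they line up with the journal timestamps. A tiny standard-library sketch, with the values copied from the mount-cgroup exit event above:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at:{seconds:1757550819 nanos:505047752} from the TaskExit event above.
	exitedAt := time.Unix(1757550819, 505047752).UTC()
	fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2025-09-11T00:33:39.505047752Z
}
```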
Sep 11 00:33:40.503033 containerd[1555]: time="2025-09-11T00:33:40.502995831Z" level=info msg="received exit event container_id:\"99e23f65b708432f551cfedd1a8f4b58ea37f1c5383ad0a86ec306f11ee210cf\" id:\"99e23f65b708432f551cfedd1a8f4b58ea37f1c5383ad0a86ec306f11ee210cf\" pid:4592 exited_at:{seconds:1757550820 nanos:502650413}" Sep 11 00:33:40.503223 containerd[1555]: time="2025-09-11T00:33:40.503174893Z" level=info msg="TaskExit event in podsandbox handler container_id:\"99e23f65b708432f551cfedd1a8f4b58ea37f1c5383ad0a86ec306f11ee210cf\" id:\"99e23f65b708432f551cfedd1a8f4b58ea37f1c5383ad0a86ec306f11ee210cf\" pid:4592 exited_at:{seconds:1757550820 nanos:502650413}" Sep 11 00:33:40.524225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99e23f65b708432f551cfedd1a8f4b58ea37f1c5383ad0a86ec306f11ee210cf-rootfs.mount: Deactivated successfully. Sep 11 00:33:41.213322 kubelet[2719]: E0911 00:33:41.213251 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:41.421365 kubelet[2719]: E0911 00:33:41.421328 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:41.424281 containerd[1555]: time="2025-09-11T00:33:41.424205348Z" level=info msg="CreateContainer within sandbox \"6c594bd91d042bcc408dbfeb986f3eba40c58230877fed0cce0995cebcaad40d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 11 00:33:41.435164 containerd[1555]: time="2025-09-11T00:33:41.435011921Z" level=info msg="Container 01fdc276002419c807935bb106a865c96d44ed19eab0433e1775bef2b9df2616: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:41.443641 containerd[1555]: time="2025-09-11T00:33:41.443590248Z" level=info msg="CreateContainer within sandbox \"6c594bd91d042bcc408dbfeb986f3eba40c58230877fed0cce0995cebcaad40d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"01fdc276002419c807935bb106a865c96d44ed19eab0433e1775bef2b9df2616\"" Sep 11 00:33:41.444137 containerd[1555]: time="2025-09-11T00:33:41.444098916Z" level=info msg="StartContainer for \"01fdc276002419c807935bb106a865c96d44ed19eab0433e1775bef2b9df2616\"" Sep 11 00:33:41.445488 containerd[1555]: time="2025-09-11T00:33:41.445461613Z" level=info msg="connecting to shim 01fdc276002419c807935bb106a865c96d44ed19eab0433e1775bef2b9df2616" address="unix:///run/containerd/s/ad19122d2ac35bf88ce45d82bf1fe39065f4a39d6baaa882097e986abb9b5617" protocol=ttrpc version=3 Sep 11 00:33:41.467434 systemd[1]: Started cri-containerd-01fdc276002419c807935bb106a865c96d44ed19eab0433e1775bef2b9df2616.scope - libcontainer container 01fdc276002419c807935bb106a865c96d44ed19eab0433e1775bef2b9df2616. Sep 11 00:33:41.507160 containerd[1555]: time="2025-09-11T00:33:41.507110760Z" level=info msg="StartContainer for \"01fdc276002419c807935bb106a865c96d44ed19eab0433e1775bef2b9df2616\" returns successfully" Sep 11 00:33:41.507572 systemd[1]: cri-containerd-01fdc276002419c807935bb106a865c96d44ed19eab0433e1775bef2b9df2616.scope: Deactivated successfully. 
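Editor's note: the run-containerd-...-rootfs.mount units that systemd deactivates above are transient mount units whose names are derived from the mount path, with slashes turned into dashes. A simplified sketch of that mapping for the paths seen here; it ignores systemd's full escaping rules (literal dashes, leading dots, and other special characters), so treat it as illustrative only:

```go
package main

import (
	"fmt"
	"strings"
)

// mountUnitName is a simplified version of systemd path escaping: drop the leading
// slash, turn the remaining slashes into dashes, and append ".mount". That is enough
// for the container rootfs paths in this log.
func mountUnitName(path string) string {
	trimmed := strings.TrimPrefix(path, "/")
	return strings.ReplaceAll(trimmed, "/", "-") + ".mount"
}

func main() {
	id := "99e23f65b708432f551cfedd1a8f4b58ea37f1c5383ad0a86ec306f11ee210cf"
	fmt.Println(mountUnitName("/run/containerd/io.containerd.runtime.v2.task/k8s.io/" + id + "/rootfs"))
	// -> run-containerd-io.containerd.runtime.v2.task-k8s.io-<id>-rootfs.mount
}
```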
Sep 11 00:33:41.508534 containerd[1555]: time="2025-09-11T00:33:41.508499396Z" level=info msg="received exit event container_id:\"01fdc276002419c807935bb106a865c96d44ed19eab0433e1775bef2b9df2616\" id:\"01fdc276002419c807935bb106a865c96d44ed19eab0433e1775bef2b9df2616\" pid:4636 exited_at:{seconds:1757550821 nanos:508209223}" Sep 11 00:33:41.508706 containerd[1555]: time="2025-09-11T00:33:41.508674189Z" level=info msg="TaskExit event in podsandbox handler container_id:\"01fdc276002419c807935bb106a865c96d44ed19eab0433e1775bef2b9df2616\" id:\"01fdc276002419c807935bb106a865c96d44ed19eab0433e1775bef2b9df2616\" pid:4636 exited_at:{seconds:1757550821 nanos:508209223}" Sep 11 00:33:41.530397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01fdc276002419c807935bb106a865c96d44ed19eab0433e1775bef2b9df2616-rootfs.mount: Deactivated successfully. Sep 11 00:33:42.268345 kubelet[2719]: E0911 00:33:42.268279 2719 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 11 00:33:42.425694 kubelet[2719]: E0911 00:33:42.425658 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:42.427219 containerd[1555]: time="2025-09-11T00:33:42.427169702Z" level=info msg="CreateContainer within sandbox \"6c594bd91d042bcc408dbfeb986f3eba40c58230877fed0cce0995cebcaad40d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 11 00:33:42.443925 containerd[1555]: time="2025-09-11T00:33:42.443891283Z" level=info msg="Container a50842eb82f75aa2a655bb62a235ef6eb2e522089279eda643438cb0510ec658: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:42.451028 containerd[1555]: time="2025-09-11T00:33:42.450984975Z" level=info msg="CreateContainer within sandbox \"6c594bd91d042bcc408dbfeb986f3eba40c58230877fed0cce0995cebcaad40d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a50842eb82f75aa2a655bb62a235ef6eb2e522089279eda643438cb0510ec658\"" Sep 11 00:33:42.451583 containerd[1555]: time="2025-09-11T00:33:42.451518111Z" level=info msg="StartContainer for \"a50842eb82f75aa2a655bb62a235ef6eb2e522089279eda643438cb0510ec658\"" Sep 11 00:33:42.452516 containerd[1555]: time="2025-09-11T00:33:42.452488289Z" level=info msg="connecting to shim a50842eb82f75aa2a655bb62a235ef6eb2e522089279eda643438cb0510ec658" address="unix:///run/containerd/s/ad19122d2ac35bf88ce45d82bf1fe39065f4a39d6baaa882097e986abb9b5617" protocol=ttrpc version=3 Sep 11 00:33:42.474436 systemd[1]: Started cri-containerd-a50842eb82f75aa2a655bb62a235ef6eb2e522089279eda643438cb0510ec658.scope - libcontainer container a50842eb82f75aa2a655bb62a235ef6eb2e522089279eda643438cb0510ec658. Sep 11 00:33:42.501604 systemd[1]: cri-containerd-a50842eb82f75aa2a655bb62a235ef6eb2e522089279eda643438cb0510ec658.scope: Deactivated successfully. 
Sep 11 00:33:42.502141 containerd[1555]: time="2025-09-11T00:33:42.502067076Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a50842eb82f75aa2a655bb62a235ef6eb2e522089279eda643438cb0510ec658\" id:\"a50842eb82f75aa2a655bb62a235ef6eb2e522089279eda643438cb0510ec658\" pid:4674 exited_at:{seconds:1757550822 nanos:501758960}" Sep 11 00:33:42.503152 containerd[1555]: time="2025-09-11T00:33:42.503113220Z" level=info msg="received exit event container_id:\"a50842eb82f75aa2a655bb62a235ef6eb2e522089279eda643438cb0510ec658\" id:\"a50842eb82f75aa2a655bb62a235ef6eb2e522089279eda643438cb0510ec658\" pid:4674 exited_at:{seconds:1757550822 nanos:501758960}" Sep 11 00:33:42.511345 containerd[1555]: time="2025-09-11T00:33:42.511277190Z" level=info msg="StartContainer for \"a50842eb82f75aa2a655bb62a235ef6eb2e522089279eda643438cb0510ec658\" returns successfully" Sep 11 00:33:42.525105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a50842eb82f75aa2a655bb62a235ef6eb2e522089279eda643438cb0510ec658-rootfs.mount: Deactivated successfully. Sep 11 00:33:43.431232 kubelet[2719]: E0911 00:33:43.431188 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:43.435564 containerd[1555]: time="2025-09-11T00:33:43.435471585Z" level=info msg="CreateContainer within sandbox \"6c594bd91d042bcc408dbfeb986f3eba40c58230877fed0cce0995cebcaad40d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 11 00:33:43.448457 containerd[1555]: time="2025-09-11T00:33:43.448413127Z" level=info msg="Container 2c1033d092165b31cb72755926a1b91f54331812cdbdec01cb7f5dae095f4b0e: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:43.457955 containerd[1555]: time="2025-09-11T00:33:43.457910209Z" level=info msg="CreateContainer within sandbox \"6c594bd91d042bcc408dbfeb986f3eba40c58230877fed0cce0995cebcaad40d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2c1033d092165b31cb72755926a1b91f54331812cdbdec01cb7f5dae095f4b0e\"" Sep 11 00:33:43.458915 containerd[1555]: time="2025-09-11T00:33:43.458889494Z" level=info msg="StartContainer for \"2c1033d092165b31cb72755926a1b91f54331812cdbdec01cb7f5dae095f4b0e\"" Sep 11 00:33:43.459836 containerd[1555]: time="2025-09-11T00:33:43.459811289Z" level=info msg="connecting to shim 2c1033d092165b31cb72755926a1b91f54331812cdbdec01cb7f5dae095f4b0e" address="unix:///run/containerd/s/ad19122d2ac35bf88ce45d82bf1fe39065f4a39d6baaa882097e986abb9b5617" protocol=ttrpc version=3 Sep 11 00:33:43.481437 systemd[1]: Started cri-containerd-2c1033d092165b31cb72755926a1b91f54331812cdbdec01cb7f5dae095f4b0e.scope - libcontainer container 2c1033d092165b31cb72755926a1b91f54331812cdbdec01cb7f5dae095f4b0e. 
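Editor's note: taken together, the CreateContainer/connecting-to-shim cycles above walk through Cilium's init containers in order, all over the same shim socket as the sandbox, and finish by launching the long-running agent container. A short sketch of that observed order (names taken from the log; the loop body is purely illustrative):

```go
package main

import "fmt"

func main() {
	// Container names in the order they are created and started in the log above.
	sequence := []string{
		"mount-cgroup",
		"apply-sysctl-overwrites",
		"mount-bpf-fs",
		"clean-cilium-state",
		"cilium-agent", // only this one keeps running; the others exit immediately
	}
	for i, name := range sequence {
		fmt.Printf("%d. %s\n", i+1, name)
	}
}
```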
Sep 11 00:33:43.521692 containerd[1555]: time="2025-09-11T00:33:43.520915376Z" level=info msg="StartContainer for \"2c1033d092165b31cb72755926a1b91f54331812cdbdec01cb7f5dae095f4b0e\" returns successfully" Sep 11 00:33:43.583740 containerd[1555]: time="2025-09-11T00:33:43.583690484Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c1033d092165b31cb72755926a1b91f54331812cdbdec01cb7f5dae095f4b0e\" id:\"51f2a228adc01e1c9ec31af74fb8fed5880bf9c93319ffa955c0bb1f6bea31a9\" pid:4742 exited_at:{seconds:1757550823 nanos:583283519}" Sep 11 00:33:43.944324 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 11 00:33:44.437477 kubelet[2719]: E0911 00:33:44.437448 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:44.458562 kubelet[2719]: I0911 00:33:44.458505 2719 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-11T00:33:44Z","lastTransitionTime":"2025-09-11T00:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 11 00:33:45.439236 kubelet[2719]: E0911 00:33:45.439191 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:45.546169 containerd[1555]: time="2025-09-11T00:33:45.546125269Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c1033d092165b31cb72755926a1b91f54331812cdbdec01cb7f5dae095f4b0e\" id:\"5155efac1ae0b4774f06e92048a6f7814c6d0fcf5f73e8ce492fffa814c8e473\" pid:4882 exit_status:1 exited_at:{seconds:1757550825 nanos:545652310}" Sep 11 00:33:47.013031 systemd-networkd[1468]: lxc_health: Link UP Sep 11 00:33:47.013358 systemd-networkd[1468]: lxc_health: Gained carrier Sep 11 00:33:47.348887 kubelet[2719]: E0911 00:33:47.348732 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:47.368552 kubelet[2719]: I0911 00:33:47.368188 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-clctp" podStartSLOduration=8.368167839 podStartE2EDuration="8.368167839s" podCreationTimestamp="2025-09-11 00:33:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:33:44.449877158 +0000 UTC m=+82.339798161" watchObservedRunningTime="2025-09-11 00:33:47.368167839 +0000 UTC m=+85.258088842" Sep 11 00:33:47.443307 kubelet[2719]: E0911 00:33:47.443248 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:47.658173 containerd[1555]: time="2025-09-11T00:33:47.658022005Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c1033d092165b31cb72755926a1b91f54331812cdbdec01cb7f5dae095f4b0e\" id:\"a0a3e2f6ef08d1bc9092efeab5465195daf9bceb41df298c947a87fcc2a0f524\" pid:5272 exited_at:{seconds:1757550827 nanos:657347743}" Sep 11 00:33:48.448405 kubelet[2719]: E0911 00:33:48.447501 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:49.077495 systemd-networkd[1468]: lxc_health: Gained IPv6LL Sep 11 00:33:49.750589 containerd[1555]: time="2025-09-11T00:33:49.750530270Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c1033d092165b31cb72755926a1b91f54331812cdbdec01cb7f5dae095f4b0e\" id:\"e99f9c14e773ff84a0bebc698ddc6863bc1a88b8b37ad675457117e573e21ac7\" pid:5307 exited_at:{seconds:1757550829 nanos:748683110}" Sep 11 00:33:51.843233 containerd[1555]: time="2025-09-11T00:33:51.843114855Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c1033d092165b31cb72755926a1b91f54331812cdbdec01cb7f5dae095f4b0e\" id:\"d4f29e695b0e2db6e300ab130cdf79c973eb979fb41d5bc439e8ee4633575291\" pid:5337 exited_at:{seconds:1757550831 nanos:842457827}" Sep 11 00:33:52.213923 kubelet[2719]: E0911 00:33:52.213824 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:53.926377 containerd[1555]: time="2025-09-11T00:33:53.926140231Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c1033d092165b31cb72755926a1b91f54331812cdbdec01cb7f5dae095f4b0e\" id:\"585ec3ad6e668a944c0506e57a4b35762b746e192a97dde0d118c890bc29e2c4\" pid:5362 exited_at:{seconds:1757550833 nanos:925885437}" Sep 11 00:33:53.932502 sshd[4482]: Connection closed by 10.0.0.1 port 41102 Sep 11 00:33:53.932900 sshd-session[4476]: pam_unix(sshd:session): session closed for user core Sep 11 00:33:53.937359 systemd[1]: sshd@27-10.0.0.151:22-10.0.0.1:41102.service: Deactivated successfully. Sep 11 00:33:53.939538 systemd[1]: session-28.scope: Deactivated successfully. Sep 11 00:33:53.940488 systemd-logind[1539]: Session 28 logged out. Waiting for processes to exit. Sep 11 00:33:53.941920 systemd-logind[1539]: Removed session 28. Sep 11 00:33:54.406011 update_engine[1540]: I20250911 00:33:54.405872 1540 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 11 00:33:54.406011 update_engine[1540]: I20250911 00:33:54.405922 1540 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 11 00:33:54.406529 update_engine[1540]: I20250911 00:33:54.406162 1540 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 11 00:33:54.407635 update_engine[1540]: I20250911 00:33:54.407600 1540 omaha_request_params.cc:62] Current group set to beta Sep 11 00:33:54.407958 update_engine[1540]: I20250911 00:33:54.407927 1540 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 11 00:33:54.407958 update_engine[1540]: I20250911 00:33:54.407943 1540 update_attempter.cc:643] Scheduling an action processor start. 
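Editor's note: the pod_startup_latency_tracker line above reports podStartSLOduration=8.368167839, which in this log is simply watchObservedRunningTime minus podCreationTimestamp. A small standard-library sketch reproducing that arithmetic from the timestamps as printed (parse errors are ignored for brevity):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker line above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-09-11 00:33:39 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-09-11 00:33:47.368167839 +0000 UTC")

	fmt.Println(observed.Sub(created)) // 8.368167839s, matching podStartSLOduration
}
```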
Sep 11 00:33:54.408010 update_engine[1540]: I20250911 00:33:54.407958 1540 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 11 00:33:54.408010 update_engine[1540]: I20250911 00:33:54.407997 1540 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 11 00:33:54.408085 update_engine[1540]: I20250911 00:33:54.408064 1540 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 11 00:33:54.408085 update_engine[1540]: I20250911 00:33:54.408075 1540 omaha_request_action.cc:272] Request: Sep 11 00:33:54.408085 update_engine[1540]: [Omaha request XML body not captured in this extract] Sep 11 00:33:54.408085 update_engine[1540]: I20250911 00:33:54.408083 1540 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 11 00:33:54.411064 locksmithd[1608]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 11 00:33:54.411803 update_engine[1540]: I20250911 00:33:54.411757 1540 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 11 00:33:54.412120 update_engine[1540]: I20250911 00:33:54.412084 1540 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 11 00:33:54.424826 update_engine[1540]: E20250911 00:33:54.424784 1540 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 11 00:33:54.424875 update_engine[1540]: I20250911 00:33:54.424840 1540 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
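Editor's note: the update check above fails because the Omaha server for this image is configured as the literal string "disabled", so the hostname never resolves and update_engine backs off and retries. A minimal standard-library reproduction of that failure mode (the URL path is an illustrative placeholder, not the real Omaha endpoint):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// Posting to a host named "disabled" fails at DNS resolution, mirroring
	// "Unable to get http response code: Could not resolve host: disabled".
	_, err := client.Post("https://disabled/", "text/xml", nil)
	fmt.Println(err)
}
```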