Sep 12 17:46:53.846452 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 15:34:39 -00 2025
Sep 12 17:46:53.846496 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858
Sep 12 17:46:53.846508 kernel: BIOS-provided physical RAM map:
Sep 12 17:46:53.846515 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 12 17:46:53.846522 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 12 17:46:53.846528 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 12 17:46:53.846536 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 12 17:46:53.846543 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Sep 12 17:46:53.846550 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 12 17:46:53.846556 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 12 17:46:53.846567 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 12 17:46:53.846576 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 12 17:46:53.846582 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 12 17:46:53.846589 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 12 17:46:53.846597 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 12 17:46:53.846604 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 12 17:46:53.846613 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 12 17:46:53.846715 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 17:46:53.846723 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 17:46:53.846730 kernel: NX (Execute Disable) protection: active
Sep 12 17:46:53.846737 kernel: APIC: Static calls initialized
Sep 12 17:46:53.846744 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable
Sep 12 17:46:53.846752 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable
Sep 12 17:46:53.846759 kernel: extended physical RAM map:
Sep 12 17:46:53.846766 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 12 17:46:53.846773 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 12 17:46:53.846780 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 12 17:46:53.846790 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 12 17:46:53.846797 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable
Sep 12 17:46:53.846804 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable
Sep 12 17:46:53.846811 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable
Sep 12 17:46:53.846891 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable
Sep 12 17:46:53.846899 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable
Sep 12 17:46:53.846907 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 12 17:46:53.846914 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 12 17:46:53.846921 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 12 17:46:53.846928 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 12 17:46:53.846935 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 12 17:46:53.846944 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 12 17:46:53.846951 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 12 17:46:53.846962 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 12 17:46:53.846969 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 12 17:46:53.846976 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 17:46:53.846984 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 17:46:53.846993 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:46:53.847000 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Sep 12 17:46:53.847007 kernel: random: crng init done
Sep 12 17:46:53.847015 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Sep 12 17:46:53.847022 kernel: secureboot: Secure boot enabled
Sep 12 17:46:53.847029 kernel: SMBIOS 2.8 present.
Sep 12 17:46:53.847037 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 12 17:46:53.847054 kernel: DMI: Memory slots populated: 1/1
Sep 12 17:46:53.847061 kernel: Hypervisor detected: KVM
Sep 12 17:46:53.847069 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 17:46:53.847076 kernel: kvm-clock: using sched offset of 4830094113 cycles
Sep 12 17:46:53.847086 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 17:46:53.847094 kernel: tsc: Detected 2794.750 MHz processor
Sep 12 17:46:53.847102 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:46:53.847109 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:46:53.847116 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Sep 12 17:46:53.847124 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 12 17:46:53.847132 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:46:53.847139 kernel: Using GB pages for direct mapping
Sep 12 17:46:53.847147 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:46:53.847156 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Sep 12 17:46:53.847164 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 12 17:46:53.847172 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:46:53.847179 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:46:53.847186 kernel: ACPI: FACS 0x000000009BBDD000 000040
Sep 12 17:46:53.847194 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:46:53.847201 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:46:53.847209 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:46:53.847216 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:46:53.847226 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 12 17:46:53.847233 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Sep 12 17:46:53.847241 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Sep 12 17:46:53.847248 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Sep 12 17:46:53.847255 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Sep 12 17:46:53.847263 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Sep 12 17:46:53.847270 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Sep 12 17:46:53.847278 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Sep 12 17:46:53.847285 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Sep 12 17:46:53.847294 kernel: No NUMA configuration found
Sep 12 17:46:53.847302 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Sep 12 17:46:53.847309 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Sep 12 17:46:53.847317 kernel: Zone ranges:
Sep 12 17:46:53.847324 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:46:53.847332 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Sep 12 17:46:53.847348 kernel: Normal empty
Sep 12 17:46:53.847356 kernel: Device empty
Sep 12 17:46:53.847371 kernel: Movable zone start for each node
Sep 12 17:46:53.847381 kernel: Early memory node ranges
Sep 12 17:46:53.847388 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Sep 12 17:46:53.847396 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Sep 12 17:46:53.847403 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Sep 12 17:46:53.847410 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Sep 12 17:46:53.847422 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Sep 12 17:46:53.847430 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Sep 12 17:46:53.847437 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:46:53.847444 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Sep 12 17:46:53.847454 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 12 17:46:53.847461 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 12 17:46:53.847469 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 12 17:46:53.847476 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Sep 12 17:46:53.847484 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 17:46:53.847491 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 17:46:53.847499 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 17:46:53.847506 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 17:46:53.847514 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 17:46:53.847521 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:46:53.847531 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 17:46:53.847538 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 17:46:53.847545 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:46:53.847553 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 17:46:53.847560 kernel: TSC deadline timer available
Sep 12 17:46:53.847568 kernel: CPU topo: Max. logical packages: 1
Sep 12 17:46:53.847575 kernel: CPU topo: Max. logical dies: 1
Sep 12 17:46:53.847583 kernel: CPU topo: Max. dies per package: 1
Sep 12 17:46:53.847598 kernel: CPU topo: Max. threads per core: 1
Sep 12 17:46:53.847606 kernel: CPU topo: Num. cores per package: 4
Sep 12 17:46:53.847614 kernel: CPU topo: Num. threads per package: 4
Sep 12 17:46:53.847635 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 12 17:46:53.847645 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 17:46:53.847653 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 12 17:46:53.847661 kernel: kvm-guest: setup PV sched yield
Sep 12 17:46:53.847669 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 12 17:46:53.847676 kernel: Booting paravirtualized kernel on KVM
Sep 12 17:46:53.847686 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:46:53.847694 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 12 17:46:53.847703 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 12 17:46:53.847710 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 12 17:46:53.847718 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 12 17:46:53.847726 kernel: kvm-guest: PV spinlocks enabled
Sep 12 17:46:53.847733 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 17:46:53.847742 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858
Sep 12 17:46:53.847753 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:46:53.847761 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:46:53.847769 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:46:53.847776 kernel: Fallback order for Node 0: 0
Sep 12 17:46:53.847784 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Sep 12 17:46:53.847792 kernel: Policy zone: DMA32
Sep 12 17:46:53.847799 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:46:53.847807 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 17:46:53.847815 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 12 17:46:53.847825 kernel: ftrace: allocated 157 pages with 5 groups
Sep 12 17:46:53.847833 kernel: Dynamic Preempt: voluntary
Sep 12 17:46:53.847840 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:46:53.847849 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:46:53.847858 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 17:46:53.847867 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:46:53.847875 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:46:53.847883 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:46:53.847893 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:46:53.847904 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 17:46:53.847914 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:46:53.847922 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:46:53.847930 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:46:53.847938 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 12 17:46:53.847946 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:46:53.847954 kernel: Console: colour dummy device 80x25
Sep 12 17:46:53.847962 kernel: printk: legacy console [ttyS0] enabled
Sep 12 17:46:53.847970 kernel: ACPI: Core revision 20240827
Sep 12 17:46:53.847980 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 17:46:53.847988 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:46:53.847996 kernel: x2apic enabled
Sep 12 17:46:53.848003 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 17:46:53.848011 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 12 17:46:53.848019 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 12 17:46:53.848027 kernel: kvm-guest: setup PV IPIs
Sep 12 17:46:53.848035 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 17:46:53.848052 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 12 17:46:53.848062 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 12 17:46:53.848070 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 17:46:53.848078 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 12 17:46:53.848086 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 12 17:46:53.848094 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:46:53.848101 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 17:46:53.848109 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 17:46:53.848117 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 12 17:46:53.848125 kernel: active return thunk: retbleed_return_thunk
Sep 12 17:46:53.848135 kernel: RETBleed: Mitigation: untrained return thunk
Sep 12 17:46:53.848143 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 17:46:53.848151 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 17:46:53.848158 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 12 17:46:53.848167 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 12 17:46:53.848175 kernel: active return thunk: srso_return_thunk
Sep 12 17:46:53.848183 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 12 17:46:53.848190 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:46:53.848201 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:46:53.848208 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:46:53.848216 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:46:53.848224 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 12 17:46:53.848232 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:46:53.848240 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:46:53.848247 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 17:46:53.848255 kernel: landlock: Up and running.
Sep 12 17:46:53.848263 kernel: SELinux: Initializing.
Sep 12 17:46:53.848273 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:46:53.848281 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:46:53.848289 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 12 17:46:53.848297 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 12 17:46:53.848304 kernel: ... version: 0
Sep 12 17:46:53.848312 kernel: ... bit width: 48
Sep 12 17:46:53.848320 kernel: ... generic registers: 6
Sep 12 17:46:53.848327 kernel: ... value mask: 0000ffffffffffff
Sep 12 17:46:53.848335 kernel: ... max period: 00007fffffffffff
Sep 12 17:46:53.848345 kernel: ... fixed-purpose events: 0
Sep 12 17:46:53.848352 kernel: ... event mask: 000000000000003f
Sep 12 17:46:53.848360 kernel: signal: max sigframe size: 1776
Sep 12 17:46:53.848368 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:46:53.848376 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:46:53.848384 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 17:46:53.848391 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:46:53.848399 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:46:53.848407 kernel: .... node #0, CPUs: #1 #2 #3
Sep 12 17:46:53.848416 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 17:46:53.848424 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 12 17:46:53.848432 kernel: Memory: 2409220K/2552216K available (14336K kernel code, 2432K rwdata, 9960K rodata, 54040K init, 2924K bss, 137064K reserved, 0K cma-reserved)
Sep 12 17:46:53.848440 kernel: devtmpfs: initialized
Sep 12 17:46:53.848448 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:46:53.848456 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Sep 12 17:46:53.848464 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Sep 12 17:46:53.848472 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:46:53.848480 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 17:46:53.848489 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:46:53.848497 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:46:53.848505 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:46:53.848513 kernel: audit: type=2000 audit(1757699211.415:1): state=initialized audit_enabled=0 res=1
Sep 12 17:46:53.848521 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:46:53.848528 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:46:53.848536 kernel: cpuidle: using governor menu
Sep 12 17:46:53.848544 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:46:53.848551 kernel: dca service started, version 1.12.1
Sep 12 17:46:53.848561 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 12 17:46:53.848569 kernel: PCI: Using configuration type 1 for base access
Sep 12 17:46:53.848577 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:46:53.848585 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:46:53.848592 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:46:53.848600 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:46:53.848608 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:46:53.848616 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:46:53.848635 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:46:53.848645 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:46:53.848653 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:46:53.848660 kernel: ACPI: Interpreter enabled
Sep 12 17:46:53.848668 kernel: ACPI: PM: (supports S0 S5)
Sep 12 17:46:53.848676 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:46:53.848683 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:46:53.848691 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 17:46:53.848699 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 12 17:46:53.848707 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:46:53.848894 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:46:53.849016 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 12 17:46:53.849143 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 12 17:46:53.849154 kernel: PCI host bridge to bus 0000:00
Sep 12 17:46:53.849273 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 17:46:53.849384 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 17:46:53.849495 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 17:46:53.849601 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 12 17:46:53.849729 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 12 17:46:53.849838 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 12 17:46:53.849943 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:46:53.850088 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 12 17:46:53.850214 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 12 17:46:53.850334 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 12 17:46:53.850449 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 12 17:46:53.850563 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 12 17:46:53.850695 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 17:46:53.850822 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 12 17:46:53.850939 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 12 17:46:53.851073 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 12 17:46:53.851192 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 12 17:46:53.851320 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 12 17:46:53.851437 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 12 17:46:53.851553 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 12 17:46:53.851695 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 12 17:46:53.851822 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 12 17:46:53.851954 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 12 17:46:53.852082 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 12 17:46:53.852198 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 12 17:46:53.852312 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 12 17:46:53.852436 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 12 17:46:53.852551 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 12 17:46:53.852698 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 12 17:46:53.852824 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 12 17:46:53.852939 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 12 17:46:53.853073 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 12 17:46:53.853190 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 12 17:46:53.853200 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 17:46:53.853208 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 17:46:53.853216 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 17:46:53.853227 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 17:46:53.853234 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 12 17:46:53.853242 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 12 17:46:53.853250 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 12 17:46:53.853257 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 12 17:46:53.853265 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 12 17:46:53.853273 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 12 17:46:53.853280 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 12 17:46:53.853288 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 12 17:46:53.853298 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 12 17:46:53.853305 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 12 17:46:53.853313 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 12 17:46:53.853321 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 12 17:46:53.853328 kernel: iommu: Default domain type: Translated
Sep 12 17:46:53.853336 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:46:53.853344 kernel: efivars: Registered efivars operations
Sep 12 17:46:53.853351 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:46:53.853359 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 17:46:53.853369 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Sep 12 17:46:53.853376 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff]
Sep 12 17:46:53.853384 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff]
Sep 12 17:46:53.853391 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Sep 12 17:46:53.853399 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Sep 12 17:46:53.853519 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 12 17:46:53.853649 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 12 17:46:53.853765 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 17:46:53.853775 kernel: vgaarb: loaded
Sep 12 17:46:53.853787 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 17:46:53.853794 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 17:46:53.853802 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 17:46:53.853810 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:46:53.853817 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:46:53.853825 kernel: pnp: PnP ACPI init
Sep 12 17:46:53.853953 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 12 17:46:53.853964 kernel: pnp: PnP ACPI: found 6 devices
Sep 12 17:46:53.853975 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:46:53.853983 kernel: NET: Registered PF_INET protocol family
Sep 12 17:46:53.853991 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:46:53.853999 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 17:46:53.854007 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:46:53.854015 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:46:53.854023 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 17:46:53.854031 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 17:46:53.854048 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:46:53.854058 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:46:53.854066 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:46:53.854074 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:46:53.854194 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 12 17:46:53.854311 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 12 17:46:53.854417 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 17:46:53.854524 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 17:46:53.854661 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 17:46:53.854784 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 12 17:46:53.854903 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 12 17:46:53.855048 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 12 17:46:53.855067 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:46:53.855078 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 12 17:46:53.855088 kernel: Initialise system trusted keyrings
Sep 12 17:46:53.855099 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 17:46:53.855109 kernel: Key type asymmetric registered
Sep 12 17:46:53.855121 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:46:53.855141 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 17:46:53.855152 kernel: io scheduler mq-deadline registered
Sep 12 17:46:53.855160 kernel: io scheduler kyber registered
Sep 12 17:46:53.855168 kernel: io scheduler bfq registered
Sep 12 17:46:53.855176 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 17:46:53.855185 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 12 17:46:53.855193 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 12 17:46:53.855204 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 12 17:46:53.855217 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:46:53.855229 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:46:53.855239 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 17:46:53.855247 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 17:46:53.855255 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 17:46:53.855385 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 12 17:46:53.855398 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 17:46:53.855505 kernel: rtc_cmos 00:04: registered as rtc0
Sep 12 17:46:53.855655 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T17:46:53 UTC (1757699213)
Sep 12 17:46:53.855771 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 12 17:46:53.855781 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 12 17:46:53.855789 kernel: efifb: probing for efifb
Sep 12 17:46:53.855798 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 12 17:46:53.855806 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 12 17:46:53.855814 kernel: efifb: scrolling: redraw
Sep 12 17:46:53.855822 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 12 17:46:53.855831 kernel: Console: switching to colour frame buffer device 160x50
Sep 12 17:46:53.855845 kernel: fb0: EFI VGA frame buffer device
Sep 12 17:46:53.855857 kernel: pstore: Using crash dump compression: deflate
Sep 12 17:46:53.855868 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 12 17:46:53.855878 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:46:53.855888 kernel: Segment Routing with IPv6
Sep 12 17:46:53.855898 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:46:53.855911 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:46:53.855921 kernel: Key type dns_resolver registered
Sep 12 17:46:53.855931 kernel: IPI shorthand broadcast: enabled
Sep 12 17:46:53.855941 kernel: sched_clock: Marking stable (2826002259, 141286452)->(2987324002, -20035291)
Sep 12 17:46:53.855951 kernel: registered taskstats version 1
Sep 12 17:46:53.855961 kernel: Loading compiled-in X.509 certificates
Sep 12 17:46:53.855972 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: f1ae8d6e9bfae84d90f4136cf098b0465b2a5bd7'
Sep 12 17:46:53.855982 kernel: Demotion targets for Node 0: null
Sep 12 17:46:53.855992 kernel: Key type .fscrypt registered
Sep 12 17:46:53.856005 kernel: Key type fscrypt-provisioning registered
Sep 12 17:46:53.856015 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:46:53.856025 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:46:53.856036 kernel: ima: No architecture policies found
Sep 12 17:46:53.856056 kernel: clk: Disabling unused clocks
Sep 12 17:46:53.856067 kernel: Warning: unable to open an initial console.
Sep 12 17:46:53.856075 kernel: Freeing unused kernel image (initmem) memory: 54040K Sep 12 17:46:53.856083 kernel: Write protecting the kernel read-only data: 24576k Sep 12 17:46:53.856092 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Sep 12 17:46:53.856102 kernel: Run /init as init process Sep 12 17:46:53.856110 kernel: with arguments: Sep 12 17:46:53.856118 kernel: /init Sep 12 17:46:53.856126 kernel: with environment: Sep 12 17:46:53.856135 kernel: HOME=/ Sep 12 17:46:53.856143 kernel: TERM=linux Sep 12 17:46:53.856151 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 17:46:53.856160 systemd[1]: Successfully made /usr/ read-only. Sep 12 17:46:53.856174 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 17:46:53.856183 systemd[1]: Detected virtualization kvm. Sep 12 17:46:53.856192 systemd[1]: Detected architecture x86-64. Sep 12 17:46:53.856200 systemd[1]: Running in initrd. Sep 12 17:46:53.856208 systemd[1]: No hostname configured, using default hostname. Sep 12 17:46:53.856217 systemd[1]: Hostname set to . Sep 12 17:46:53.856226 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:46:53.856236 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:46:53.856245 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:46:53.856254 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:46:53.856263 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Sep 12 17:46:53.856272 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:46:53.856281 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:46:53.856291 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:46:53.856303 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:46:53.856312 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:46:53.856321 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:46:53.856330 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:46:53.856339 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:46:53.856348 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:46:53.856357 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:46:53.856366 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:46:53.856376 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:46:53.856387 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:46:53.856396 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:46:53.856405 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 12 17:46:53.856414 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:46:53.856423 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:46:53.856432 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:46:53.856441 systemd[1]: Reached target sockets.target - Socket Units. 
Sep 12 17:46:53.856450 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:46:53.856461 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:46:53.856469 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 17:46:53.856479 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 12 17:46:53.856488 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:46:53.856497 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:46:53.856506 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:46:53.856515 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:46:53.856524 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:46:53.856536 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:46:53.856545 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:46:53.856554 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:46:53.856585 systemd-journald[219]: Collecting audit messages is disabled. Sep 12 17:46:53.856610 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:46:53.856633 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:46:53.856642 systemd-journald[219]: Journal started Sep 12 17:46:53.856665 systemd-journald[219]: Runtime Journal (/run/log/journal/3f39a85f7ea1463dbc631a5052e33466) is 6M, max 48.2M, 42.2M free. Sep 12 17:46:53.845375 systemd-modules-load[220]: Inserted module 'overlay' Sep 12 17:46:53.901171 systemd[1]: Started systemd-journald.service - Journal Service. 
Sep 12 17:46:53.904648 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:46:53.906404 systemd-modules-load[220]: Inserted module 'br_netfilter' Sep 12 17:46:53.907364 kernel: Bridge firewalling registered Sep 12 17:46:53.908786 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:46:53.911478 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:46:53.916977 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:46:53.918940 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:46:53.920962 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:46:53.924831 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:46:53.931901 systemd-tmpfiles[245]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 12 17:46:53.934800 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:46:53.938050 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:46:53.939659 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:46:53.941615 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:46:53.945141 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Sep 12 17:46:53.967012 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858 Sep 12 17:46:53.988787 systemd-resolved[261]: Positive Trust Anchors: Sep 12 17:46:53.988804 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:46:53.988834 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:46:53.991307 systemd-resolved[261]: Defaulting to hostname 'linux'. Sep 12 17:46:53.997275 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:46:53.998448 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:46:54.083670 kernel: SCSI subsystem initialized Sep 12 17:46:54.092653 kernel: Loading iSCSI transport class v2.0-870. Sep 12 17:46:54.105661 kernel: iscsi: registered transport (tcp) Sep 12 17:46:54.141659 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:46:54.141732 kernel: QLogic iSCSI HBA Driver Sep 12 17:46:54.164452 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 12 17:46:54.180709 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:46:54.181655 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:46:54.239165 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:46:54.240806 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 17:46:54.296654 kernel: raid6: avx2x4 gen() 25070 MB/s Sep 12 17:46:54.313651 kernel: raid6: avx2x2 gen() 30846 MB/s Sep 12 17:46:54.330700 kernel: raid6: avx2x1 gen() 25437 MB/s Sep 12 17:46:54.330745 kernel: raid6: using algorithm avx2x2 gen() 30846 MB/s Sep 12 17:46:54.348713 kernel: raid6: .... xor() 19576 MB/s, rmw enabled Sep 12 17:46:54.348739 kernel: raid6: using avx2x2 recovery algorithm Sep 12 17:46:54.369658 kernel: xor: automatically using best checksumming function avx Sep 12 17:46:54.536664 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:46:54.545377 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:46:54.548166 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:46:54.584237 systemd-udevd[472]: Using default interface naming scheme 'v255'. Sep 12 17:46:54.590952 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:46:54.594249 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:46:54.624050 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation Sep 12 17:46:54.654403 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:46:54.655810 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:46:54.907197 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:46:54.911133 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 12 17:46:54.948653 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 12 17:46:54.948846 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 12 17:46:54.952660 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 17:46:54.971645 kernel: AES CTR mode by8 optimization enabled Sep 12 17:46:54.993823 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 17:46:54.993872 kernel: GPT:9289727 != 19775487 Sep 12 17:46:54.993883 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 17:46:54.993912 kernel: GPT:9289727 != 19775487 Sep 12 17:46:54.995542 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 17:46:54.995600 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:46:54.999651 kernel: libata version 3.00 loaded. Sep 12 17:46:55.005659 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 12 17:46:55.009517 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:46:55.017531 kernel: ahci 0000:00:1f.2: version 3.0 Sep 12 17:46:55.018047 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 12 17:46:55.018060 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 12 17:46:55.018241 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 12 17:46:55.019107 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 12 17:46:55.009661 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:46:55.017434 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:46:55.024136 kernel: scsi host0: ahci Sep 12 17:46:55.024378 kernel: scsi host1: ahci Sep 12 17:46:55.020989 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 12 17:46:55.027586 kernel: scsi host2: ahci Sep 12 17:46:55.027810 kernel: scsi host3: ahci Sep 12 17:46:55.032664 kernel: scsi host4: ahci Sep 12 17:46:55.030444 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 17:46:55.035939 kernel: scsi host5: ahci Sep 12 17:46:55.036144 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 12 17:46:55.036161 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 12 17:46:55.039646 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 12 17:46:55.039673 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 12 17:46:55.039695 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 12 17:46:55.041019 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 12 17:46:55.065461 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 12 17:46:55.069994 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:46:55.081065 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 12 17:46:55.091757 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 12 17:46:55.091875 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 17:46:55.114059 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 17:46:55.117036 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:46:55.306765 disk-uuid[634]: Primary Header is updated. Sep 12 17:46:55.306765 disk-uuid[634]: Secondary Entries is updated. 
Sep 12 17:46:55.306765 disk-uuid[634]: Secondary Header is updated. Sep 12 17:46:55.310639 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:46:55.315656 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:46:55.352188 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 12 17:46:55.352233 kernel: ata3.00: LPM support broken, forcing max_power Sep 12 17:46:55.352245 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 12 17:46:55.352255 kernel: ata3.00: applying bridge limits Sep 12 17:46:55.352266 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 12 17:46:55.354640 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 12 17:46:55.354666 kernel: ata3.00: LPM support broken, forcing max_power Sep 12 17:46:55.354677 kernel: ata3.00: configured for UDMA/100 Sep 12 17:46:55.354688 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 12 17:46:55.355673 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 12 17:46:55.357252 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 12 17:46:55.357359 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 12 17:46:55.415062 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 12 17:46:55.415386 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 17:46:55.452695 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 12 17:46:55.788329 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:46:55.790002 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:46:55.791709 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:46:55.792843 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:46:55.796419 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:46:55.825449 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Sep 12 17:46:56.398467 disk-uuid[635]: The operation has completed successfully. Sep 12 17:46:56.399713 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:46:56.432830 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:46:56.432951 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:46:56.479683 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:46:56.552112 sh[663]: Success Sep 12 17:46:56.641926 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 17:46:56.641965 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:46:56.642992 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 12 17:46:56.651650 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 12 17:46:56.680751 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:46:56.682695 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:46:56.707103 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 17:46:56.712998 kernel: BTRFS: device fsid 74707491-1b86-4926-8bdb-c533ce2a0c32 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (675) Sep 12 17:46:56.713027 kernel: BTRFS info (device dm-0): first mount of filesystem 74707491-1b86-4926-8bdb-c533ce2a0c32 Sep 12 17:46:56.713044 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:46:56.718200 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:46:56.718224 kernel: BTRFS info (device dm-0): enabling free space tree Sep 12 17:46:56.719364 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:46:56.719907 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Sep 12 17:46:56.721057 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 17:46:56.724821 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:46:56.727265 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 17:46:56.772671 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (710) Sep 12 17:46:56.774651 kernel: BTRFS info (device vda6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:46:56.774679 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:46:56.777672 kernel: BTRFS info (device vda6): turning on async discard Sep 12 17:46:56.777697 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 17:46:56.782645 kernel: BTRFS info (device vda6): last unmount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:46:56.783127 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 17:46:56.786909 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 17:46:56.848857 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:46:56.854232 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 12 17:46:57.130210 ignition[773]: Ignition 2.21.0 Sep 12 17:46:57.130222 ignition[773]: Stage: fetch-offline Sep 12 17:46:57.130253 ignition[773]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:46:57.130262 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:46:57.130345 ignition[773]: parsed url from cmdline: "" Sep 12 17:46:57.130348 ignition[773]: no config URL provided Sep 12 17:46:57.130354 ignition[773]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:46:57.130362 ignition[773]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:46:57.130382 ignition[773]: op(1): [started] loading QEMU firmware config module Sep 12 17:46:57.130387 ignition[773]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 12 17:46:57.143884 ignition[773]: op(1): [finished] loading QEMU firmware config module Sep 12 17:46:57.215756 systemd-networkd[845]: lo: Link UP Sep 12 17:46:57.215764 systemd-networkd[845]: lo: Gained carrier Sep 12 17:46:57.217373 systemd-networkd[845]: Enumeration completed Sep 12 17:46:57.217761 systemd-networkd[845]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:46:57.217765 systemd-networkd[845]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:46:57.312104 systemd-networkd[845]: eth0: Link UP Sep 12 17:46:57.336024 systemd-networkd[845]: eth0: Gained carrier Sep 12 17:46:57.336033 systemd-networkd[845]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:46:57.336307 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:46:57.340801 systemd[1]: Reached target network.target - Network. 
Sep 12 17:46:57.363662 systemd-networkd[845]: eth0: DHCPv4 address 10.0.0.100/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 17:46:57.383423 ignition[773]: parsing config with SHA512: 509ee7cb0602033b50a43bd316dd36ba2fbcf6a8c1e41ee4f487180cc67120a5389ac9e887d3eb51109eb8b89600b6fc4c4c0980f3819782d464551e53810b40 Sep 12 17:46:57.387336 unknown[773]: fetched base config from "system" Sep 12 17:46:57.387345 unknown[773]: fetched user config from "qemu" Sep 12 17:46:57.387694 ignition[773]: fetch-offline: fetch-offline passed Sep 12 17:46:57.387744 ignition[773]: Ignition finished successfully Sep 12 17:46:57.390663 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:46:57.392126 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 17:46:57.392908 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 17:46:57.437483 ignition[862]: Ignition 2.21.0 Sep 12 17:46:57.437496 ignition[862]: Stage: kargs Sep 12 17:46:57.437693 ignition[862]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:46:57.437707 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:46:57.439943 ignition[862]: kargs: kargs passed Sep 12 17:46:57.440061 ignition[862]: Ignition finished successfully Sep 12 17:46:57.445065 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:46:57.447356 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 12 17:46:57.489002 ignition[870]: Ignition 2.21.0 Sep 12 17:46:57.489013 ignition[870]: Stage: disks Sep 12 17:46:57.489193 ignition[870]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:46:57.489203 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:46:57.491660 ignition[870]: disks: disks passed Sep 12 17:46:57.491721 ignition[870]: Ignition finished successfully Sep 12 17:46:57.494604 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:46:57.496925 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:46:57.497010 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:46:57.499032 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:46:57.499350 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:46:57.499847 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:46:57.506316 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 17:46:57.572251 systemd-fsck[880]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 12 17:46:57.999474 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:46:58.003110 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:46:58.191664 kernel: EXT4-fs (vda9): mounted filesystem 26739aba-b0be-4ce3-bfbd-ca4dbcbe2426 r/w with ordered data mode. Quota mode: none. Sep 12 17:46:58.192522 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:46:58.193880 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:46:58.196388 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:46:58.198123 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:46:58.199135 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 12 17:46:58.199176 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:46:58.199200 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:46:58.214163 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:46:58.216583 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 17:46:58.221429 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (889) Sep 12 17:46:58.221449 kernel: BTRFS info (device vda6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:46:58.221460 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:46:58.223725 kernel: BTRFS info (device vda6): turning on async discard Sep 12 17:46:58.223746 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 17:46:58.225153 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:46:58.263724 initrd-setup-root[913]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:46:58.268803 initrd-setup-root[920]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:46:58.273469 initrd-setup-root[927]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:46:58.277971 initrd-setup-root[934]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:46:58.435768 systemd-networkd[845]: eth0: Gained IPv6LL Sep 12 17:46:58.503830 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:46:58.532172 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:46:58.533980 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:46:58.560660 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Sep 12 17:46:58.562055 kernel: BTRFS info (device vda6): last unmount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:46:58.573955 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:46:58.980991 ignition[1003]: INFO : Ignition 2.21.0 Sep 12 17:46:58.980991 ignition[1003]: INFO : Stage: mount Sep 12 17:46:58.983480 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:46:58.983480 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:46:58.985678 ignition[1003]: INFO : mount: mount passed Sep 12 17:46:58.985678 ignition[1003]: INFO : Ignition finished successfully Sep 12 17:46:58.989615 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:46:58.992574 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:46:59.194125 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:46:59.220650 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1015) Sep 12 17:46:59.222752 kernel: BTRFS info (device vda6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:46:59.222770 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:46:59.225643 kernel: BTRFS info (device vda6): turning on async discard Sep 12 17:46:59.225662 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 17:46:59.227069 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 17:46:59.274288 ignition[1032]: INFO : Ignition 2.21.0
Sep 12 17:46:59.274288 ignition[1032]: INFO : Stage: files
Sep 12 17:46:59.276404 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:46:59.276404 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:46:59.276404 ignition[1032]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:46:59.279858 ignition[1032]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:46:59.279858 ignition[1032]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:46:59.282923 ignition[1032]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:46:59.282923 ignition[1032]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:46:59.282923 ignition[1032]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:46:59.282053 unknown[1032]: wrote ssh authorized keys file for user: core
Sep 12 17:46:59.288240 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 12 17:46:59.290136 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 12 17:46:59.448478 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:46:59.695702 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 12 17:46:59.727271 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 17:46:59.727271 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 12 17:46:59.966828 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 12 17:47:00.281315 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 17:47:00.281315 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:47:00.285121 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:47:00.285121 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:47:00.285121 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:47:00.285121 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:47:00.285121 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:47:00.285121 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:47:00.285121 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:47:00.414640 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:47:00.416655 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:47:00.416655 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 17:47:00.791116 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 17:47:00.791116 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 17:47:00.796210 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 12 17:47:01.209126 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 12 17:47:01.826943 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 17:47:01.826943 ignition[1032]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 12 17:47:01.830617 ignition[1032]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:47:01.836943 ignition[1032]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:47:01.836943 ignition[1032]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 12 17:47:01.836943 ignition[1032]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 12 17:47:01.841182 ignition[1032]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 17:47:01.841182 ignition[1032]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 17:47:01.841182 ignition[1032]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 12 17:47:01.841182 ignition[1032]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 12 17:47:01.866885 ignition[1032]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 17:47:01.871434 ignition[1032]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 17:47:01.873142 ignition[1032]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 12 17:47:01.873142 ignition[1032]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:47:01.875825 ignition[1032]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:47:01.875825 ignition[1032]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:47:01.875825 ignition[1032]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:47:01.875825 ignition[1032]: INFO : files: files passed
Sep 12 17:47:01.875825 ignition[1032]: INFO : Ignition finished successfully
Sep 12 17:47:01.882567 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:47:01.885330 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:47:01.887508 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:47:01.908731 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:47:01.908979 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:47:01.912858 initrd-setup-root-after-ignition[1061]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 12 17:47:01.916377 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:47:01.918717 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:47:01.920771 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:47:01.922026 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:47:01.924942 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:47:01.927729 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:47:02.010172 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:47:02.010316 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:47:02.011542 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:47:02.013707 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:47:02.014067 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:47:02.014942 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:47:02.033825 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:47:02.035447 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:47:02.074948 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:47:02.075144 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:47:02.078348 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:47:02.079456 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:47:02.079592 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:47:02.084043 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:47:02.085232 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:47:02.086211 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:47:02.086576 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:47:02.087144 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:47:02.087462 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 17:47:02.087952 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:47:02.088266 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:47:02.088595 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:47:02.101442 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:47:02.102529 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:47:02.104485 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:47:02.104608 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:47:02.106730 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:47:02.107240 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:47:02.107603 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:47:02.113646 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:47:02.117304 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:47:02.117448 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:47:02.120722 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:47:02.120869 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:47:02.122741 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:47:02.124559 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:47:02.128709 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:47:02.128904 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:47:02.131928 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:47:02.132463 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:47:02.132610 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:47:02.135758 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:47:02.135862 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:47:02.138296 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:47:02.138425 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:47:02.141334 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:47:02.141455 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:47:02.146023 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:47:02.147851 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:47:02.151884 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:47:02.154864 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:47:02.157280 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:47:02.158421 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:47:02.164699 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:47:02.164933 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:47:02.180642 ignition[1087]: INFO : Ignition 2.21.0
Sep 12 17:47:02.180642 ignition[1087]: INFO : Stage: umount
Sep 12 17:47:02.180642 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:47:02.183780 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:47:02.183780 ignition[1087]: INFO : umount: umount passed
Sep 12 17:47:02.183780 ignition[1087]: INFO : Ignition finished successfully
Sep 12 17:47:02.184943 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:47:02.185093 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:47:02.188139 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:47:02.188689 systemd[1]: Stopped target network.target - Network.
Sep 12 17:47:02.189127 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:47:02.189183 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:47:02.190910 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:47:02.190957 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:47:02.191265 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:47:02.191310 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:47:02.191611 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:47:02.191673 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:47:02.192218 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:47:02.198941 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:47:02.203026 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:47:02.203160 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:47:02.205153 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:47:02.205209 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:47:02.207752 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:47:02.207961 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:47:02.211528 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 17:47:02.211789 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:47:02.211925 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:47:02.215045 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 17:47:02.215757 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 12 17:47:02.216169 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:47:02.216221 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:47:02.217449 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:47:02.219998 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:47:02.220052 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:47:02.220389 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:47:02.220439 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:47:02.234453 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:47:02.234507 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:47:02.235520 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:47:02.235566 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:47:02.238694 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:47:02.240532 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 17:47:02.240591 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 17:47:02.256285 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:47:02.256409 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:47:02.258535 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:47:02.258721 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:47:02.259694 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:47:02.259758 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:47:02.262183 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:47:02.262226 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:47:02.263391 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:47:02.263437 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:47:02.264219 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:47:02.264261 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:47:02.269656 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:47:02.269708 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:47:02.273783 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:47:02.274957 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 12 17:47:02.275006 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:47:02.280010 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:47:02.280059 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:47:02.283741 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 17:47:02.283796 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:47:02.287124 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:47:02.287168 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:47:02.288236 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:47:02.288279 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:47:02.293874 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 12 17:47:02.293934 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 12 17:47:02.293977 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 12 17:47:02.294023 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 17:47:02.308406 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:47:02.308524 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:47:02.312252 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:47:02.314660 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:47:02.339117 systemd[1]: Switching root.
Sep 12 17:47:02.380171 systemd-journald[219]: Journal stopped
Sep 12 17:47:03.799329 systemd-journald[219]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:47:03.799395 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:47:03.799409 kernel: SELinux: policy capability open_perms=1
Sep 12 17:47:03.799429 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:47:03.799444 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:47:03.799455 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:47:03.799466 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:47:03.799477 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:47:03.799488 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:47:03.799499 kernel: SELinux: policy capability userspace_initial_context=0
Sep 12 17:47:03.799511 kernel: audit: type=1403 audit(1757699222.971:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:47:03.799528 systemd[1]: Successfully loaded SELinux policy in 61.179ms.
Sep 12 17:47:03.799544 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.222ms.
Sep 12 17:47:03.799557 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 17:47:03.799569 systemd[1]: Detected virtualization kvm.
Sep 12 17:47:03.799581 systemd[1]: Detected architecture x86-64.
Sep 12 17:47:03.799592 systemd[1]: Detected first boot.
Sep 12 17:47:03.799604 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:47:03.799616 zram_generator::config[1134]: No configuration found.
Sep 12 17:47:03.799645 kernel: Guest personality initialized and is inactive
Sep 12 17:47:03.799659 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 12 17:47:03.799670 kernel: Initialized host personality
Sep 12 17:47:03.799691 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 17:47:03.799716 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:47:03.799729 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 17:47:03.799741 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 17:47:03.799752 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 17:47:03.799764 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:47:03.799776 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:47:03.799799 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:47:03.799811 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:47:03.799822 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:47:03.799834 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:47:03.799846 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:47:03.799858 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:47:03.799875 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:47:03.799887 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:47:03.799899 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:47:03.799913 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:47:03.799925 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:47:03.799939 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:47:03.799951 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:47:03.799963 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 17:47:03.799977 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:47:03.799989 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:47:03.800003 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 17:47:03.800015 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 17:47:03.800027 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:47:03.800038 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:47:03.800050 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:47:03.800062 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:47:03.800079 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:47:03.800095 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:47:03.800106 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:47:03.800120 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:47:03.800132 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 17:47:03.800144 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:47:03.800155 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:47:03.800167 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:47:03.800178 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:47:03.800190 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:47:03.800204 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:47:03.800216 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:47:03.800230 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:47:03.800242 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:47:03.800254 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:47:03.800266 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:47:03.800278 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:47:03.800290 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:47:03.800302 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:47:03.800314 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:47:03.800326 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:47:03.800339 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:47:03.800351 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:47:03.800363 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:47:03.800374 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:47:03.800386 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:47:03.800398 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:47:03.800410 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:47:03.800422 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 17:47:03.800436 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 17:47:03.800447 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 17:47:03.800460 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 17:47:03.800472 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:47:03.800484 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:47:03.800496 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:47:03.800508 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:47:03.800520 kernel: loop: module loaded
Sep 12 17:47:03.800556 systemd-journald[1198]: Collecting audit messages is disabled.
Sep 12 17:47:03.800580 kernel: fuse: init (API version 7.41)
Sep 12 17:47:03.800592 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:47:03.800604 systemd-journald[1198]: Journal started
Sep 12 17:47:03.800645 systemd-journald[1198]: Runtime Journal (/run/log/journal/3f39a85f7ea1463dbc631a5052e33466) is 6M, max 48.2M, 42.2M free.
Sep 12 17:47:03.495607 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:47:03.515617 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 17:47:03.516150 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 17:47:03.807123 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 17:47:03.811639 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:47:03.813810 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 17:47:03.813886 systemd[1]: Stopped verity-setup.service.
Sep 12 17:47:03.855737 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:47:03.857653 kernel: ACPI: bus type drm_connector registered
Sep 12 17:47:03.858675 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:47:03.860203 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:47:03.861311 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:47:03.862439 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:47:03.863455 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:47:03.864578 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:47:03.865734 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:47:03.869523 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:47:03.871226 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:47:03.871503 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:47:03.873221 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:47:03.873458 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:47:03.875124 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:47:03.875382 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:47:03.876945 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:47:03.877203 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:47:03.878733 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:47:03.878995 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:47:03.880335 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:47:03.880588 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:47:03.882079 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:47:03.883754 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:47:03.898316 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:47:03.900952 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:47:03.903076 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:47:03.904161 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:47:03.904185 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:47:03.906190 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 17:47:03.910523 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:47:03.913325 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:47:03.916231 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:47:03.921084 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:47:03.922756 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:47:03.925762 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:47:03.926944 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:47:03.929188 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 17:47:03.939753 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:47:03.942892 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:47:03.944412 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 17:47:03.945937 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:47:03.953925 systemd-journald[1198]: Time spent on flushing to /var/log/journal/3f39a85f7ea1463dbc631a5052e33466 is 29.193ms for 1046 entries.
Sep 12 17:47:03.953925 systemd-journald[1198]: System Journal (/var/log/journal/3f39a85f7ea1463dbc631a5052e33466) is 8M, max 195.6M, 187.6M free.
Sep 12 17:47:04.006990 systemd-journald[1198]: Received client request to flush runtime journal.
Sep 12 17:47:04.007045 kernel: loop0: detected capacity change from 0 to 224512
Sep 12 17:47:04.011128 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 17:47:03.951148 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:47:03.955376 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 17:47:03.960265 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 17:47:03.966775 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 12 17:47:03.972075 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:47:03.987453 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Sep 12 17:47:03.987466 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Sep 12 17:47:03.989489 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:47:03.999367 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:47:04.006266 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 17:47:04.013859 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 17:47:04.017643 kernel: loop1: detected capacity change from 0 to 111000
Sep 12 17:47:04.016050 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:47:04.025124 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 12 17:47:04.043062 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 17:47:04.046289 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:47:04.048205 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:47:04.051658 kernel: loop2: detected capacity change from 0 to 128016
Sep 12 17:47:04.071598 systemd-tmpfiles[1276]: ACLs are not supported, ignoring.
Sep 12 17:47:04.071995 systemd-tmpfiles[1276]: ACLs are not supported, ignoring.
Sep 12 17:47:04.076277 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:47:04.086664 kernel: loop3: detected capacity change from 0 to 224512
Sep 12 17:47:04.094640 kernel: loop4: detected capacity change from 0 to 111000
Sep 12 17:47:04.102672 kernel: loop5: detected capacity change from 0 to 128016
Sep 12 17:47:04.110364 (sd-merge)[1281]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 12 17:47:04.110923 (sd-merge)[1281]: Merged extensions into '/usr'.
Sep 12 17:47:04.117760 systemd[1]: Reload requested from client PID 1239 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 17:47:04.117777 systemd[1]: Reloading...
Sep 12 17:47:04.443653 zram_generator::config[1307]: No configuration found.
Sep 12 17:47:04.577350 ldconfig[1231]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 17:47:04.645916 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 17:47:04.645993 systemd[1]: Reloading finished in 527 ms.
Sep 12 17:47:04.675818 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 17:47:04.679261 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 17:47:04.695126 systemd[1]: Starting ensure-sysext.service...
Sep 12 17:47:04.697285 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:47:04.708566 systemd[1]: Reload requested from client PID 1344 ('systemctl') (unit ensure-sysext.service)...
Sep 12 17:47:04.708582 systemd[1]: Reloading...
Sep 12 17:47:04.733054 systemd-tmpfiles[1345]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 12 17:47:04.733570 systemd-tmpfiles[1345]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 12 17:47:04.734075 systemd-tmpfiles[1345]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 17:47:04.734434 systemd-tmpfiles[1345]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 17:47:04.735930 systemd-tmpfiles[1345]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 17:47:04.736379 systemd-tmpfiles[1345]: ACLs are not supported, ignoring.
Sep 12 17:47:04.736560 systemd-tmpfiles[1345]: ACLs are not supported, ignoring.
Sep 12 17:47:04.741650 systemd-tmpfiles[1345]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:47:04.741737 systemd-tmpfiles[1345]: Skipping /boot
Sep 12 17:47:04.762012 systemd-tmpfiles[1345]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:47:04.762132 systemd-tmpfiles[1345]: Skipping /boot
Sep 12 17:47:04.781657 zram_generator::config[1372]: No configuration found.
Sep 12 17:47:04.957101 systemd[1]: Reloading finished in 248 ms.
Sep 12 17:47:04.973347 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 17:47:05.002214 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:47:05.010591 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 17:47:05.013476 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 17:47:05.023379 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 17:47:05.027447 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:47:05.030144 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:47:05.032859 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 17:47:05.036925 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:47:05.037825 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:47:05.045478 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:47:05.049802 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:47:05.059346 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:47:05.060527 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:47:05.060654 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:47:05.060780 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:47:05.065640 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 17:47:05.068185 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:47:05.068797 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:47:05.071130 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:47:05.071470 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:47:05.073316 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:47:05.073539 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:47:05.087255 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 17:47:05.094778 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:47:05.095371 augenrules[1443]: No rules
Sep 12 17:47:05.095250 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:47:05.097094 systemd-udevd[1415]: Using default interface naming scheme 'v255'.
Sep 12 17:47:05.097114 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:47:05.102872 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:47:05.115810 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:47:05.118437 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:47:05.119713 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:47:05.119877 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:47:05.121459 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 17:47:05.125411 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 17:47:05.126575 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:47:05.128362 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:47:05.130703 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 17:47:05.131056 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 17:47:05.133926 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 17:47:05.136365 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:47:05.136651 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:47:05.138344 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:47:05.138641 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:47:05.141788 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:47:05.141995 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:47:05.143610 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:47:05.144084 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:47:05.145829 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 17:47:05.157135 systemd[1]: Finished ensure-sysext.service.
Sep 12 17:47:05.169715 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:47:05.170721 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:47:05.170790 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:47:05.172685 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 17:47:05.173924 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:47:05.224137 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 17:47:05.231405 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 17:47:05.381870 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 17:47:05.384552 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 17:47:05.385980 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 12 17:47:05.393642 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 17:47:05.395640 kernel: ACPI: button: Power Button [PWRF]
Sep 12 17:47:05.415045 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 17:47:05.423697 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 12 17:47:05.424078 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 12 17:47:05.424271 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 12 17:47:05.462372 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:47:05.507380 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:47:05.508349 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:47:05.516895 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:47:05.540416 kernel: kvm_amd: TSC scaling supported
Sep 12 17:47:05.540465 kernel: kvm_amd: Nested Virtualization enabled
Sep 12 17:47:05.540504 kernel: kvm_amd: Nested Paging enabled
Sep 12 17:47:05.540516 kernel: kvm_amd: LBR virtualization supported
Sep 12 17:47:05.541566 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 12 17:47:05.541603 kernel: kvm_amd: Virtual GIF supported
Sep 12 17:47:05.557239 systemd-networkd[1489]: lo: Link UP
Sep 12 17:47:05.557249 systemd-networkd[1489]: lo: Gained carrier
Sep 12 17:47:05.558990 systemd-networkd[1489]: Enumeration completed
Sep 12 17:47:05.559109 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:47:05.559969 systemd-networkd[1489]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:47:05.559975 systemd-networkd[1489]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:47:05.561845 systemd-networkd[1489]: eth0: Link UP
Sep 12 17:47:05.562028 systemd-networkd[1489]: eth0: Gained carrier
Sep 12 17:47:05.562042 systemd-networkd[1489]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:47:05.563918 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 12 17:47:05.568677 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 17:47:05.574676 systemd-networkd[1489]: eth0: DHCPv4 address 10.0.0.100/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 17:47:05.597059 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 12 17:47:05.624648 kernel: EDAC MC: Ver: 3.0.0
Sep 12 17:47:05.628879 systemd-resolved[1413]: Positive Trust Anchors:
Sep 12 17:47:05.628894 systemd-resolved[1413]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:47:05.628924 systemd-resolved[1413]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:47:05.638852 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 17:47:05.639015 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 17:47:05.640104 systemd-resolved[1413]: Defaulting to hostname 'linux'.
Sep 12 17:47:06.405671 systemd-timesyncd[1490]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 12 17:47:06.405949 systemd-timesyncd[1490]: Initial clock synchronization to Fri 2025-09-12 17:47:06.405499 UTC.
Sep 12 17:47:06.406461 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:47:06.406575 systemd[1]: Reached target network.target - Network.
Sep 12 17:47:06.406866 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:47:06.407230 systemd-resolved[1413]: Clock change detected. Flushing caches.
Sep 12 17:47:06.411787 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:47:06.413146 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:47:06.414323 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 17:47:06.415597 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 17:47:06.416912 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 12 17:47:06.418436 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 17:47:06.419701 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 17:47:06.420937 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 17:47:06.422179 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 17:47:06.422232 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:47:06.423142 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:47:06.425201 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 17:47:06.428097 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 17:47:06.431760 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 12 17:47:06.433253 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 12 17:47:06.434473 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 12 17:47:06.438415 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 17:47:06.439890 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 12 17:47:06.441866 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 17:47:06.443848 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:47:06.444800 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:47:06.445759 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:47:06.445798 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:47:06.447071 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 17:47:06.449401 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 17:47:06.451498 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 17:47:06.453845 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 17:47:06.456015 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 17:47:06.457001 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 17:47:06.467709 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 12 17:47:06.471341 jq[1543]: false
Sep 12 17:47:06.472560 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 17:47:06.476214 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 17:47:06.479522 oslogin_cache_refresh[1545]: Refreshing passwd entry cache
Sep 12 17:47:06.480695 extend-filesystems[1544]: Found /dev/vda6
Sep 12 17:47:06.482036 google_oslogin_nss_cache[1545]: oslogin_cache_refresh[1545]: Refreshing passwd entry cache
Sep 12 17:47:06.479129 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 17:47:06.483590 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 17:47:06.486824 google_oslogin_nss_cache[1545]: oslogin_cache_refresh[1545]: Failure getting users, quitting
Sep 12 17:47:06.486824 google_oslogin_nss_cache[1545]: oslogin_cache_refresh[1545]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 12 17:47:06.486824 google_oslogin_nss_cache[1545]: oslogin_cache_refresh[1545]: Refreshing group entry cache
Sep 12 17:47:06.486357 oslogin_cache_refresh[1545]: Failure getting users, quitting
Sep 12 17:47:06.486397 oslogin_cache_refresh[1545]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 12 17:47:06.486451 oslogin_cache_refresh[1545]: Refreshing group entry cache
Sep 12 17:47:06.487727 extend-filesystems[1544]: Found /dev/vda9
Sep 12 17:47:06.489930 extend-filesystems[1544]: Checking size of /dev/vda9
Sep 12 17:47:06.491535 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 17:47:06.494557 google_oslogin_nss_cache[1545]: oslogin_cache_refresh[1545]: Failure getting groups, quitting
Sep 12 17:47:06.494557 google_oslogin_nss_cache[1545]: oslogin_cache_refresh[1545]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 12 17:47:06.493886 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 17:47:06.493300 oslogin_cache_refresh[1545]: Failure getting groups, quitting
Sep 12 17:47:06.494425 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 17:47:06.493310 oslogin_cache_refresh[1545]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 12 17:47:06.494982 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 17:47:06.496825 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 17:47:06.501914 extend-filesystems[1544]: Resized partition /dev/vda9
Sep 12 17:47:06.503693 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 17:47:06.505640 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 17:47:06.505949 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 17:47:06.506328 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 12 17:47:06.507475 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 12 17:47:06.508049 extend-filesystems[1569]: resize2fs 1.47.2 (1-Jan-2025)
Sep 12 17:47:06.510670 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 17:47:06.510975 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 17:47:06.513408 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 12 17:47:06.514305 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 17:47:06.514691 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 17:47:06.518738 jq[1564]: true
Sep 12 17:47:06.534675 update_engine[1563]: I20250912 17:47:06.534469 1563 main.cc:92] Flatcar Update Engine starting
Sep 12 17:47:06.536406 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 12 17:47:06.538823 (ntainerd)[1572]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 17:47:06.563815 jq[1577]: true
Sep 12 17:47:06.569403 extend-filesystems[1569]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 12 17:47:06.569403 extend-filesystems[1569]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 12 17:47:06.569403 extend-filesystems[1569]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 12 17:47:06.569147 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 17:47:06.571944 extend-filesystems[1544]: Resized filesystem in /dev/vda9
Sep 12 17:47:06.571502 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 17:47:06.577624 tar[1571]: linux-amd64/LICENSE
Sep 12 17:47:06.578487 tar[1571]: linux-amd64/helm
Sep 12 17:47:06.587688 dbus-daemon[1541]: [system] SELinux support is enabled
Sep 12 17:47:06.589290 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 17:47:06.593819 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 17:47:06.593843 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 17:47:06.595302 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 17:47:06.595320 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 17:47:06.597305 update_engine[1563]: I20250912 17:47:06.597249 1563 update_check_scheduler.cc:74] Next update check in 2m30s
Sep 12 17:47:06.597453 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 17:47:06.599816 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 17:47:06.601364 systemd-logind[1557]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 12 17:47:06.601413 systemd-logind[1557]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 12 17:47:06.601818 systemd-logind[1557]: New seat seat0.
Sep 12 17:47:06.605597 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 17:47:06.627074 bash[1605]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 17:47:06.629527 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 17:47:06.632682 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 12 17:47:06.663574 locksmithd[1604]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 17:47:06.877354 sshd_keygen[1574]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 17:47:06.909347 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 12 17:47:06.942336 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 12 17:47:06.962062 systemd[1]: issuegen.service: Deactivated successfully.
Sep 12 17:47:06.962437 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 12 17:47:06.967644 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 12 17:47:06.979593 containerd[1572]: time="2025-09-12T17:47:06Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 12 17:47:06.981108 containerd[1572]: time="2025-09-12T17:47:06.981078133Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 12 17:47:07.017014 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 12 17:47:07.023725 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 12 17:47:07.024401 containerd[1572]: time="2025-09-12T17:47:07.024274056Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.226µs"
Sep 12 17:47:07.024440 containerd[1572]: time="2025-09-12T17:47:07.024408878Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 12 17:47:07.024478 containerd[1572]: time="2025-09-12T17:47:07.024453131Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 12 17:47:07.024864 containerd[1572]: time="2025-09-12T17:47:07.024747443Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 12 17:47:07.025078 containerd[1572]: time="2025-09-12T17:47:07.025057364Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 12 17:47:07.025173 containerd[1572]: time="2025-09-12T17:47:07.025159436Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 17:47:07.025357 containerd[1572]: time="2025-09-12T17:47:07.025320979Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 17:47:07.025357 containerd[1572]: time="2025-09-12T17:47:07.025337690Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 17:47:07.025699 containerd[1572]: time="2025-09-12T17:47:07.025663701Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 17:47:07.025699 containerd[1572]: time="2025-09-12T17:47:07.025682346Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 17:47:07.025699 containerd[1572]: time="2025-09-12T17:47:07.025692545Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 17:47:07.025699 containerd[1572]: time="2025-09-12T17:47:07.025700731Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 12 17:47:07.025822 containerd[1572]: time="2025-09-12T17:47:07.025803293Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 12 17:47:07.028528 containerd[1572]: time="2025-09-12T17:47:07.026048382Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 17:47:07.028528 containerd[1572]: time="2025-09-12T17:47:07.026087897Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 17:47:07.028528 containerd[1572]: time="2025-09-12T17:47:07.026098617Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 12 17:47:07.028528 containerd[1572]: time="2025-09-12T17:47:07.026136688Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 12 17:47:07.028528 containerd[1572]: time="2025-09-12T17:47:07.026339859Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 12 17:47:07.028528 containerd[1572]: time="2025-09-12T17:47:07.026425189Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 17:47:07.026490 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 12 17:47:07.027835 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:47:07.034891 containerd[1572]: time="2025-09-12T17:47:07.034846729Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 12 17:47:07.034957 containerd[1572]: time="2025-09-12T17:47:07.034900821Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 12 17:47:07.034957 containerd[1572]: time="2025-09-12T17:47:07.034918324Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 12 17:47:07.034957 containerd[1572]: time="2025-09-12T17:47:07.034944633Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 12 17:47:07.034957 containerd[1572]: time="2025-09-12T17:47:07.034958579Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 12 17:47:07.035078 containerd[1572]: time="2025-09-12T17:47:07.034971874Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 12 17:47:07.035078 containerd[1572]: time="2025-09-12T17:47:07.034988084Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 12 17:47:07.035078 containerd[1572]: time="2025-09-12T17:47:07.035001369Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 12 17:47:07.035078 containerd[1572]: time="2025-09-12T17:47:07.035013893Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 12 17:47:07.035078 containerd[1572]: time="2025-09-12T17:47:07.035026016Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 12 17:47:07.035078 containerd[1572]: time="2025-09-12T17:47:07.035035042Z" level=info 
msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 12 17:47:07.035078 containerd[1572]: time="2025-09-12T17:47:07.035049349Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 12 17:47:07.035247 containerd[1572]: time="2025-09-12T17:47:07.035182729Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 12 17:47:07.035247 containerd[1572]: time="2025-09-12T17:47:07.035206544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 12 17:47:07.035247 containerd[1572]: time="2025-09-12T17:47:07.035232272Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 12 17:47:07.035247 containerd[1572]: time="2025-09-12T17:47:07.035246349Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 12 17:47:07.035352 containerd[1572]: time="2025-09-12T17:47:07.035259173Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 12 17:47:07.035352 containerd[1572]: time="2025-09-12T17:47:07.035271786Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 12 17:47:07.035352 containerd[1572]: time="2025-09-12T17:47:07.035285141Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 12 17:47:07.035352 containerd[1572]: time="2025-09-12T17:47:07.035297254Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 17:47:07.035352 containerd[1572]: time="2025-09-12T17:47:07.035322531Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 17:47:07.035352 containerd[1572]: time="2025-09-12T17:47:07.035347849Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 
Sep 12 17:47:07.035555 containerd[1572]: time="2025-09-12T17:47:07.035360613Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 17:47:07.035555 containerd[1572]: time="2025-09-12T17:47:07.035519791Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 17:47:07.035555 containerd[1572]: time="2025-09-12T17:47:07.035547193Z" level=info msg="Start snapshots syncer" Sep 12 17:47:07.035629 containerd[1572]: time="2025-09-12T17:47:07.035596014Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 17:47:07.036028 containerd[1572]: time="2025-09-12T17:47:07.035960778Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefine
dVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 17:47:07.036142 containerd[1572]: time="2025-09-12T17:47:07.036041840Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 17:47:07.039218 containerd[1572]: time="2025-09-12T17:47:07.039177800Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 17:47:07.039344 containerd[1572]: time="2025-09-12T17:47:07.039309567Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 17:47:07.039380 containerd[1572]: time="2025-09-12T17:47:07.039360423Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 17:47:07.039380 containerd[1572]: time="2025-09-12T17:47:07.039375300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 17:47:07.039461 containerd[1572]: time="2025-09-12T17:47:07.039405066Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 17:47:07.039461 containerd[1572]: time="2025-09-12T17:47:07.039419012Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 17:47:07.039461 containerd[1572]: time="2025-09-12T17:47:07.039429492Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 17:47:07.039461 containerd[1572]: 
time="2025-09-12T17:47:07.039450872Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 17:47:07.039569 containerd[1572]: time="2025-09-12T17:47:07.039473074Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 17:47:07.039569 containerd[1572]: time="2025-09-12T17:47:07.039483924Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 17:47:07.039569 containerd[1572]: time="2025-09-12T17:47:07.039515794Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 17:47:07.039569 containerd[1572]: time="2025-09-12T17:47:07.039558975Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:47:07.039569 containerd[1572]: time="2025-09-12T17:47:07.039572150Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:47:07.039700 containerd[1572]: time="2025-09-12T17:47:07.039580766Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:47:07.039700 containerd[1572]: time="2025-09-12T17:47:07.039590043Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:47:07.039700 containerd[1572]: time="2025-09-12T17:47:07.039597317Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 17:47:07.039700 containerd[1572]: time="2025-09-12T17:47:07.039606053Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 17:47:07.039700 containerd[1572]: time="2025-09-12T17:47:07.039620160Z" level=info msg="loading plugin" 
id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 17:47:07.039700 containerd[1572]: time="2025-09-12T17:47:07.039637091Z" level=info msg="runtime interface created" Sep 12 17:47:07.039700 containerd[1572]: time="2025-09-12T17:47:07.039642311Z" level=info msg="created NRI interface" Sep 12 17:47:07.039700 containerd[1572]: time="2025-09-12T17:47:07.039649755Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 17:47:07.039700 containerd[1572]: time="2025-09-12T17:47:07.039674361Z" level=info msg="Connect containerd service" Sep 12 17:47:07.039700 containerd[1572]: time="2025-09-12T17:47:07.039696543Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:47:07.040787 containerd[1572]: time="2025-09-12T17:47:07.040746972Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:47:07.049411 tar[1571]: linux-amd64/README.md Sep 12 17:47:07.069558 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:47:07.233319 containerd[1572]: time="2025-09-12T17:47:07.233182509Z" level=info msg="Start subscribing containerd event" Sep 12 17:47:07.233433 containerd[1572]: time="2025-09-12T17:47:07.233354301Z" level=info msg="Start recovering state" Sep 12 17:47:07.233544 containerd[1572]: time="2025-09-12T17:47:07.233496267Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:47:07.233569 containerd[1572]: time="2025-09-12T17:47:07.233554687Z" level=info msg="Start event monitor" Sep 12 17:47:07.233591 containerd[1572]: time="2025-09-12T17:47:07.233579854Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:47:07.233630 containerd[1572]: time="2025-09-12T17:47:07.233582188Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 12 17:47:07.233652 containerd[1572]: time="2025-09-12T17:47:07.233616453Z" level=info msg="Start streaming server" Sep 12 17:47:07.233652 containerd[1572]: time="2025-09-12T17:47:07.233642792Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 17:47:07.233708 containerd[1572]: time="2025-09-12T17:47:07.233653191Z" level=info msg="runtime interface starting up..." Sep 12 17:47:07.233708 containerd[1572]: time="2025-09-12T17:47:07.233661657Z" level=info msg="starting plugins..." Sep 12 17:47:07.233708 containerd[1572]: time="2025-09-12T17:47:07.233687155Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 17:47:07.234019 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:47:07.235255 containerd[1572]: time="2025-09-12T17:47:07.235220630Z" level=info msg="containerd successfully booted in 0.256171s" Sep 12 17:47:08.224872 systemd-networkd[1489]: eth0: Gained IPv6LL Sep 12 17:47:08.228142 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:47:08.230129 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:47:08.232837 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 17:47:08.235404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:47:08.245625 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:47:08.266043 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 17:47:08.266526 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 17:47:08.268256 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:47:08.276026 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:47:08.741740 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Sep 12 17:47:08.744160 systemd[1]: Started sshd@0-10.0.0.100:22-10.0.0.1:41002.service - OpenSSH per-connection server daemon (10.0.0.1:41002). Sep 12 17:47:08.831478 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 41002 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:47:08.833592 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:08.840780 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:47:08.842934 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:47:08.850893 systemd-logind[1557]: New session 1 of user core. Sep 12 17:47:08.939775 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:47:08.944851 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:47:08.974670 (systemd)[1678]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:47:08.977571 systemd-logind[1557]: New session c1 of user core. Sep 12 17:47:09.205481 systemd[1678]: Queued start job for default target default.target. Sep 12 17:47:09.302983 systemd[1678]: Created slice app.slice - User Application Slice. Sep 12 17:47:09.303014 systemd[1678]: Reached target paths.target - Paths. Sep 12 17:47:09.303062 systemd[1678]: Reached target timers.target - Timers. Sep 12 17:47:09.304723 systemd[1678]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:47:09.318787 systemd[1678]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:47:09.318942 systemd[1678]: Reached target sockets.target - Sockets. Sep 12 17:47:09.318989 systemd[1678]: Reached target basic.target - Basic System. Sep 12 17:47:09.319037 systemd[1678]: Reached target default.target - Main User Target. Sep 12 17:47:09.319079 systemd[1678]: Startup finished in 332ms. 
Sep 12 17:47:09.319411 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:47:09.322330 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:47:09.390776 systemd[1]: Started sshd@1-10.0.0.100:22-10.0.0.1:41010.service - OpenSSH per-connection server daemon (10.0.0.1:41010). Sep 12 17:47:09.463750 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 41010 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:47:09.517644 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:09.522474 systemd-logind[1557]: New session 2 of user core. Sep 12 17:47:09.529524 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:47:09.614214 sshd[1692]: Connection closed by 10.0.0.1 port 41010 Sep 12 17:47:09.614769 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Sep 12 17:47:09.627007 systemd[1]: sshd@1-10.0.0.100:22-10.0.0.1:41010.service: Deactivated successfully. Sep 12 17:47:09.628867 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:47:09.629723 systemd-logind[1557]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:47:09.632529 systemd[1]: Started sshd@2-10.0.0.100:22-10.0.0.1:41012.service - OpenSSH per-connection server daemon (10.0.0.1:41012). Sep 12 17:47:09.635064 systemd-logind[1557]: Removed session 2. Sep 12 17:47:09.760331 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 41012 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:47:09.762295 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:09.767253 systemd-logind[1557]: New session 3 of user core. Sep 12 17:47:09.778536 systemd[1]: Started session-3.scope - Session 3 of User core. 
Sep 12 17:47:09.836858 sshd[1701]: Connection closed by 10.0.0.1 port 41012 Sep 12 17:47:09.837219 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Sep 12 17:47:09.842032 systemd[1]: sshd@2-10.0.0.100:22-10.0.0.1:41012.service: Deactivated successfully. Sep 12 17:47:09.844535 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:47:09.845287 systemd-logind[1557]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:47:09.847318 systemd-logind[1557]: Removed session 3. Sep 12 17:47:09.974132 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:47:09.976017 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:47:09.978194 systemd[1]: Startup finished in 2.884s (kernel) + 9.333s (initrd) + 6.301s (userspace) = 18.519s. Sep 12 17:47:09.990878 (kubelet)[1711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:47:10.764406 kubelet[1711]: E0912 17:47:10.764322 1711 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:47:10.768658 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:47:10.768888 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:47:10.769289 systemd[1]: kubelet.service: Consumed 2.325s CPU time, 264.8M memory peak. Sep 12 17:47:19.852768 systemd[1]: Started sshd@3-10.0.0.100:22-10.0.0.1:53714.service - OpenSSH per-connection server daemon (10.0.0.1:53714). 
Sep 12 17:47:19.918544 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 53714 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:47:19.920309 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:19.924958 systemd-logind[1557]: New session 4 of user core. Sep 12 17:47:19.934561 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:47:19.988796 sshd[1727]: Connection closed by 10.0.0.1 port 53714 Sep 12 17:47:19.989147 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Sep 12 17:47:20.001762 systemd[1]: sshd@3-10.0.0.100:22-10.0.0.1:53714.service: Deactivated successfully. Sep 12 17:47:20.003571 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:47:20.004342 systemd-logind[1557]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:47:20.006840 systemd[1]: Started sshd@4-10.0.0.100:22-10.0.0.1:60068.service - OpenSSH per-connection server daemon (10.0.0.1:60068). Sep 12 17:47:20.007606 systemd-logind[1557]: Removed session 4. Sep 12 17:47:20.063693 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 60068 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:47:20.065295 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:20.069648 systemd-logind[1557]: New session 5 of user core. Sep 12 17:47:20.083514 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:47:20.132520 sshd[1736]: Connection closed by 10.0.0.1 port 60068 Sep 12 17:47:20.132862 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Sep 12 17:47:20.141818 systemd[1]: sshd@4-10.0.0.100:22-10.0.0.1:60068.service: Deactivated successfully. Sep 12 17:47:20.143513 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:47:20.144202 systemd-logind[1557]: Session 5 logged out. Waiting for processes to exit. 
Sep 12 17:47:20.146892 systemd[1]: Started sshd@5-10.0.0.100:22-10.0.0.1:60080.service - OpenSSH per-connection server daemon (10.0.0.1:60080). Sep 12 17:47:20.147448 systemd-logind[1557]: Removed session 5. Sep 12 17:47:20.211263 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 60080 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:47:20.213819 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:20.219363 systemd-logind[1557]: New session 6 of user core. Sep 12 17:47:20.231519 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:47:20.285224 sshd[1745]: Connection closed by 10.0.0.1 port 60080 Sep 12 17:47:20.285641 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Sep 12 17:47:20.293893 systemd[1]: sshd@5-10.0.0.100:22-10.0.0.1:60080.service: Deactivated successfully. Sep 12 17:47:20.295664 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:47:20.296524 systemd-logind[1557]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:47:20.299348 systemd[1]: Started sshd@6-10.0.0.100:22-10.0.0.1:60082.service - OpenSSH per-connection server daemon (10.0.0.1:60082). Sep 12 17:47:20.300004 systemd-logind[1557]: Removed session 6. Sep 12 17:47:20.360134 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 60082 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:47:20.362063 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:20.366566 systemd-logind[1557]: New session 7 of user core. Sep 12 17:47:20.380619 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 12 17:47:20.439804 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:47:20.440120 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:47:20.461247 sudo[1755]: pam_unix(sudo:session): session closed for user root Sep 12 17:47:20.463062 sshd[1754]: Connection closed by 10.0.0.1 port 60082 Sep 12 17:47:20.463468 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Sep 12 17:47:20.487283 systemd[1]: sshd@6-10.0.0.100:22-10.0.0.1:60082.service: Deactivated successfully. Sep 12 17:47:20.489617 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:47:20.490487 systemd-logind[1557]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:47:20.493744 systemd[1]: Started sshd@7-10.0.0.100:22-10.0.0.1:60098.service - OpenSSH per-connection server daemon (10.0.0.1:60098). Sep 12 17:47:20.494507 systemd-logind[1557]: Removed session 7. Sep 12 17:47:20.564306 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 60098 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:47:20.566238 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:20.571284 systemd-logind[1557]: New session 8 of user core. Sep 12 17:47:20.584559 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 12 17:47:20.638628 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:47:20.638920 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:47:20.645168 sudo[1766]: pam_unix(sudo:session): session closed for user root Sep 12 17:47:20.650938 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 17:47:20.651233 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:47:20.661649 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:47:20.704258 augenrules[1788]: No rules Sep 12 17:47:20.705919 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:47:20.706221 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:47:20.707363 sudo[1765]: pam_unix(sudo:session): session closed for user root Sep 12 17:47:20.709581 sshd[1764]: Connection closed by 10.0.0.1 port 60098 Sep 12 17:47:20.709819 sshd-session[1761]: pam_unix(sshd:session): session closed for user core Sep 12 17:47:20.723077 systemd[1]: sshd@7-10.0.0.100:22-10.0.0.1:60098.service: Deactivated successfully. Sep 12 17:47:20.725291 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:47:20.726082 systemd-logind[1557]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:47:20.729123 systemd[1]: Started sshd@8-10.0.0.100:22-10.0.0.1:60114.service - OpenSSH per-connection server daemon (10.0.0.1:60114). Sep 12 17:47:20.729844 systemd-logind[1557]: Removed session 8. Sep 12 17:47:20.774353 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:47:20.775936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 12 17:47:20.790399 sshd[1797]: Accepted publickey for core from 10.0.0.1 port 60114 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:47:20.791880 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:20.796122 systemd-logind[1557]: New session 9 of user core. Sep 12 17:47:20.798616 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:47:20.852653 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:47:20.853017 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:47:21.097876 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:47:21.116966 (kubelet)[1819]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:47:21.280413 kubelet[1819]: E0912 17:47:21.280320 1819 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:47:21.288475 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:47:21.288663 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:47:21.289029 systemd[1]: kubelet.service: Consumed 465ms CPU time, 111.3M memory peak. Sep 12 17:47:21.784886 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 12 17:47:21.805903 (dockerd)[1839]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:47:22.393042 dockerd[1839]: time="2025-09-12T17:47:22.392957593Z" level=info msg="Starting up" Sep 12 17:47:22.397994 dockerd[1839]: time="2025-09-12T17:47:22.397902595Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 17:47:22.410097 dockerd[1839]: time="2025-09-12T17:47:22.410041055Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 12 17:47:22.868405 dockerd[1839]: time="2025-09-12T17:47:22.868316241Z" level=info msg="Loading containers: start." Sep 12 17:47:22.999415 kernel: Initializing XFRM netlink socket Sep 12 17:47:23.430070 systemd-networkd[1489]: docker0: Link UP Sep 12 17:47:23.508883 dockerd[1839]: time="2025-09-12T17:47:23.508823662Z" level=info msg="Loading containers: done." Sep 12 17:47:23.524789 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck518801668-merged.mount: Deactivated successfully. 
Sep 12 17:47:23.768843 dockerd[1839]: time="2025-09-12T17:47:23.768700831Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:47:23.769110 dockerd[1839]: time="2025-09-12T17:47:23.769064012Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 12 17:47:23.769273 dockerd[1839]: time="2025-09-12T17:47:23.769232037Z" level=info msg="Initializing buildkit" Sep 12 17:47:24.018157 dockerd[1839]: time="2025-09-12T17:47:24.018061523Z" level=info msg="Completed buildkit initialization" Sep 12 17:47:24.023742 dockerd[1839]: time="2025-09-12T17:47:24.023653548Z" level=info msg="Daemon has completed initialization" Sep 12 17:47:24.023819 dockerd[1839]: time="2025-09-12T17:47:24.023746272Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:47:24.023959 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:47:25.033532 containerd[1572]: time="2025-09-12T17:47:25.033472047Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 12 17:47:26.985932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3178362844.mount: Deactivated successfully. 
Sep 12 17:47:29.342564 containerd[1572]: time="2025-09-12T17:47:29.342492509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:47:29.343195 containerd[1572]: time="2025-09-12T17:47:29.343138380Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916"
Sep 12 17:47:29.344276 containerd[1572]: time="2025-09-12T17:47:29.344225258Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:47:29.347609 containerd[1572]: time="2025-09-12T17:47:29.347560091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:47:29.348555 containerd[1572]: time="2025-09-12T17:47:29.348527385Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 4.314988823s"
Sep 12 17:47:29.348555 containerd[1572]: time="2025-09-12T17:47:29.348561639Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Sep 12 17:47:29.349421 containerd[1572]: time="2025-09-12T17:47:29.349341411Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Sep 12 17:47:31.149400 containerd[1572]: time="2025-09-12T17:47:31.149330159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:47:31.150184 containerd[1572]: time="2025-09-12T17:47:31.150135780Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027"
Sep 12 17:47:31.151266 containerd[1572]: time="2025-09-12T17:47:31.151236253Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:47:31.154348 containerd[1572]: time="2025-09-12T17:47:31.154319504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:47:31.155256 containerd[1572]: time="2025-09-12T17:47:31.155225363Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.805846261s"
Sep 12 17:47:31.155307 containerd[1572]: time="2025-09-12T17:47:31.155256612Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Sep 12 17:47:31.155762 containerd[1572]: time="2025-09-12T17:47:31.155744156Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 12 17:47:31.323167 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 12 17:47:31.325291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:47:31.527419 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:47:31.542647 (kubelet)[2129]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:47:31.716290 kubelet[2129]: E0912 17:47:31.716205 2129 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:47:31.720420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:47:31.720619 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:47:31.721001 systemd[1]: kubelet.service: Consumed 368ms CPU time, 110.7M memory peak.
Sep 12 17:47:34.483638 containerd[1572]: time="2025-09-12T17:47:34.483556513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:47:34.485209 containerd[1572]: time="2025-09-12T17:47:34.485147175Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289"
Sep 12 17:47:34.489049 containerd[1572]: time="2025-09-12T17:47:34.489020638Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:47:34.495119 containerd[1572]: time="2025-09-12T17:47:34.495059461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:47:34.496070 containerd[1572]: time="2025-09-12T17:47:34.496032926Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 3.340263173s"
Sep 12 17:47:34.496070 containerd[1572]: time="2025-09-12T17:47:34.496068413Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Sep 12 17:47:34.496651 containerd[1572]: time="2025-09-12T17:47:34.496622061Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Sep 12 17:47:35.863547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount678410368.mount: Deactivated successfully.
Sep 12 17:47:36.309177 containerd[1572]: time="2025-09-12T17:47:36.309044212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:47:36.310301 containerd[1572]: time="2025-09-12T17:47:36.310254581Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206"
Sep 12 17:47:36.311992 containerd[1572]: time="2025-09-12T17:47:36.311950732Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:47:36.314465 containerd[1572]: time="2025-09-12T17:47:36.314430912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:47:36.315012 containerd[1572]: time="2025-09-12T17:47:36.314975744Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.818322293s"
Sep 12 17:47:36.315063 containerd[1572]: time="2025-09-12T17:47:36.315008445Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Sep 12 17:47:36.315529 containerd[1572]: time="2025-09-12T17:47:36.315506990Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 12 17:47:37.293933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3766363935.mount: Deactivated successfully.
Sep 12 17:47:39.216352 containerd[1572]: time="2025-09-12T17:47:39.216272784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:47:39.218571 containerd[1572]: time="2025-09-12T17:47:39.218542027Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 12 17:47:39.220218 containerd[1572]: time="2025-09-12T17:47:39.220186161Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:47:39.224633 containerd[1572]: time="2025-09-12T17:47:39.224593718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:47:39.225511 containerd[1572]: time="2025-09-12T17:47:39.225483765Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.909952069s"
Sep 12 17:47:39.225565 containerd[1572]: time="2025-09-12T17:47:39.225513673Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 12 17:47:39.226036 containerd[1572]: time="2025-09-12T17:47:39.226007369Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 17:47:39.812014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2701939226.mount: Deactivated successfully.
Sep 12 17:47:39.819153 containerd[1572]: time="2025-09-12T17:47:39.819096028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:47:39.820561 containerd[1572]: time="2025-09-12T17:47:39.819849113Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 12 17:47:39.822118 containerd[1572]: time="2025-09-12T17:47:39.822089061Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:47:39.825158 containerd[1572]: time="2025-09-12T17:47:39.825105227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:47:39.825606 containerd[1572]: time="2025-09-12T17:47:39.825577693Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 599.535958ms"
Sep 12 17:47:39.825647 containerd[1572]: time="2025-09-12T17:47:39.825611137Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 12 17:47:39.826152 containerd[1572]: time="2025-09-12T17:47:39.826119573Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 12 17:47:40.335575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount158279856.mount: Deactivated successfully.
Sep 12 17:47:41.823179 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 12 17:47:41.825076 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:47:42.271122 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:47:42.278688 (kubelet)[2267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:47:44.294357 kubelet[2267]: E0912 17:47:44.294289 2267 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:47:44.298802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:47:44.299041 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:47:44.299499 systemd[1]: kubelet.service: Consumed 256ms CPU time, 110.7M memory peak.
Sep 12 17:47:45.109464 containerd[1572]: time="2025-09-12T17:47:45.109378218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:47:45.111706 containerd[1572]: time="2025-09-12T17:47:45.111644114Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Sep 12 17:47:45.112998 containerd[1572]: time="2025-09-12T17:47:45.112964890Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:47:45.116208 containerd[1572]: time="2025-09-12T17:47:45.116162630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:47:45.117338 containerd[1572]: time="2025-09-12T17:47:45.117263577Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.291092405s"
Sep 12 17:47:45.117439 containerd[1572]: time="2025-09-12T17:47:45.117346865Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Sep 12 17:47:48.470880 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:47:48.471075 systemd[1]: kubelet.service: Consumed 256ms CPU time, 110.7M memory peak.
Sep 12 17:47:48.473423 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:47:48.519377 systemd[1]: Reload requested from client PID 2306 ('systemctl') (unit session-9.scope)...
Sep 12 17:47:48.519421 systemd[1]: Reloading...
Sep 12 17:47:48.621421 zram_generator::config[2352]: No configuration found.
Sep 12 17:47:49.159415 systemd[1]: Reloading finished in 639 ms.
Sep 12 17:47:49.232579 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 12 17:47:49.232691 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 12 17:47:49.233045 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:47:49.233120 systemd[1]: kubelet.service: Consumed 165ms CPU time, 98.3M memory peak.
Sep 12 17:47:49.234891 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:47:49.411124 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:47:49.421727 (kubelet)[2397]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:47:49.471738 kubelet[2397]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:47:49.471738 kubelet[2397]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:47:49.471738 kubelet[2397]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:47:49.472165 kubelet[2397]: I0912 17:47:49.471810 2397 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:47:49.810063 kubelet[2397]: I0912 17:47:49.809929 2397 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 12 17:47:49.810063 kubelet[2397]: I0912 17:47:49.809960 2397 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:47:49.810233 kubelet[2397]: I0912 17:47:49.810217 2397 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 12 17:47:49.841638 kubelet[2397]: E0912 17:47:49.841595 2397 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:47:49.844233 kubelet[2397]: I0912 17:47:49.844179 2397 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:47:49.852610 kubelet[2397]: I0912 17:47:49.852591 2397 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 12 17:47:49.858346 kubelet[2397]: I0912 17:47:49.858008 2397 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:47:49.859477 kubelet[2397]: I0912 17:47:49.859442 2397 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:47:49.860060 kubelet[2397]: I0912 17:47:49.859542 2397 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 17:47:49.860164 kubelet[2397]: I0912 17:47:49.860073 2397 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:47:49.860164 kubelet[2397]: I0912 17:47:49.860084 2397 container_manager_linux.go:304] "Creating device plugin manager"
Sep 12 17:47:49.860253 kubelet[2397]: I0912 17:47:49.860239 2397 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:47:49.862992 kubelet[2397]: I0912 17:47:49.862970 2397 kubelet.go:446] "Attempting to sync node with API server"
Sep 12 17:47:49.863042 kubelet[2397]: I0912 17:47:49.862998 2397 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:47:49.863042 kubelet[2397]: I0912 17:47:49.863033 2397 kubelet.go:352] "Adding apiserver pod source"
Sep 12 17:47:49.863094 kubelet[2397]: I0912 17:47:49.863047 2397 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:47:49.866146 kubelet[2397]: I0912 17:47:49.865684 2397 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 12 17:47:49.866146 kubelet[2397]: I0912 17:47:49.866033 2397 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 17:47:49.866146 kubelet[2397]: W0912 17:47:49.866082 2397 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused
Sep 12 17:47:49.866146 kubelet[2397]: W0912 17:47:49.866080 2397 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused
Sep 12 17:47:49.866146 kubelet[2397]: E0912 17:47:49.866130 2397 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:47:49.866273 kubelet[2397]: E0912 17:47:49.866164 2397 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:47:49.866828 kubelet[2397]: W0912 17:47:49.866803 2397 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 17:47:49.868823 kubelet[2397]: I0912 17:47:49.868800 2397 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 17:47:49.868878 kubelet[2397]: I0912 17:47:49.868836 2397 server.go:1287] "Started kubelet"
Sep 12 17:47:49.870401 kubelet[2397]: I0912 17:47:49.869834 2397 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:47:49.870807 kubelet[2397]: I0912 17:47:49.870791 2397 server.go:479] "Adding debug handlers to kubelet server"
Sep 12 17:47:49.872422 kubelet[2397]: I0912 17:47:49.872150 2397 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:47:49.872516 kubelet[2397]: I0912 17:47:49.872493 2397 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:47:49.873433 kubelet[2397]: I0912 17:47:49.872939 2397 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:47:49.873433 kubelet[2397]: I0912 17:47:49.873024 2397 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 17:47:49.873433 kubelet[2397]: I0912 17:47:49.873097 2397 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 17:47:49.873533 kubelet[2397]: E0912 17:47:49.873443 2397 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 17:47:49.874081 kubelet[2397]: I0912 17:47:49.873846 2397 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:47:49.874081 kubelet[2397]: I0912 17:47:49.873872 2397 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 17:47:49.874952 kubelet[2397]: E0912 17:47:49.874584 2397 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="200ms"
Sep 12 17:47:49.874952 kubelet[2397]: W0912 17:47:49.874666 2397 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused
Sep 12 17:47:49.874952 kubelet[2397]: E0912 17:47:49.874705 2397 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:47:49.874952 kubelet[2397]: E0912 17:47:49.874890 2397 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 17:47:49.875079 kubelet[2397]: I0912 17:47:49.874975 2397 factory.go:221] Registration of the systemd container factory successfully
Sep 12 17:47:49.875079 kubelet[2397]: I0912 17:47:49.875046 2397 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 17:47:49.876427 kubelet[2397]: I0912 17:47:49.876072 2397 factory.go:221] Registration of the containerd container factory successfully
Sep 12 17:47:49.876427 kubelet[2397]: E0912 17:47:49.875365 2397 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.100:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.100:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18649a2c5a7e7c54 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:47:49.868813396 +0000 UTC m=+0.443074903,LastTimestamp:2025-09-12 17:47:49.868813396 +0000 UTC m=+0.443074903,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 12 17:47:49.891092 kubelet[2397]: I0912 17:47:49.891052 2397 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 17:47:49.891092 kubelet[2397]: I0912 17:47:49.891072 2397 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 17:47:49.891092 kubelet[2397]: I0912 17:47:49.891092 2397 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:47:49.894110 kubelet[2397]: I0912 17:47:49.894084 2397 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 12 17:47:49.895459 kubelet[2397]: I0912 17:47:49.895431 2397 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 12 17:47:49.895500 kubelet[2397]: I0912 17:47:49.895469 2397 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 12 17:47:49.895500 kubelet[2397]: I0912 17:47:49.895496 2397 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 17:47:49.895541 kubelet[2397]: I0912 17:47:49.895503 2397 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 12 17:47:49.895579 kubelet[2397]: E0912 17:47:49.895559 2397 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 17:47:49.896408 kubelet[2397]: W0912 17:47:49.896010 2397 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused
Sep 12 17:47:49.896408 kubelet[2397]: E0912 17:47:49.896055 2397 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:47:49.974123 kubelet[2397]: E0912 17:47:49.974077 2397 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 17:47:49.996464 kubelet[2397]: E0912 17:47:49.996419 2397 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 17:47:50.074772 kubelet[2397]: E0912 17:47:50.074699 2397 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 17:47:50.075083 kubelet[2397]: E0912 17:47:50.075049 2397 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="400ms"
Sep 12 17:47:50.175478 kubelet[2397]: E0912 17:47:50.175433 2397 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 17:47:50.196717 kubelet[2397]: E0912 17:47:50.196670 2397 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 17:47:50.232729 kubelet[2397]: I0912 17:47:50.232692 2397 policy_none.go:49] "None policy: Start"
Sep 12 17:47:50.232817 kubelet[2397]: I0912 17:47:50.232748 2397 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 17:47:50.232817 kubelet[2397]: I0912 17:47:50.232773 2397 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 17:47:50.238407 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 12 17:47:50.252614 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 12 17:47:50.255625 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 12 17:47:50.269374 kubelet[2397]: I0912 17:47:50.269326 2397 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 17:47:50.269813 kubelet[2397]: I0912 17:47:50.269632 2397 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 17:47:50.269813 kubelet[2397]: I0912 17:47:50.269651 2397 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 17:47:50.269992 kubelet[2397]: I0912 17:47:50.269950 2397 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 17:47:50.270990 kubelet[2397]: E0912 17:47:50.270970 2397 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 17:47:50.271064 kubelet[2397]: E0912 17:47:50.271015 2397 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 12 17:47:50.371521 kubelet[2397]: I0912 17:47:50.371400 2397 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 17:47:50.371857 kubelet[2397]: E0912 17:47:50.371805 2397 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost"
Sep 12 17:47:50.476116 kubelet[2397]: E0912 17:47:50.476051 2397 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="800ms"
Sep 12 17:47:50.574275 kubelet[2397]: I0912 17:47:50.574241 2397 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 17:47:50.574686 kubelet[2397]: E0912 17:47:50.574645 2397 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost"
Sep 12 17:47:50.605626 systemd[1]: Created slice kubepods-burstable-pod73cc5708c4731797ccc6b94c3b0e7ed3.slice - libcontainer container kubepods-burstable-pod73cc5708c4731797ccc6b94c3b0e7ed3.slice.
Sep 12 17:47:50.639228 kubelet[2397]: E0912 17:47:50.638930 2397 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 17:47:50.642447 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice.
Sep 12 17:47:50.644818 kubelet[2397]: E0912 17:47:50.644791 2397 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 17:47:50.647262 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice.
Sep 12 17:47:50.649559 kubelet[2397]: E0912 17:47:50.649519 2397 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:47:50.679984 kubelet[2397]: I0912 17:47:50.679920 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73cc5708c4731797ccc6b94c3b0e7ed3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"73cc5708c4731797ccc6b94c3b0e7ed3\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:47:50.679984 kubelet[2397]: I0912 17:47:50.679970 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73cc5708c4731797ccc6b94c3b0e7ed3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"73cc5708c4731797ccc6b94c3b0e7ed3\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:47:50.680146 kubelet[2397]: I0912 17:47:50.679993 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:47:50.680146 kubelet[2397]: I0912 17:47:50.680019 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:47:50.680146 kubelet[2397]: I0912 17:47:50.680042 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/73cc5708c4731797ccc6b94c3b0e7ed3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"73cc5708c4731797ccc6b94c3b0e7ed3\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:47:50.680146 kubelet[2397]: I0912 17:47:50.680059 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:47:50.680146 kubelet[2397]: I0912 17:47:50.680103 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:47:50.680259 kubelet[2397]: I0912 17:47:50.680130 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:47:50.680259 kubelet[2397]: I0912 17:47:50.680155 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:47:50.815420 kubelet[2397]: W0912 17:47:50.815298 2397 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Sep 12 17:47:50.815420 kubelet[2397]: E0912 17:47:50.815426 2397 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:47:50.940088 kubelet[2397]: E0912 17:47:50.939965 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:50.940897 containerd[1572]: time="2025-09-12T17:47:50.940856545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:73cc5708c4731797ccc6b94c3b0e7ed3,Namespace:kube-system,Attempt:0,}" Sep 12 17:47:50.946094 kubelet[2397]: E0912 17:47:50.946061 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:50.946547 containerd[1572]: time="2025-09-12T17:47:50.946504836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}" Sep 12 17:47:50.950791 kubelet[2397]: E0912 17:47:50.950747 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:50.951151 containerd[1572]: time="2025-09-12T17:47:50.951097566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}" Sep 12 
17:47:50.976679 kubelet[2397]: I0912 17:47:50.976624 2397 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:47:50.977210 kubelet[2397]: E0912 17:47:50.977162 2397 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Sep 12 17:47:51.075120 kubelet[2397]: W0912 17:47:51.075024 2397 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Sep 12 17:47:51.075120 kubelet[2397]: E0912 17:47:51.075109 2397 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:47:51.114326 kubelet[2397]: W0912 17:47:51.114239 2397 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Sep 12 17:47:51.114435 kubelet[2397]: E0912 17:47:51.114328 2397 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:47:51.183525 kubelet[2397]: W0912 17:47:51.183472 2397 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Sep 12 17:47:51.183525 kubelet[2397]: E0912 17:47:51.183523 2397 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:47:51.277062 kubelet[2397]: E0912 17:47:51.276926 2397 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="1.6s" Sep 12 17:47:51.459751 update_engine[1563]: I20250912 17:47:51.459663 1563 update_attempter.cc:509] Updating boot flags... Sep 12 17:47:51.779157 kubelet[2397]: I0912 17:47:51.779112 2397 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:47:51.779587 kubelet[2397]: E0912 17:47:51.779464 2397 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Sep 12 17:47:51.848754 containerd[1572]: time="2025-09-12T17:47:51.848697807Z" level=info msg="connecting to shim 93810c8c079ca5fe16fd203b15c79609ace378509fb1cb778bf3099e0421dbfc" address="unix:///run/containerd/s/ee68caef2e95dcc01e6e769350155555a5d57f39ada339fb29a81a03e17e564e" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:47:51.878006 kubelet[2397]: E0912 17:47:51.877881 2397 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.100:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.100:6443: connect: connection refused" 
event="&Event{ObjectMeta:{localhost.18649a2c5a7e7c54 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:47:49.868813396 +0000 UTC m=+0.443074903,LastTimestamp:2025-09-12 17:47:49.868813396 +0000 UTC m=+0.443074903,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 17:47:51.880529 systemd[1]: Started cri-containerd-93810c8c079ca5fe16fd203b15c79609ace378509fb1cb778bf3099e0421dbfc.scope - libcontainer container 93810c8c079ca5fe16fd203b15c79609ace378509fb1cb778bf3099e0421dbfc. Sep 12 17:47:51.913113 containerd[1572]: time="2025-09-12T17:47:51.913059617Z" level=info msg="connecting to shim 9fadb293780a8196f8cb2979cf82dc0d8f70e8df9d438406a59b168c0d9d073d" address="unix:///run/containerd/s/85ee93ac4699e5788cee1b18c31d88673ff40cd61ec9e61f98a36f8bc5807be0" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:47:51.922631 containerd[1572]: time="2025-09-12T17:47:51.922516048Z" level=info msg="connecting to shim f95e004afe9973c72939bfa226a2259b4d64027ae3c5d3d497933965a5e75b01" address="unix:///run/containerd/s/0d9548060fd37a021eacedc093e35fb182831f0025fa272dddb59f3cd1cb5686" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:47:51.926403 kubelet[2397]: E0912 17:47:51.923968 2397 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:47:51.950208 containerd[1572]: 
time="2025-09-12T17:47:51.950096305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:73cc5708c4731797ccc6b94c3b0e7ed3,Namespace:kube-system,Attempt:0,} returns sandbox id \"93810c8c079ca5fe16fd203b15c79609ace378509fb1cb778bf3099e0421dbfc\"" Sep 12 17:47:51.952145 kubelet[2397]: E0912 17:47:51.952095 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:51.955418 containerd[1572]: time="2025-09-12T17:47:51.955204464Z" level=info msg="CreateContainer within sandbox \"93810c8c079ca5fe16fd203b15c79609ace378509fb1cb778bf3099e0421dbfc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:47:51.966589 systemd[1]: Started cri-containerd-9fadb293780a8196f8cb2979cf82dc0d8f70e8df9d438406a59b168c0d9d073d.scope - libcontainer container 9fadb293780a8196f8cb2979cf82dc0d8f70e8df9d438406a59b168c0d9d073d. Sep 12 17:47:51.971061 systemd[1]: Started cri-containerd-f95e004afe9973c72939bfa226a2259b4d64027ae3c5d3d497933965a5e75b01.scope - libcontainer container f95e004afe9973c72939bfa226a2259b4d64027ae3c5d3d497933965a5e75b01. 
Sep 12 17:47:51.972192 containerd[1572]: time="2025-09-12T17:47:51.972153083Z" level=info msg="Container 4434c3cdb4d66b482738bfb2a391a8893b3afd73da3124eadad119eb09ea44d3: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:47:51.978400 containerd[1572]: time="2025-09-12T17:47:51.978340469Z" level=info msg="CreateContainer within sandbox \"93810c8c079ca5fe16fd203b15c79609ace378509fb1cb778bf3099e0421dbfc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4434c3cdb4d66b482738bfb2a391a8893b3afd73da3124eadad119eb09ea44d3\"" Sep 12 17:47:51.979008 containerd[1572]: time="2025-09-12T17:47:51.978989509Z" level=info msg="StartContainer for \"4434c3cdb4d66b482738bfb2a391a8893b3afd73da3124eadad119eb09ea44d3\"" Sep 12 17:47:51.980188 containerd[1572]: time="2025-09-12T17:47:51.980153254Z" level=info msg="connecting to shim 4434c3cdb4d66b482738bfb2a391a8893b3afd73da3124eadad119eb09ea44d3" address="unix:///run/containerd/s/ee68caef2e95dcc01e6e769350155555a5d57f39ada339fb29a81a03e17e564e" protocol=ttrpc version=3 Sep 12 17:47:52.008501 systemd[1]: Started cri-containerd-4434c3cdb4d66b482738bfb2a391a8893b3afd73da3124eadad119eb09ea44d3.scope - libcontainer container 4434c3cdb4d66b482738bfb2a391a8893b3afd73da3124eadad119eb09ea44d3. 
Sep 12 17:47:52.034520 containerd[1572]: time="2025-09-12T17:47:52.032672613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fadb293780a8196f8cb2979cf82dc0d8f70e8df9d438406a59b168c0d9d073d\"" Sep 12 17:47:52.034630 kubelet[2397]: E0912 17:47:52.033516 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:52.035374 containerd[1572]: time="2025-09-12T17:47:52.035299478Z" level=info msg="CreateContainer within sandbox \"9fadb293780a8196f8cb2979cf82dc0d8f70e8df9d438406a59b168c0d9d073d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:47:52.096175 containerd[1572]: time="2025-09-12T17:47:52.096112606Z" level=info msg="Container 1bc660316dbba7603530d6d6544ec7ca9315380a10a170285ddf938748fae351: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:47:52.105698 containerd[1572]: time="2025-09-12T17:47:52.105568665Z" level=info msg="CreateContainer within sandbox \"9fadb293780a8196f8cb2979cf82dc0d8f70e8df9d438406a59b168c0d9d073d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1bc660316dbba7603530d6d6544ec7ca9315380a10a170285ddf938748fae351\"" Sep 12 17:47:52.106078 containerd[1572]: time="2025-09-12T17:47:52.106060326Z" level=info msg="StartContainer for \"1bc660316dbba7603530d6d6544ec7ca9315380a10a170285ddf938748fae351\"" Sep 12 17:47:52.112757 containerd[1572]: time="2025-09-12T17:47:52.112721610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f95e004afe9973c72939bfa226a2259b4d64027ae3c5d3d497933965a5e75b01\"" Sep 12 17:47:52.113495 kubelet[2397]: E0912 17:47:52.113476 2397 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:52.115071 containerd[1572]: time="2025-09-12T17:47:52.115033138Z" level=info msg="CreateContainer within sandbox \"f95e004afe9973c72939bfa226a2259b4d64027ae3c5d3d497933965a5e75b01\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:47:52.115656 containerd[1572]: time="2025-09-12T17:47:52.115633665Z" level=info msg="connecting to shim 1bc660316dbba7603530d6d6544ec7ca9315380a10a170285ddf938748fae351" address="unix:///run/containerd/s/85ee93ac4699e5788cee1b18c31d88673ff40cd61ec9e61f98a36f8bc5807be0" protocol=ttrpc version=3 Sep 12 17:47:52.127081 containerd[1572]: time="2025-09-12T17:47:52.127045086Z" level=info msg="Container e568044a0c428321895a240652fc811f5c5e3796b65ea1d9d8b0384d06eeebe2: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:47:52.135580 containerd[1572]: time="2025-09-12T17:47:52.135540095Z" level=info msg="CreateContainer within sandbox \"f95e004afe9973c72939bfa226a2259b4d64027ae3c5d3d497933965a5e75b01\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e568044a0c428321895a240652fc811f5c5e3796b65ea1d9d8b0384d06eeebe2\"" Sep 12 17:47:52.135904 containerd[1572]: time="2025-09-12T17:47:52.135844391Z" level=info msg="StartContainer for \"e568044a0c428321895a240652fc811f5c5e3796b65ea1d9d8b0384d06eeebe2\"" Sep 12 17:47:52.139667 systemd[1]: Started cri-containerd-1bc660316dbba7603530d6d6544ec7ca9315380a10a170285ddf938748fae351.scope - libcontainer container 1bc660316dbba7603530d6d6544ec7ca9315380a10a170285ddf938748fae351. 
Sep 12 17:47:52.141148 containerd[1572]: time="2025-09-12T17:47:52.141092259Z" level=info msg="connecting to shim e568044a0c428321895a240652fc811f5c5e3796b65ea1d9d8b0384d06eeebe2" address="unix:///run/containerd/s/0d9548060fd37a021eacedc093e35fb182831f0025fa272dddb59f3cd1cb5686" protocol=ttrpc version=3 Sep 12 17:47:52.195137 containerd[1572]: time="2025-09-12T17:47:52.190436778Z" level=info msg="StartContainer for \"4434c3cdb4d66b482738bfb2a391a8893b3afd73da3124eadad119eb09ea44d3\" returns successfully" Sep 12 17:47:52.213580 systemd[1]: Started cri-containerd-e568044a0c428321895a240652fc811f5c5e3796b65ea1d9d8b0384d06eeebe2.scope - libcontainer container e568044a0c428321895a240652fc811f5c5e3796b65ea1d9d8b0384d06eeebe2. Sep 12 17:47:52.343332 containerd[1572]: time="2025-09-12T17:47:52.282721939Z" level=info msg="StartContainer for \"e568044a0c428321895a240652fc811f5c5e3796b65ea1d9d8b0384d06eeebe2\" returns successfully" Sep 12 17:47:52.363686 containerd[1572]: time="2025-09-12T17:47:52.363655492Z" level=info msg="StartContainer for \"1bc660316dbba7603530d6d6544ec7ca9315380a10a170285ddf938748fae351\" returns successfully" Sep 12 17:47:52.909407 kubelet[2397]: E0912 17:47:52.909275 2397 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:47:52.910722 kubelet[2397]: E0912 17:47:52.910660 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:52.911663 kubelet[2397]: E0912 17:47:52.911486 2397 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:47:52.911663 kubelet[2397]: E0912 17:47:52.911616 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:52.914324 kubelet[2397]: E0912 17:47:52.914306 2397 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:47:52.914694 kubelet[2397]: E0912 17:47:52.914631 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:53.381599 kubelet[2397]: I0912 17:47:53.381546 2397 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:47:53.882233 kubelet[2397]: E0912 17:47:53.881644 2397 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 17:47:53.916631 kubelet[2397]: E0912 17:47:53.916597 2397 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:47:53.917018 kubelet[2397]: E0912 17:47:53.916732 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:53.917276 kubelet[2397]: E0912 17:47:53.917250 2397 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:47:53.917404 kubelet[2397]: E0912 17:47:53.917367 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:53.917795 kubelet[2397]: E0912 17:47:53.917769 2397 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:47:53.917878 kubelet[2397]: 
E0912 17:47:53.917864 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:53.976519 kubelet[2397]: I0912 17:47:53.976486 2397 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 17:47:53.976519 kubelet[2397]: E0912 17:47:53.976520 2397 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 12 17:47:54.075188 kubelet[2397]: I0912 17:47:54.075089 2397 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:47:54.088947 kubelet[2397]: E0912 17:47:54.088878 2397 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 17:47:54.089425 kubelet[2397]: I0912 17:47:54.089200 2397 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:47:54.091601 kubelet[2397]: E0912 17:47:54.091580 2397 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:47:54.091690 kubelet[2397]: I0912 17:47:54.091677 2397 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:47:54.093402 kubelet[2397]: E0912 17:47:54.093345 2397 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 17:47:54.868456 kubelet[2397]: I0912 17:47:54.868377 2397 apiserver.go:52] "Watching apiserver" Sep 12 17:47:54.874487 
kubelet[2397]: I0912 17:47:54.874442 2397 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:47:54.917023 kubelet[2397]: I0912 17:47:54.916989 2397 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:47:54.917692 kubelet[2397]: I0912 17:47:54.917251 2397 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:47:54.924096 kubelet[2397]: E0912 17:47:54.924061 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:54.979072 kubelet[2397]: E0912 17:47:54.979021 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:55.858756 systemd[1]: Reload requested from client PID 2685 ('systemctl') (unit session-9.scope)... Sep 12 17:47:55.858771 systemd[1]: Reloading... Sep 12 17:47:55.919191 kubelet[2397]: E0912 17:47:55.919155 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:55.919612 kubelet[2397]: E0912 17:47:55.919269 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:55.947419 zram_generator::config[2730]: No configuration found. Sep 12 17:47:56.175344 systemd[1]: Reloading finished in 316 ms. Sep 12 17:47:56.208200 kubelet[2397]: I0912 17:47:56.208157 2397 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:47:56.208249 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 12 17:47:56.222608 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:47:56.222912 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:47:56.222965 systemd[1]: kubelet.service: Consumed 989ms CPU time, 132.2M memory peak. Sep 12 17:47:56.224845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:47:56.443155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:47:56.447730 (kubelet)[2773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:47:56.486241 kubelet[2773]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:47:56.486241 kubelet[2773]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:47:56.486241 kubelet[2773]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 17:47:56.486649 kubelet[2773]: I0912 17:47:56.486308 2773 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:47:56.492555 kubelet[2773]: I0912 17:47:56.492526 2773 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 17:47:56.492555 kubelet[2773]: I0912 17:47:56.492546 2773 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:47:56.492863 kubelet[2773]: I0912 17:47:56.492839 2773 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 17:47:56.493990 kubelet[2773]: I0912 17:47:56.493971 2773 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:47:56.496000 kubelet[2773]: I0912 17:47:56.495972 2773 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:47:56.501082 kubelet[2773]: I0912 17:47:56.501052 2773 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 17:47:56.505618 kubelet[2773]: I0912 17:47:56.505599 2773 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:47:56.505896 kubelet[2773]: I0912 17:47:56.505855 2773 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:47:56.506050 kubelet[2773]: I0912 17:47:56.505883 2773 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:47:56.506050 kubelet[2773]: I0912 17:47:56.506049 2773 topology_manager.go:138] "Creating topology manager with none policy" 
Sep 12 17:47:56.506157 kubelet[2773]: I0912 17:47:56.506059 2773 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 17:47:56.506157 kubelet[2773]: I0912 17:47:56.506106 2773 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:47:56.506259 kubelet[2773]: I0912 17:47:56.506241 2773 kubelet.go:446] "Attempting to sync node with API server" Sep 12 17:47:56.506358 kubelet[2773]: I0912 17:47:56.506270 2773 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:47:56.506358 kubelet[2773]: I0912 17:47:56.506295 2773 kubelet.go:352] "Adding apiserver pod source" Sep 12 17:47:56.506358 kubelet[2773]: I0912 17:47:56.506306 2773 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:47:56.509405 kubelet[2773]: I0912 17:47:56.507302 2773 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 17:47:56.509405 kubelet[2773]: I0912 17:47:56.507699 2773 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:47:56.509405 kubelet[2773]: I0912 17:47:56.508108 2773 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:47:56.509405 kubelet[2773]: I0912 17:47:56.508127 2773 server.go:1287] "Started kubelet" Sep 12 17:47:56.509405 kubelet[2773]: I0912 17:47:56.509144 2773 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:47:56.509758 kubelet[2773]: I0912 17:47:56.509719 2773 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:47:56.509831 kubelet[2773]: I0912 17:47:56.509796 2773 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:47:56.511589 kubelet[2773]: I0912 17:47:56.511563 2773 server.go:479] "Adding debug handlers to kubelet server" Sep 12 17:47:56.513863 kubelet[2773]: I0912 17:47:56.513833 2773 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:47:56.516979 kubelet[2773]: I0912 17:47:56.516775 2773 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:47:56.518866 kubelet[2773]: I0912 17:47:56.518832 2773 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:47:56.518970 kubelet[2773]: E0912 17:47:56.518944 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:47:56.520070 kubelet[2773]: I0912 17:47:56.519950 2773 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:47:56.520160 kubelet[2773]: I0912 17:47:56.520143 2773 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:47:56.520656 kubelet[2773]: I0912 17:47:56.520623 2773 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:47:56.520766 kubelet[2773]: I0912 17:47:56.520717 2773 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:47:56.524828 kubelet[2773]: I0912 17:47:56.524679 2773 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:47:56.528808 kubelet[2773]: E0912 17:47:56.528780 2773 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:47:56.532589 kubelet[2773]: I0912 17:47:56.532444 2773 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:47:56.533677 kubelet[2773]: I0912 17:47:56.533650 2773 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:47:56.533719 kubelet[2773]: I0912 17:47:56.533682 2773 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 17:47:56.533719 kubelet[2773]: I0912 17:47:56.533704 2773 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 17:47:56.533719 kubelet[2773]: I0912 17:47:56.533710 2773 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 17:47:56.533814 kubelet[2773]: E0912 17:47:56.533763 2773 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:47:56.557567 kubelet[2773]: I0912 17:47:56.557534 2773 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:47:56.557567 kubelet[2773]: I0912 17:47:56.557555 2773 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:47:56.557567 kubelet[2773]: I0912 17:47:56.557573 2773 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:47:56.557794 kubelet[2773]: I0912 17:47:56.557763 2773 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:47:56.557829 kubelet[2773]: I0912 17:47:56.557781 2773 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:47:56.557829 kubelet[2773]: I0912 17:47:56.557808 2773 policy_none.go:49] "None policy: Start" Sep 12 17:47:56.557829 kubelet[2773]: I0912 17:47:56.557819 2773 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:47:56.557885 kubelet[2773]: I0912 17:47:56.557832 2773 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:47:56.557950 kubelet[2773]: I0912 17:47:56.557933 2773 state_mem.go:75] "Updated machine memory state" Sep 12 17:47:56.561726 kubelet[2773]: I0912 17:47:56.561680 2773 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:47:56.561869 kubelet[2773]: I0912 
17:47:56.561852 2773 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:47:56.561895 kubelet[2773]: I0912 17:47:56.561866 2773 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:47:56.562113 kubelet[2773]: I0912 17:47:56.562043 2773 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:47:56.567282 kubelet[2773]: E0912 17:47:56.562917 2773 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:47:56.635313 kubelet[2773]: I0912 17:47:56.635273 2773 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:47:56.635313 kubelet[2773]: I0912 17:47:56.635342 2773 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:47:56.635612 kubelet[2773]: I0912 17:47:56.635414 2773 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:47:56.668293 kubelet[2773]: I0912 17:47:56.668259 2773 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:47:56.721569 kubelet[2773]: I0912 17:47:56.721473 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73cc5708c4731797ccc6b94c3b0e7ed3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"73cc5708c4731797ccc6b94c3b0e7ed3\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:47:56.721569 kubelet[2773]: I0912 17:47:56.721504 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 12 17:47:56.721569 kubelet[2773]: I0912 17:47:56.721522 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:47:56.721569 kubelet[2773]: I0912 17:47:56.721535 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:47:56.721569 kubelet[2773]: I0912 17:47:56.721549 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73cc5708c4731797ccc6b94c3b0e7ed3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"73cc5708c4731797ccc6b94c3b0e7ed3\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:47:56.721727 kubelet[2773]: I0912 17:47:56.721566 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73cc5708c4731797ccc6b94c3b0e7ed3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"73cc5708c4731797ccc6b94c3b0e7ed3\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:47:56.721727 kubelet[2773]: I0912 17:47:56.721584 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 12 17:47:56.721727 kubelet[2773]: I0912 17:47:56.721598 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:47:56.721727 kubelet[2773]: I0912 17:47:56.721612 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:47:57.072475 kubelet[2773]: E0912 17:47:57.072346 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:57.094679 kubelet[2773]: E0912 17:47:57.094620 2773 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 17:47:57.094878 kubelet[2773]: E0912 17:47:57.094856 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:57.130643 kubelet[2773]: E0912 17:47:57.130595 2773 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 17:47:57.130859 kubelet[2773]: E0912 17:47:57.130803 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Sep 12 17:47:57.571528 kubelet[2773]: I0912 17:47:57.507055 2773 apiserver.go:52] "Watching apiserver" Sep 12 17:47:57.571528 kubelet[2773]: I0912 17:47:57.520045 2773 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:47:57.571528 kubelet[2773]: I0912 17:47:57.548692 2773 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:47:57.571528 kubelet[2773]: E0912 17:47:57.548732 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:57.571528 kubelet[2773]: E0912 17:47:57.548764 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:57.610661 kubelet[2773]: I0912 17:47:57.610633 2773 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 12 17:47:57.610774 kubelet[2773]: I0912 17:47:57.610710 2773 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 17:47:58.427682 kubelet[2773]: E0912 17:47:58.427627 2773 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 17:47:58.428007 kubelet[2773]: E0912 17:47:58.427851 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:58.550625 kubelet[2773]: E0912 17:47:58.550592 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:58.550761 kubelet[2773]: E0912 17:47:58.550694 2773 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:58.628059 kubelet[2773]: E0912 17:47:58.628011 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:59.132246 kubelet[2773]: I0912 17:47:59.132136 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.132113561 podStartE2EDuration="5.132113561s" podCreationTimestamp="2025-09-12 17:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:47:58.428136477 +0000 UTC m=+1.976574210" watchObservedRunningTime="2025-09-12 17:47:59.132113561 +0000 UTC m=+2.680551294" Sep 12 17:47:59.529193 kubelet[2773]: I0912 17:47:59.528957 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.5289255539999997 podStartE2EDuration="3.528925554s" podCreationTimestamp="2025-09-12 17:47:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:47:59.528024024 +0000 UTC m=+3.076461737" watchObservedRunningTime="2025-09-12 17:47:59.528925554 +0000 UTC m=+3.077363277" Sep 12 17:47:59.529193 kubelet[2773]: I0912 17:47:59.529145 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.529131704 podStartE2EDuration="5.529131704s" podCreationTimestamp="2025-09-12 17:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:47:59.131979457 +0000 UTC m=+2.680417180" 
watchObservedRunningTime="2025-09-12 17:47:59.529131704 +0000 UTC m=+3.077569417" Sep 12 17:47:59.551914 kubelet[2773]: E0912 17:47:59.551872 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:48:00.281358 sudo[2809]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 17:48:00.281744 sudo[2809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 17:48:00.624153 sudo[2809]: pam_unix(sudo:session): session closed for user root Sep 12 17:48:02.247626 sudo[1804]: pam_unix(sudo:session): session closed for user root Sep 12 17:48:02.249464 sshd[1803]: Connection closed by 10.0.0.1 port 60114 Sep 12 17:48:02.250095 sshd-session[1797]: pam_unix(sshd:session): session closed for user core Sep 12 17:48:02.255341 systemd[1]: sshd@8-10.0.0.100:22-10.0.0.1:60114.service: Deactivated successfully. Sep 12 17:48:02.258350 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:48:02.258692 systemd[1]: session-9.scope: Consumed 6.185s CPU time, 261.2M memory peak. Sep 12 17:48:02.260179 systemd-logind[1557]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:48:02.261607 systemd-logind[1557]: Removed session 9. 
Sep 12 17:48:02.874787 kubelet[2773]: E0912 17:48:02.874739 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:48:03.557830 kubelet[2773]: E0912 17:48:03.557793 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:48:04.559050 kubelet[2773]: E0912 17:48:04.558977 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:48:04.785149 kubelet[2773]: I0912 17:48:04.785080 2773 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:48:04.785573 containerd[1572]: time="2025-09-12T17:48:04.785534266Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:48:04.786006 kubelet[2773]: I0912 17:48:04.785721 2773 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:48:04.963711 systemd[1]: Created slice kubepods-besteffort-podc361d279_0f5d_4a77_965b_3b2f10a8489e.slice - libcontainer container kubepods-besteffort-podc361d279_0f5d_4a77_965b_3b2f10a8489e.slice. 
Sep 12 17:48:04.972294 kubelet[2773]: I0912 17:48:04.971054 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c361d279-0f5d-4a77-965b-3b2f10a8489e-xtables-lock\") pod \"kube-proxy-2d74t\" (UID: \"c361d279-0f5d-4a77-965b-3b2f10a8489e\") " pod="kube-system/kube-proxy-2d74t" Sep 12 17:48:04.972587 kubelet[2773]: I0912 17:48:04.972572 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c361d279-0f5d-4a77-965b-3b2f10a8489e-lib-modules\") pod \"kube-proxy-2d74t\" (UID: \"c361d279-0f5d-4a77-965b-3b2f10a8489e\") " pod="kube-system/kube-proxy-2d74t" Sep 12 17:48:04.972739 kubelet[2773]: I0912 17:48:04.972684 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c361d279-0f5d-4a77-965b-3b2f10a8489e-kube-proxy\") pod \"kube-proxy-2d74t\" (UID: \"c361d279-0f5d-4a77-965b-3b2f10a8489e\") " pod="kube-system/kube-proxy-2d74t" Sep 12 17:48:04.973814 kubelet[2773]: I0912 17:48:04.972706 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k58c\" (UniqueName: \"kubernetes.io/projected/c361d279-0f5d-4a77-965b-3b2f10a8489e-kube-api-access-8k58c\") pod \"kube-proxy-2d74t\" (UID: \"c361d279-0f5d-4a77-965b-3b2f10a8489e\") " pod="kube-system/kube-proxy-2d74t" Sep 12 17:48:05.002122 systemd[1]: Created slice kubepods-burstable-pode5ff8fbb_8afd_4b9c_8110_71997e046f75.slice - libcontainer container kubepods-burstable-pode5ff8fbb_8afd_4b9c_8110_71997e046f75.slice. 
Sep 12 17:48:05.073463 kubelet[2773]: I0912 17:48:05.073410 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-hostproc\") pod \"cilium-d7zws\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " pod="kube-system/cilium-d7zws" Sep 12 17:48:05.073463 kubelet[2773]: I0912 17:48:05.073465 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-host-proc-sys-kernel\") pod \"cilium-d7zws\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " pod="kube-system/cilium-d7zws" Sep 12 17:48:05.073710 kubelet[2773]: I0912 17:48:05.073487 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-cilium-run\") pod \"cilium-d7zws\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " pod="kube-system/cilium-d7zws" Sep 12 17:48:05.073710 kubelet[2773]: I0912 17:48:05.073509 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e5ff8fbb-8afd-4b9c-8110-71997e046f75-clustermesh-secrets\") pod \"cilium-d7zws\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " pod="kube-system/cilium-d7zws" Sep 12 17:48:05.073710 kubelet[2773]: I0912 17:48:05.073530 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5ff8fbb-8afd-4b9c-8110-71997e046f75-cilium-config-path\") pod \"cilium-d7zws\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " pod="kube-system/cilium-d7zws" Sep 12 17:48:05.073710 kubelet[2773]: I0912 17:48:05.073557 2773 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-host-proc-sys-net\") pod \"cilium-d7zws\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " pod="kube-system/cilium-d7zws" Sep 12 17:48:05.073710 kubelet[2773]: I0912 17:48:05.073573 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdtdj\" (UniqueName: \"kubernetes.io/projected/e5ff8fbb-8afd-4b9c-8110-71997e046f75-kube-api-access-xdtdj\") pod \"cilium-d7zws\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " pod="kube-system/cilium-d7zws" Sep 12 17:48:05.073710 kubelet[2773]: I0912 17:48:05.073593 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-bpf-maps\") pod \"cilium-d7zws\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " pod="kube-system/cilium-d7zws" Sep 12 17:48:05.073867 kubelet[2773]: I0912 17:48:05.073675 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e5ff8fbb-8afd-4b9c-8110-71997e046f75-hubble-tls\") pod \"cilium-d7zws\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " pod="kube-system/cilium-d7zws" Sep 12 17:48:05.073867 kubelet[2773]: I0912 17:48:05.073747 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-etc-cni-netd\") pod \"cilium-d7zws\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " pod="kube-system/cilium-d7zws" Sep 12 17:48:05.073867 kubelet[2773]: I0912 17:48:05.073763 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-xtables-lock\") pod \"cilium-d7zws\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " pod="kube-system/cilium-d7zws" Sep 12 17:48:05.073867 kubelet[2773]: I0912 17:48:05.073788 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-cni-path\") pod \"cilium-d7zws\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " pod="kube-system/cilium-d7zws" Sep 12 17:48:05.073867 kubelet[2773]: I0912 17:48:05.073817 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-cilium-cgroup\") pod \"cilium-d7zws\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " pod="kube-system/cilium-d7zws" Sep 12 17:48:05.073867 kubelet[2773]: I0912 17:48:05.073832 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-lib-modules\") pod \"cilium-d7zws\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " pod="kube-system/cilium-d7zws" Sep 12 17:48:05.078628 kubelet[2773]: E0912 17:48:05.078560 2773 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 12 17:48:05.078628 kubelet[2773]: E0912 17:48:05.078605 2773 projected.go:194] Error preparing data for projected volume kube-api-access-8k58c for pod kube-system/kube-proxy-2d74t: configmap "kube-root-ca.crt" not found Sep 12 17:48:05.078740 kubelet[2773]: E0912 17:48:05.078700 2773 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c361d279-0f5d-4a77-965b-3b2f10a8489e-kube-api-access-8k58c podName:c361d279-0f5d-4a77-965b-3b2f10a8489e nodeName:}" failed. 
No retries permitted until 2025-09-12 17:48:05.578680294 +0000 UTC m=+9.127118087 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8k58c" (UniqueName: "kubernetes.io/projected/c361d279-0f5d-4a77-965b-3b2f10a8489e-kube-api-access-8k58c") pod "kube-proxy-2d74t" (UID: "c361d279-0f5d-4a77-965b-3b2f10a8489e") : configmap "kube-root-ca.crt" not found Sep 12 17:48:05.183038 kubelet[2773]: E0912 17:48:05.182153 2773 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 12 17:48:05.183038 kubelet[2773]: E0912 17:48:05.182188 2773 projected.go:194] Error preparing data for projected volume kube-api-access-xdtdj for pod kube-system/cilium-d7zws: configmap "kube-root-ca.crt" not found Sep 12 17:48:05.183038 kubelet[2773]: E0912 17:48:05.182248 2773 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e5ff8fbb-8afd-4b9c-8110-71997e046f75-kube-api-access-xdtdj podName:e5ff8fbb-8afd-4b9c-8110-71997e046f75 nodeName:}" failed. No retries permitted until 2025-09-12 17:48:05.682223168 +0000 UTC m=+9.230660891 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xdtdj" (UniqueName: "kubernetes.io/projected/e5ff8fbb-8afd-4b9c-8110-71997e046f75-kube-api-access-xdtdj") pod "cilium-d7zws" (UID: "e5ff8fbb-8afd-4b9c-8110-71997e046f75") : configmap "kube-root-ca.crt" not found Sep 12 17:48:05.670356 systemd[1]: Created slice kubepods-besteffort-podfe04a8aa_43fe_421f_a03f_d2949bfd6a01.slice - libcontainer container kubepods-besteffort-podfe04a8aa_43fe_421f_a03f_d2949bfd6a01.slice. 
Sep 12 17:48:05.678189 kubelet[2773]: I0912 17:48:05.678130 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4pmq\" (UniqueName: \"kubernetes.io/projected/fe04a8aa-43fe-421f-a03f-d2949bfd6a01-kube-api-access-l4pmq\") pod \"cilium-operator-6c4d7847fc-6b4sl\" (UID: \"fe04a8aa-43fe-421f-a03f-d2949bfd6a01\") " pod="kube-system/cilium-operator-6c4d7847fc-6b4sl" Sep 12 17:48:05.678189 kubelet[2773]: I0912 17:48:05.678184 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fe04a8aa-43fe-421f-a03f-d2949bfd6a01-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-6b4sl\" (UID: \"fe04a8aa-43fe-421f-a03f-d2949bfd6a01\") " pod="kube-system/cilium-operator-6c4d7847fc-6b4sl" Sep 12 17:48:05.877625 kubelet[2773]: E0912 17:48:05.877545 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:48:05.878450 containerd[1572]: time="2025-09-12T17:48:05.878316142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2d74t,Uid:c361d279-0f5d-4a77-965b-3b2f10a8489e,Namespace:kube-system,Attempt:0,}" Sep 12 17:48:05.904421 containerd[1572]: time="2025-09-12T17:48:05.904103722Z" level=info msg="connecting to shim 9d2a0b87360111507707ea0f495df959fa4f32ddd5889af1a922cca3e7c30ac8" address="unix:///run/containerd/s/84e9edefa903b481361b478b4a2b97e2a7d114a995b82f9d594990193f9b139f" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:48:05.907906 kubelet[2773]: E0912 17:48:05.907860 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:48:05.908747 containerd[1572]: time="2025-09-12T17:48:05.908697854Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-d7zws,Uid:e5ff8fbb-8afd-4b9c-8110-71997e046f75,Namespace:kube-system,Attempt:0,}" Sep 12 17:48:05.930528 containerd[1572]: time="2025-09-12T17:48:05.930373992Z" level=info msg="connecting to shim a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579" address="unix:///run/containerd/s/686d2b2400324214b02d0ae216aff73f73848ec7a9d0b99a0b32d8669dc7b015" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:48:05.932671 systemd[1]: Started cri-containerd-9d2a0b87360111507707ea0f495df959fa4f32ddd5889af1a922cca3e7c30ac8.scope - libcontainer container 9d2a0b87360111507707ea0f495df959fa4f32ddd5889af1a922cca3e7c30ac8. Sep 12 17:48:05.961852 containerd[1572]: time="2025-09-12T17:48:05.961807165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2d74t,Uid:c361d279-0f5d-4a77-965b-3b2f10a8489e,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d2a0b87360111507707ea0f495df959fa4f32ddd5889af1a922cca3e7c30ac8\"" Sep 12 17:48:05.963356 kubelet[2773]: E0912 17:48:05.963004 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:48:05.966411 containerd[1572]: time="2025-09-12T17:48:05.966354830Z" level=info msg="CreateContainer within sandbox \"9d2a0b87360111507707ea0f495df959fa4f32ddd5889af1a922cca3e7c30ac8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:48:05.977213 kubelet[2773]: E0912 17:48:05.977186 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:48:05.978065 containerd[1572]: time="2025-09-12T17:48:05.978028443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6b4sl,Uid:fe04a8aa-43fe-421f-a03f-d2949bfd6a01,Namespace:kube-system,Attempt:0,}" Sep 12 17:48:05.979541 systemd[1]: Started 
cri-containerd-a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579.scope - libcontainer container a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579. Sep 12 17:48:05.982441 containerd[1572]: time="2025-09-12T17:48:05.982418880Z" level=info msg="Container 30bc6a4f8ad383c9e4f38965e220b17ce61f462282b2b9fb9980af1913d56dd4: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:48:06.001151 containerd[1572]: time="2025-09-12T17:48:06.001017083Z" level=info msg="CreateContainer within sandbox \"9d2a0b87360111507707ea0f495df959fa4f32ddd5889af1a922cca3e7c30ac8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"30bc6a4f8ad383c9e4f38965e220b17ce61f462282b2b9fb9980af1913d56dd4\"" Sep 12 17:48:06.003576 containerd[1572]: time="2025-09-12T17:48:06.003546615Z" level=info msg="StartContainer for \"30bc6a4f8ad383c9e4f38965e220b17ce61f462282b2b9fb9980af1913d56dd4\"" Sep 12 17:48:06.006024 containerd[1572]: time="2025-09-12T17:48:06.005803764Z" level=info msg="connecting to shim 30bc6a4f8ad383c9e4f38965e220b17ce61f462282b2b9fb9980af1913d56dd4" address="unix:///run/containerd/s/84e9edefa903b481361b478b4a2b97e2a7d114a995b82f9d594990193f9b139f" protocol=ttrpc version=3 Sep 12 17:48:06.012957 containerd[1572]: time="2025-09-12T17:48:06.012847675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d7zws,Uid:e5ff8fbb-8afd-4b9c-8110-71997e046f75,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579\"" Sep 12 17:48:06.013877 kubelet[2773]: E0912 17:48:06.013846 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:48:06.016338 containerd[1572]: time="2025-09-12T17:48:06.016294264Z" level=info msg="connecting to shim fe6f8dc0d1b9aec7362a035847275f15000c13af17b07b21e08e50de1a25a0d4" 
address="unix:///run/containerd/s/cce6e16f3c9e928e845d4ebb40010df3fb0f8736f8722de67299c9f49b281536" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:48:06.016637 containerd[1572]: time="2025-09-12T17:48:06.016590863Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 12 17:48:06.075535 systemd[1]: Started cri-containerd-30bc6a4f8ad383c9e4f38965e220b17ce61f462282b2b9fb9980af1913d56dd4.scope - libcontainer container 30bc6a4f8ad383c9e4f38965e220b17ce61f462282b2b9fb9980af1913d56dd4.
Sep 12 17:48:06.077343 systemd[1]: Started cri-containerd-fe6f8dc0d1b9aec7362a035847275f15000c13af17b07b21e08e50de1a25a0d4.scope - libcontainer container fe6f8dc0d1b9aec7362a035847275f15000c13af17b07b21e08e50de1a25a0d4.
Sep 12 17:48:06.160131 containerd[1572]: time="2025-09-12T17:48:06.160077904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6b4sl,Uid:fe04a8aa-43fe-421f-a03f-d2949bfd6a01,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe6f8dc0d1b9aec7362a035847275f15000c13af17b07b21e08e50de1a25a0d4\""
Sep 12 17:48:06.161584 kubelet[2773]: E0912 17:48:06.161544 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:06.165978 containerd[1572]: time="2025-09-12T17:48:06.165935402Z" level=info msg="StartContainer for \"30bc6a4f8ad383c9e4f38965e220b17ce61f462282b2b9fb9980af1913d56dd4\" returns successfully"
Sep 12 17:48:06.566886 kubelet[2773]: E0912 17:48:06.566859 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:08.633691 kubelet[2773]: E0912 17:48:08.633657 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:08.643080 kubelet[2773]: I0912 17:48:08.643008 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2d74t" podStartSLOduration=4.642991562 podStartE2EDuration="4.642991562s" podCreationTimestamp="2025-09-12 17:48:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:48:06.767125145 +0000 UTC m=+10.315562868" watchObservedRunningTime="2025-09-12 17:48:08.642991562 +0000 UTC m=+12.191429285"
Sep 12 17:48:08.961054 kubelet[2773]: E0912 17:48:08.960758 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:09.573327 kubelet[2773]: E0912 17:48:09.573269 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:12.611038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3919939362.mount: Deactivated successfully.
Sep 12 17:48:18.315377 containerd[1572]: time="2025-09-12T17:48:18.315296761Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:48:18.326740 containerd[1572]: time="2025-09-12T17:48:18.326686857Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Sep 12 17:48:18.366176 containerd[1572]: time="2025-09-12T17:48:18.366129149Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:48:18.367872 containerd[1572]: time="2025-09-12T17:48:18.367814836Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.351141318s"
Sep 12 17:48:18.367872 containerd[1572]: time="2025-09-12T17:48:18.367868157Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 12 17:48:18.368896 containerd[1572]: time="2025-09-12T17:48:18.368859599Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 12 17:48:18.371073 containerd[1572]: time="2025-09-12T17:48:18.371035897Z" level=info msg="CreateContainer within sandbox \"a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 12 17:48:18.706891 containerd[1572]: time="2025-09-12T17:48:18.706824300Z" level=info msg="Container 9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:48:18.860576 containerd[1572]: time="2025-09-12T17:48:18.860528445Z" level=info msg="CreateContainer within sandbox \"a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246\""
Sep 12 17:48:18.861047 containerd[1572]: time="2025-09-12T17:48:18.861021151Z" level=info msg="StartContainer for \"9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246\""
Sep 12 17:48:18.861939 containerd[1572]: time="2025-09-12T17:48:18.861914499Z" level=info msg="connecting to shim 9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246" address="unix:///run/containerd/s/686d2b2400324214b02d0ae216aff73f73848ec7a9d0b99a0b32d8669dc7b015" protocol=ttrpc version=3
Sep 12 17:48:18.883511 systemd[1]: Started cri-containerd-9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246.scope - libcontainer container 9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246.
Sep 12 17:48:18.937935 systemd[1]: cri-containerd-9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246.scope: Deactivated successfully.
Sep 12 17:48:18.938317 systemd[1]: cri-containerd-9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246.scope: Consumed 26ms CPU time, 6.6M memory peak, 4K read from disk, 2.1M written to disk.
Sep 12 17:48:18.939609 containerd[1572]: time="2025-09-12T17:48:18.939563124Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246\" id:\"9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246\" pid:3192 exited_at:{seconds:1757699298 nanos:939009344}"
Sep 12 17:48:19.023839 containerd[1572]: time="2025-09-12T17:48:19.023702405Z" level=info msg="received exit event container_id:\"9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246\" id:\"9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246\" pid:3192 exited_at:{seconds:1757699298 nanos:939009344}"
Sep 12 17:48:19.024814 containerd[1572]: time="2025-09-12T17:48:19.024780129Z" level=info msg="StartContainer for \"9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246\" returns successfully"
Sep 12 17:48:19.045964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246-rootfs.mount: Deactivated successfully.
Sep 12 17:48:19.589887 kubelet[2773]: E0912 17:48:19.589849 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:20.593339 kubelet[2773]: E0912 17:48:20.593292 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:20.595151 containerd[1572]: time="2025-09-12T17:48:20.595101010Z" level=info msg="CreateContainer within sandbox \"a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 12 17:48:20.737732 containerd[1572]: time="2025-09-12T17:48:20.737680998Z" level=info msg="Container 1ae88b6a99835a944226e1c302498d2173ed5754757527ff41a8ed6d57ffdd43: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:48:20.953480 containerd[1572]: time="2025-09-12T17:48:20.953327841Z" level=info msg="CreateContainer within sandbox \"a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1ae88b6a99835a944226e1c302498d2173ed5754757527ff41a8ed6d57ffdd43\""
Sep 12 17:48:20.953764 containerd[1572]: time="2025-09-12T17:48:20.953739113Z" level=info msg="StartContainer for \"1ae88b6a99835a944226e1c302498d2173ed5754757527ff41a8ed6d57ffdd43\""
Sep 12 17:48:20.954568 containerd[1572]: time="2025-09-12T17:48:20.954515051Z" level=info msg="connecting to shim 1ae88b6a99835a944226e1c302498d2173ed5754757527ff41a8ed6d57ffdd43" address="unix:///run/containerd/s/686d2b2400324214b02d0ae216aff73f73848ec7a9d0b99a0b32d8669dc7b015" protocol=ttrpc version=3
Sep 12 17:48:20.981526 systemd[1]: Started cri-containerd-1ae88b6a99835a944226e1c302498d2173ed5754757527ff41a8ed6d57ffdd43.scope - libcontainer container 1ae88b6a99835a944226e1c302498d2173ed5754757527ff41a8ed6d57ffdd43.
Sep 12 17:48:21.136022 containerd[1572]: time="2025-09-12T17:48:21.135978890Z" level=info msg="StartContainer for \"1ae88b6a99835a944226e1c302498d2173ed5754757527ff41a8ed6d57ffdd43\" returns successfully"
Sep 12 17:48:21.288864 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:48:21.289098 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:48:21.289687 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:48:21.291227 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:48:21.292623 containerd[1572]: time="2025-09-12T17:48:21.292583867Z" level=info msg="received exit event container_id:\"1ae88b6a99835a944226e1c302498d2173ed5754757527ff41a8ed6d57ffdd43\" id:\"1ae88b6a99835a944226e1c302498d2173ed5754757527ff41a8ed6d57ffdd43\" pid:3236 exited_at:{seconds:1757699301 nanos:292405954}"
Sep 12 17:48:21.292755 containerd[1572]: time="2025-09-12T17:48:21.292642728Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1ae88b6a99835a944226e1c302498d2173ed5754757527ff41a8ed6d57ffdd43\" id:\"1ae88b6a99835a944226e1c302498d2173ed5754757527ff41a8ed6d57ffdd43\" pid:3236 exited_at:{seconds:1757699301 nanos:292405954}"
Sep 12 17:48:21.293289 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 17:48:21.293772 systemd[1]: cri-containerd-1ae88b6a99835a944226e1c302498d2173ed5754757527ff41a8ed6d57ffdd43.scope: Deactivated successfully.
Sep 12 17:48:21.311628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ae88b6a99835a944226e1c302498d2173ed5754757527ff41a8ed6d57ffdd43-rootfs.mount: Deactivated successfully.
Sep 12 17:48:21.364232 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:48:21.596785 kubelet[2773]: E0912 17:48:21.596757 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:22.600598 kubelet[2773]: E0912 17:48:22.600566 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:22.602825 containerd[1572]: time="2025-09-12T17:48:22.602778665Z" level=info msg="CreateContainer within sandbox \"a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 12 17:48:23.045469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1094163291.mount: Deactivated successfully.
Sep 12 17:48:23.438964 containerd[1572]: time="2025-09-12T17:48:23.438889929Z" level=info msg="Container 1ed7e2d524072876615e47b9866a786b74201d19eee4113e36ddf0af6de59fc1: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:48:23.443726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount574387342.mount: Deactivated successfully.
Sep 12 17:48:24.153584 containerd[1572]: time="2025-09-12T17:48:24.153524623Z" level=info msg="CreateContainer within sandbox \"a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1ed7e2d524072876615e47b9866a786b74201d19eee4113e36ddf0af6de59fc1\""
Sep 12 17:48:24.154166 containerd[1572]: time="2025-09-12T17:48:24.154111084Z" level=info msg="StartContainer for \"1ed7e2d524072876615e47b9866a786b74201d19eee4113e36ddf0af6de59fc1\""
Sep 12 17:48:24.156171 containerd[1572]: time="2025-09-12T17:48:24.156069461Z" level=info msg="connecting to shim 1ed7e2d524072876615e47b9866a786b74201d19eee4113e36ddf0af6de59fc1" address="unix:///run/containerd/s/686d2b2400324214b02d0ae216aff73f73848ec7a9d0b99a0b32d8669dc7b015" protocol=ttrpc version=3
Sep 12 17:48:24.178592 systemd[1]: Started cri-containerd-1ed7e2d524072876615e47b9866a786b74201d19eee4113e36ddf0af6de59fc1.scope - libcontainer container 1ed7e2d524072876615e47b9866a786b74201d19eee4113e36ddf0af6de59fc1.
Sep 12 17:48:24.221147 systemd[1]: cri-containerd-1ed7e2d524072876615e47b9866a786b74201d19eee4113e36ddf0af6de59fc1.scope: Deactivated successfully.
Sep 12 17:48:24.223128 containerd[1572]: time="2025-09-12T17:48:24.223077290Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1ed7e2d524072876615e47b9866a786b74201d19eee4113e36ddf0af6de59fc1\" id:\"1ed7e2d524072876615e47b9866a786b74201d19eee4113e36ddf0af6de59fc1\" pid:3289 exited_at:{seconds:1757699304 nanos:221729560}"
Sep 12 17:48:24.649560 containerd[1572]: time="2025-09-12T17:48:24.649369595Z" level=info msg="received exit event container_id:\"1ed7e2d524072876615e47b9866a786b74201d19eee4113e36ddf0af6de59fc1\" id:\"1ed7e2d524072876615e47b9866a786b74201d19eee4113e36ddf0af6de59fc1\" pid:3289 exited_at:{seconds:1757699304 nanos:221729560}"
Sep 12 17:48:24.660046 containerd[1572]: time="2025-09-12T17:48:24.659963503Z" level=info msg="StartContainer for \"1ed7e2d524072876615e47b9866a786b74201d19eee4113e36ddf0af6de59fc1\" returns successfully"
Sep 12 17:48:24.673160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ed7e2d524072876615e47b9866a786b74201d19eee4113e36ddf0af6de59fc1-rootfs.mount: Deactivated successfully.
Sep 12 17:48:25.658413 kubelet[2773]: E0912 17:48:25.658363 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:25.660349 containerd[1572]: time="2025-09-12T17:48:25.660311443Z" level=info msg="CreateContainer within sandbox \"a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 12 17:48:25.939568 containerd[1572]: time="2025-09-12T17:48:25.939436356Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:48:25.979976 containerd[1572]: time="2025-09-12T17:48:25.979930604Z" level=info msg="Container 59a92b718f34f6bc7b89db8b3bbf113c508874b79fb7aebdd00662c4be3ecac8: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:48:26.028507 containerd[1572]: time="2025-09-12T17:48:26.028462405Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Sep 12 17:48:26.254098 containerd[1572]: time="2025-09-12T17:48:26.253936953Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:48:26.255733 containerd[1572]: time="2025-09-12T17:48:26.255667501Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 7.886669311s"
Sep 12 17:48:26.255733 containerd[1572]: time="2025-09-12T17:48:26.255723096Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 12 17:48:26.256127 containerd[1572]: time="2025-09-12T17:48:26.256083551Z" level=info msg="CreateContainer within sandbox \"a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"59a92b718f34f6bc7b89db8b3bbf113c508874b79fb7aebdd00662c4be3ecac8\""
Sep 12 17:48:26.256943 containerd[1572]: time="2025-09-12T17:48:26.256899674Z" level=info msg="StartContainer for \"59a92b718f34f6bc7b89db8b3bbf113c508874b79fb7aebdd00662c4be3ecac8\""
Sep 12 17:48:26.257912 containerd[1572]: time="2025-09-12T17:48:26.257882328Z" level=info msg="connecting to shim 59a92b718f34f6bc7b89db8b3bbf113c508874b79fb7aebdd00662c4be3ecac8" address="unix:///run/containerd/s/686d2b2400324214b02d0ae216aff73f73848ec7a9d0b99a0b32d8669dc7b015" protocol=ttrpc version=3
Sep 12 17:48:26.259486 containerd[1572]: time="2025-09-12T17:48:26.259438861Z" level=info msg="CreateContainer within sandbox \"fe6f8dc0d1b9aec7362a035847275f15000c13af17b07b21e08e50de1a25a0d4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 12 17:48:26.292643 systemd[1]: Started cri-containerd-59a92b718f34f6bc7b89db8b3bbf113c508874b79fb7aebdd00662c4be3ecac8.scope - libcontainer container 59a92b718f34f6bc7b89db8b3bbf113c508874b79fb7aebdd00662c4be3ecac8.
Sep 12 17:48:26.323289 systemd[1]: cri-containerd-59a92b718f34f6bc7b89db8b3bbf113c508874b79fb7aebdd00662c4be3ecac8.scope: Deactivated successfully.
Sep 12 17:48:26.324200 containerd[1572]: time="2025-09-12T17:48:26.324149088Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59a92b718f34f6bc7b89db8b3bbf113c508874b79fb7aebdd00662c4be3ecac8\" id:\"59a92b718f34f6bc7b89db8b3bbf113c508874b79fb7aebdd00662c4be3ecac8\" pid:3339 exited_at:{seconds:1757699306 nanos:323725883}"
Sep 12 17:48:26.487713 containerd[1572]: time="2025-09-12T17:48:26.487639034Z" level=info msg="received exit event container_id:\"59a92b718f34f6bc7b89db8b3bbf113c508874b79fb7aebdd00662c4be3ecac8\" id:\"59a92b718f34f6bc7b89db8b3bbf113c508874b79fb7aebdd00662c4be3ecac8\" pid:3339 exited_at:{seconds:1757699306 nanos:323725883}"
Sep 12 17:48:26.495406 containerd[1572]: time="2025-09-12T17:48:26.495358875Z" level=info msg="StartContainer for \"59a92b718f34f6bc7b89db8b3bbf113c508874b79fb7aebdd00662c4be3ecac8\" returns successfully"
Sep 12 17:48:26.508603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59a92b718f34f6bc7b89db8b3bbf113c508874b79fb7aebdd00662c4be3ecac8-rootfs.mount: Deactivated successfully.
Sep 12 17:48:26.740056 kubelet[2773]: E0912 17:48:26.739588 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:27.140611 containerd[1572]: time="2025-09-12T17:48:27.140540975Z" level=info msg="Container 27572e5a457050e67ceec90ba9c704a430ca196c3449edfd62832295b87e785b: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:48:27.412958 containerd[1572]: time="2025-09-12T17:48:27.412839260Z" level=info msg="CreateContainer within sandbox \"fe6f8dc0d1b9aec7362a035847275f15000c13af17b07b21e08e50de1a25a0d4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"27572e5a457050e67ceec90ba9c704a430ca196c3449edfd62832295b87e785b\""
Sep 12 17:48:27.413501 containerd[1572]: time="2025-09-12T17:48:27.413463080Z" level=info msg="StartContainer for \"27572e5a457050e67ceec90ba9c704a430ca196c3449edfd62832295b87e785b\""
Sep 12 17:48:27.414508 containerd[1572]: time="2025-09-12T17:48:27.414484447Z" level=info msg="connecting to shim 27572e5a457050e67ceec90ba9c704a430ca196c3449edfd62832295b87e785b" address="unix:///run/containerd/s/cce6e16f3c9e928e845d4ebb40010df3fb0f8736f8722de67299c9f49b281536" protocol=ttrpc version=3
Sep 12 17:48:27.442664 systemd[1]: Started cri-containerd-27572e5a457050e67ceec90ba9c704a430ca196c3449edfd62832295b87e785b.scope - libcontainer container 27572e5a457050e67ceec90ba9c704a430ca196c3449edfd62832295b87e785b.
Sep 12 17:48:27.532401 containerd[1572]: time="2025-09-12T17:48:27.532336041Z" level=info msg="StartContainer for \"27572e5a457050e67ceec90ba9c704a430ca196c3449edfd62832295b87e785b\" returns successfully"
Sep 12 17:48:27.741746 kubelet[2773]: E0912 17:48:27.741627 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:27.745689 kubelet[2773]: E0912 17:48:27.745650 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:27.747420 containerd[1572]: time="2025-09-12T17:48:27.747369972Z" level=info msg="CreateContainer within sandbox \"a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 12 17:48:27.849110 kubelet[2773]: I0912 17:48:27.849051 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-6b4sl" podStartSLOduration=2.757414765 podStartE2EDuration="22.849029884s" podCreationTimestamp="2025-09-12 17:48:05 +0000 UTC" firstStartedPulling="2025-09-12 17:48:06.164862372 +0000 UTC m=+9.713300095" lastFinishedPulling="2025-09-12 17:48:26.256477491 +0000 UTC m=+29.804915214" observedRunningTime="2025-09-12 17:48:27.848698742 +0000 UTC m=+31.397136465" watchObservedRunningTime="2025-09-12 17:48:27.849029884 +0000 UTC m=+31.397467607"
Sep 12 17:48:28.040505 containerd[1572]: time="2025-09-12T17:48:28.040363035Z" level=info msg="Container b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:48:28.240189 containerd[1572]: time="2025-09-12T17:48:28.240046089Z" level=info msg="CreateContainer within sandbox \"a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017\""
Sep 12 17:48:28.240621 containerd[1572]: time="2025-09-12T17:48:28.240582346Z" level=info msg="StartContainer for \"b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017\""
Sep 12 17:48:28.241609 containerd[1572]: time="2025-09-12T17:48:28.241575901Z" level=info msg="connecting to shim b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017" address="unix:///run/containerd/s/686d2b2400324214b02d0ae216aff73f73848ec7a9d0b99a0b32d8669dc7b015" protocol=ttrpc version=3
Sep 12 17:48:28.267599 systemd[1]: Started cri-containerd-b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017.scope - libcontainer container b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017.
Sep 12 17:48:28.396469 containerd[1572]: time="2025-09-12T17:48:28.396424867Z" level=info msg="StartContainer for \"b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017\" returns successfully"
Sep 12 17:48:28.501652 containerd[1572]: time="2025-09-12T17:48:28.501597741Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017\" id:\"f8276f141bdd7dcebc7d0eedcfa8ea33c38604f428278ccc43c37b54bd33b553\" pid:3450 exited_at:{seconds:1757699308 nanos:501147946}"
Sep 12 17:48:28.584664 kubelet[2773]: I0912 17:48:28.584628 2773 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 12 17:48:28.755021 kubelet[2773]: E0912 17:48:28.754900 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:28.756834 kubelet[2773]: E0912 17:48:28.756795 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:28.941769 systemd[1]: Created slice kubepods-burstable-podc97f611a_323d_49ed_a2d2_6965cf06eeb9.slice - libcontainer container kubepods-burstable-podc97f611a_323d_49ed_a2d2_6965cf06eeb9.slice.
Sep 12 17:48:28.960542 systemd[1]: Created slice kubepods-burstable-pod69995e6d_5308_42a1_b40a_99caa93aa87e.slice - libcontainer container kubepods-burstable-pod69995e6d_5308_42a1_b40a_99caa93aa87e.slice.
Sep 12 17:48:29.046176 kubelet[2773]: I0912 17:48:29.044047 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9p8r\" (UniqueName: \"kubernetes.io/projected/69995e6d-5308-42a1-b40a-99caa93aa87e-kube-api-access-c9p8r\") pod \"coredns-668d6bf9bc-xpdrs\" (UID: \"69995e6d-5308-42a1-b40a-99caa93aa87e\") " pod="kube-system/coredns-668d6bf9bc-xpdrs"
Sep 12 17:48:29.046176 kubelet[2773]: I0912 17:48:29.044118 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsswx\" (UniqueName: \"kubernetes.io/projected/c97f611a-323d-49ed-a2d2-6965cf06eeb9-kube-api-access-xsswx\") pod \"coredns-668d6bf9bc-f46xf\" (UID: \"c97f611a-323d-49ed-a2d2-6965cf06eeb9\") " pod="kube-system/coredns-668d6bf9bc-f46xf"
Sep 12 17:48:29.046176 kubelet[2773]: I0912 17:48:29.044162 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69995e6d-5308-42a1-b40a-99caa93aa87e-config-volume\") pod \"coredns-668d6bf9bc-xpdrs\" (UID: \"69995e6d-5308-42a1-b40a-99caa93aa87e\") " pod="kube-system/coredns-668d6bf9bc-xpdrs"
Sep 12 17:48:29.046176 kubelet[2773]: I0912 17:48:29.044189 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c97f611a-323d-49ed-a2d2-6965cf06eeb9-config-volume\") pod \"coredns-668d6bf9bc-f46xf\" (UID: \"c97f611a-323d-49ed-a2d2-6965cf06eeb9\") " pod="kube-system/coredns-668d6bf9bc-f46xf"
Sep 12 17:48:29.259854 kubelet[2773]: E0912 17:48:29.259791 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:29.262754 containerd[1572]: time="2025-09-12T17:48:29.262711851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f46xf,Uid:c97f611a-323d-49ed-a2d2-6965cf06eeb9,Namespace:kube-system,Attempt:0,}"
Sep 12 17:48:29.265039 kubelet[2773]: E0912 17:48:29.265003 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:29.265539 containerd[1572]: time="2025-09-12T17:48:29.265450130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xpdrs,Uid:69995e6d-5308-42a1-b40a-99caa93aa87e,Namespace:kube-system,Attempt:0,}"
Sep 12 17:48:29.283841 kubelet[2773]: I0912 17:48:29.283748 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-d7zws" podStartSLOduration=12.930047197 podStartE2EDuration="25.283724687s" podCreationTimestamp="2025-09-12 17:48:04 +0000 UTC" firstStartedPulling="2025-09-12 17:48:06.014995328 +0000 UTC m=+9.563433051" lastFinishedPulling="2025-09-12 17:48:18.368672817 +0000 UTC m=+21.917110541" observedRunningTime="2025-09-12 17:48:29.215424787 +0000 UTC m=+32.763862530" watchObservedRunningTime="2025-09-12 17:48:29.283724687 +0000 UTC m=+32.832162430"
Sep 12 17:48:29.757231 kubelet[2773]: E0912 17:48:29.757187 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:30.759612 kubelet[2773]: E0912 17:48:30.759564 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:31.525008 systemd-networkd[1489]: cilium_host: Link UP
Sep 12 17:48:31.525203 systemd-networkd[1489]: cilium_net: Link UP
Sep 12 17:48:31.525442 systemd-networkd[1489]: cilium_net: Gained carrier
Sep 12 17:48:31.525657 systemd-networkd[1489]: cilium_host: Gained carrier
Sep 12 17:48:31.631881 systemd-networkd[1489]: cilium_vxlan: Link UP
Sep 12 17:48:31.631891 systemd-networkd[1489]: cilium_vxlan: Gained carrier
Sep 12 17:48:31.857439 kernel: NET: Registered PF_ALG protocol family
Sep 12 17:48:31.872525 systemd-networkd[1489]: cilium_host: Gained IPv6LL
Sep 12 17:48:32.064579 systemd-networkd[1489]: cilium_net: Gained IPv6LL
Sep 12 17:48:32.554109 systemd-networkd[1489]: lxc_health: Link UP
Sep 12 17:48:32.554514 systemd-networkd[1489]: lxc_health: Gained carrier
Sep 12 17:48:32.879721 systemd-networkd[1489]: lxc17a81b9e480c: Link UP
Sep 12 17:48:32.888426 kernel: eth0: renamed from tmp47e30
Sep 12 17:48:32.888898 systemd-networkd[1489]: lxc17a81b9e480c: Gained carrier
Sep 12 17:48:32.897659 systemd-networkd[1489]: cilium_vxlan: Gained IPv6LL
Sep 12 17:48:33.018028 kernel: eth0: renamed from tmp525fc
Sep 12 17:48:33.018697 systemd-networkd[1489]: lxca6f4e7d43fd8: Link UP
Sep 12 17:48:33.020912 systemd-networkd[1489]: lxca6f4e7d43fd8: Gained carrier
Sep 12 17:48:33.910207 kubelet[2773]: E0912 17:48:33.910169 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:34.368677 systemd-networkd[1489]: lxc_health: Gained IPv6LL
Sep 12 17:48:34.766122 kubelet[2773]: E0912 17:48:34.765969 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:34.881119 systemd-networkd[1489]: lxc17a81b9e480c: Gained IPv6LL
Sep 12 17:48:34.944625 systemd-networkd[1489]: lxca6f4e7d43fd8: Gained IPv6LL
Sep 12 17:48:35.767706 kubelet[2773]: E0912 17:48:35.767668 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:36.646490 containerd[1572]: time="2025-09-12T17:48:36.646364922Z" level=info msg="connecting to shim 525fcf7e14d4166598b5ba6c12fc644a64d4de3b74af54aa777b58ec0f34a692" address="unix:///run/containerd/s/eeb58f0b242fe6c2df7bc3178a14fa93650dd2d7fa7724016ee5ce9fb7d4960a" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:48:36.647186 containerd[1572]: time="2025-09-12T17:48:36.646443249Z" level=info msg="connecting to shim 47e3080b97997e951454e378d6f2cb11ab1704168656f07249ff701235f3bda1" address="unix:///run/containerd/s/bd24b312be4cbfd90cb397df3cb46ee183fd87469358e2922861977b1be66dce" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:48:36.674687 systemd[1]: Started cri-containerd-525fcf7e14d4166598b5ba6c12fc644a64d4de3b74af54aa777b58ec0f34a692.scope - libcontainer container 525fcf7e14d4166598b5ba6c12fc644a64d4de3b74af54aa777b58ec0f34a692.
Sep 12 17:48:36.682190 systemd[1]: Started cri-containerd-47e3080b97997e951454e378d6f2cb11ab1704168656f07249ff701235f3bda1.scope - libcontainer container 47e3080b97997e951454e378d6f2cb11ab1704168656f07249ff701235f3bda1.
Sep 12 17:48:36.691585 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 17:48:36.702437 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 17:48:36.732201 containerd[1572]: time="2025-09-12T17:48:36.732133082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xpdrs,Uid:69995e6d-5308-42a1-b40a-99caa93aa87e,Namespace:kube-system,Attempt:0,} returns sandbox id \"525fcf7e14d4166598b5ba6c12fc644a64d4de3b74af54aa777b58ec0f34a692\""
Sep 12 17:48:36.733961 kubelet[2773]: E0912 17:48:36.733915 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:36.736328 containerd[1572]: time="2025-09-12T17:48:36.736267278Z" level=info msg="CreateContainer within sandbox \"525fcf7e14d4166598b5ba6c12fc644a64d4de3b74af54aa777b58ec0f34a692\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 12 17:48:36.780103 containerd[1572]: time="2025-09-12T17:48:36.779031397Z" level=info msg="Container 26503693308933cba171786d28f6fc914ca14ddf6363ab919c7e91dd70047a3a: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:48:36.782327 containerd[1572]: time="2025-09-12T17:48:36.782263361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f46xf,Uid:c97f611a-323d-49ed-a2d2-6965cf06eeb9,Namespace:kube-system,Attempt:0,} returns sandbox id \"47e3080b97997e951454e378d6f2cb11ab1704168656f07249ff701235f3bda1\""
Sep 12 17:48:36.783293 kubelet[2773]: E0912 17:48:36.783247 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:36.786428 containerd[1572]: time="2025-09-12T17:48:36.786375455Z" level=info msg="CreateContainer within sandbox \"47e3080b97997e951454e378d6f2cb11ab1704168656f07249ff701235f3bda1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 12 17:48:36.799664 containerd[1572]: time="2025-09-12T17:48:36.799597842Z" level=info msg="CreateContainer within sandbox \"525fcf7e14d4166598b5ba6c12fc644a64d4de3b74af54aa777b58ec0f34a692\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"26503693308933cba171786d28f6fc914ca14ddf6363ab919c7e91dd70047a3a\""
Sep 12 17:48:36.804685 containerd[1572]: time="2025-09-12T17:48:36.804629563Z" level=info msg="StartContainer for \"26503693308933cba171786d28f6fc914ca14ddf6363ab919c7e91dd70047a3a\""
Sep 12 17:48:36.805633 containerd[1572]: time="2025-09-12T17:48:36.805604602Z" level=info msg="connecting to shim 26503693308933cba171786d28f6fc914ca14ddf6363ab919c7e91dd70047a3a" address="unix:///run/containerd/s/eeb58f0b242fe6c2df7bc3178a14fa93650dd2d7fa7724016ee5ce9fb7d4960a" protocol=ttrpc version=3
Sep 12 17:48:36.807061 containerd[1572]: time="2025-09-12T17:48:36.807022361Z" level=info msg="Container ab506e6636e286d6f37d3f2d2a03d936e2b72ad847fed1f514d98842e9f505de: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:48:36.817264 containerd[1572]: time="2025-09-12T17:48:36.817215245Z" level=info msg="CreateContainer within sandbox \"47e3080b97997e951454e378d6f2cb11ab1704168656f07249ff701235f3bda1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ab506e6636e286d6f37d3f2d2a03d936e2b72ad847fed1f514d98842e9f505de\""
Sep 12 17:48:36.818057 containerd[1572]: time="2025-09-12T17:48:36.818009714Z" level=info msg="StartContainer for \"ab506e6636e286d6f37d3f2d2a03d936e2b72ad847fed1f514d98842e9f505de\""
Sep 12 17:48:36.818854 containerd[1572]: time="2025-09-12T17:48:36.818828441Z" level=info msg="connecting to shim ab506e6636e286d6f37d3f2d2a03d936e2b72ad847fed1f514d98842e9f505de" address="unix:///run/containerd/s/bd24b312be4cbfd90cb397df3cb46ee183fd87469358e2922861977b1be66dce" protocol=ttrpc version=3
Sep 12 17:48:36.837657 systemd[1]: Started cri-containerd-26503693308933cba171786d28f6fc914ca14ddf6363ab919c7e91dd70047a3a.scope - libcontainer container 26503693308933cba171786d28f6fc914ca14ddf6363ab919c7e91dd70047a3a.
Sep 12 17:48:36.843332 systemd[1]: Started cri-containerd-ab506e6636e286d6f37d3f2d2a03d936e2b72ad847fed1f514d98842e9f505de.scope - libcontainer container ab506e6636e286d6f37d3f2d2a03d936e2b72ad847fed1f514d98842e9f505de.
Sep 12 17:48:36.890928 containerd[1572]: time="2025-09-12T17:48:36.890888633Z" level=info msg="StartContainer for \"26503693308933cba171786d28f6fc914ca14ddf6363ab919c7e91dd70047a3a\" returns successfully"
Sep 12 17:48:36.891904 containerd[1572]: time="2025-09-12T17:48:36.891805273Z" level=info msg="StartContainer for \"ab506e6636e286d6f37d3f2d2a03d936e2b72ad847fed1f514d98842e9f505de\" returns successfully"
Sep 12 17:48:37.777424 kubelet[2773]: E0912 17:48:37.776778 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:37.779301 kubelet[2773]: E0912 17:48:37.779237 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:37.789079 kubelet[2773]: I0912 17:48:37.788955 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xpdrs" podStartSLOduration=32.788936415 podStartE2EDuration="32.788936415s" podCreationTimestamp="2025-09-12 17:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:48:37.787625977 +0000 UTC m=+41.336063710" watchObservedRunningTime="2025-09-12 17:48:37.788936415 +0000 UTC m=+41.337374128"
Sep 12 17:48:37.810435 kubelet[2773]: I0912 17:48:37.809944 2773 pod_startup_latency_tracker.go:104]
"Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-f46xf" podStartSLOduration=32.809911465 podStartE2EDuration="32.809911465s" podCreationTimestamp="2025-09-12 17:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:48:37.797604396 +0000 UTC m=+41.346042119" watchObservedRunningTime="2025-09-12 17:48:37.809911465 +0000 UTC m=+41.358349188" Sep 12 17:48:38.780781 kubelet[2773]: E0912 17:48:38.780739 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:48:38.780968 kubelet[2773]: E0912 17:48:38.780750 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:48:39.783311 kubelet[2773]: E0912 17:48:39.783273 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:48:39.783786 kubelet[2773]: E0912 17:48:39.783416 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:48:44.440732 systemd[1]: Started sshd@9-10.0.0.100:22-10.0.0.1:45766.service - OpenSSH per-connection server daemon (10.0.0.1:45766). Sep 12 17:48:44.518286 sshd[4098]: Accepted publickey for core from 10.0.0.1 port 45766 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:48:44.520155 sshd-session[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:48:44.525001 systemd-logind[1557]: New session 10 of user core. Sep 12 17:48:44.532534 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 12 17:48:44.674145 sshd[4101]: Connection closed by 10.0.0.1 port 45766 Sep 12 17:48:44.674470 sshd-session[4098]: pam_unix(sshd:session): session closed for user core Sep 12 17:48:44.678475 systemd[1]: sshd@9-10.0.0.100:22-10.0.0.1:45766.service: Deactivated successfully. Sep 12 17:48:44.680474 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:48:44.681259 systemd-logind[1557]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:48:44.682546 systemd-logind[1557]: Removed session 10. Sep 12 17:48:49.687841 systemd[1]: Started sshd@10-10.0.0.100:22-10.0.0.1:45768.service - OpenSSH per-connection server daemon (10.0.0.1:45768). Sep 12 17:48:49.750227 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 45768 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:48:49.752036 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:48:49.756545 systemd-logind[1557]: New session 11 of user core. Sep 12 17:48:49.765575 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:48:49.883200 sshd[4122]: Connection closed by 10.0.0.1 port 45768 Sep 12 17:48:49.883665 sshd-session[4119]: pam_unix(sshd:session): session closed for user core Sep 12 17:48:49.888534 systemd[1]: sshd@10-10.0.0.100:22-10.0.0.1:45768.service: Deactivated successfully. Sep 12 17:48:49.890570 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:48:49.891357 systemd-logind[1557]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:48:49.893074 systemd-logind[1557]: Removed session 11. Sep 12 17:48:54.904664 systemd[1]: Started sshd@11-10.0.0.100:22-10.0.0.1:42988.service - OpenSSH per-connection server daemon (10.0.0.1:42988). 
Sep 12 17:48:54.969331 sshd[4136]: Accepted publickey for core from 10.0.0.1 port 42988 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:48:54.971584 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:48:54.977365 systemd-logind[1557]: New session 12 of user core. Sep 12 17:48:54.991643 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:48:55.113053 sshd[4139]: Connection closed by 10.0.0.1 port 42988 Sep 12 17:48:55.113490 sshd-session[4136]: pam_unix(sshd:session): session closed for user core Sep 12 17:48:55.118737 systemd[1]: sshd@11-10.0.0.100:22-10.0.0.1:42988.service: Deactivated successfully. Sep 12 17:48:55.120792 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:48:55.121770 systemd-logind[1557]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:48:55.122999 systemd-logind[1557]: Removed session 12. Sep 12 17:49:00.134647 systemd[1]: Started sshd@12-10.0.0.100:22-10.0.0.1:59334.service - OpenSSH per-connection server daemon (10.0.0.1:59334). Sep 12 17:49:00.190063 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 59334 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:49:00.191639 sshd-session[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:00.196670 systemd-logind[1557]: New session 13 of user core. Sep 12 17:49:00.210586 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:49:00.347278 sshd[4158]: Connection closed by 10.0.0.1 port 59334 Sep 12 17:49:00.347638 sshd-session[4155]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:00.352496 systemd[1]: sshd@12-10.0.0.100:22-10.0.0.1:59334.service: Deactivated successfully. Sep 12 17:49:00.354659 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:49:00.355373 systemd-logind[1557]: Session 13 logged out. Waiting for processes to exit. 
Sep 12 17:49:00.356379 systemd-logind[1557]: Removed session 13. Sep 12 17:49:05.373438 systemd[1]: Started sshd@13-10.0.0.100:22-10.0.0.1:59346.service - OpenSSH per-connection server daemon (10.0.0.1:59346). Sep 12 17:49:05.430313 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 59346 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:49:05.432156 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:05.437468 systemd-logind[1557]: New session 14 of user core. Sep 12 17:49:05.456707 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:49:05.584436 sshd[4175]: Connection closed by 10.0.0.1 port 59346 Sep 12 17:49:05.584934 sshd-session[4172]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:05.599065 systemd[1]: sshd@13-10.0.0.100:22-10.0.0.1:59346.service: Deactivated successfully. Sep 12 17:49:05.601740 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:49:05.602790 systemd-logind[1557]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:49:05.606711 systemd[1]: Started sshd@14-10.0.0.100:22-10.0.0.1:59358.service - OpenSSH per-connection server daemon (10.0.0.1:59358). Sep 12 17:49:05.607867 systemd-logind[1557]: Removed session 14. Sep 12 17:49:05.685107 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 59358 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:49:05.687035 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:05.692194 systemd-logind[1557]: New session 15 of user core. Sep 12 17:49:05.710643 systemd[1]: Started session-15.scope - Session 15 of User core. 
Sep 12 17:49:05.875361 sshd[4192]: Connection closed by 10.0.0.1 port 59358 Sep 12 17:49:05.876611 sshd-session[4189]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:05.887456 systemd[1]: sshd@14-10.0.0.100:22-10.0.0.1:59358.service: Deactivated successfully. Sep 12 17:49:05.891193 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:49:05.892987 systemd-logind[1557]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:49:05.899535 systemd[1]: Started sshd@15-10.0.0.100:22-10.0.0.1:59372.service - OpenSSH per-connection server daemon (10.0.0.1:59372). Sep 12 17:49:05.901260 systemd-logind[1557]: Removed session 15. Sep 12 17:49:05.959426 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 59372 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:49:05.961420 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:05.967013 systemd-logind[1557]: New session 16 of user core. Sep 12 17:49:05.980581 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:49:06.097138 sshd[4206]: Connection closed by 10.0.0.1 port 59372 Sep 12 17:49:06.097529 sshd-session[4203]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:06.102461 systemd[1]: sshd@15-10.0.0.100:22-10.0.0.1:59372.service: Deactivated successfully. Sep 12 17:49:06.104681 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:49:06.105619 systemd-logind[1557]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:49:06.106863 systemd-logind[1557]: Removed session 16. Sep 12 17:49:11.116724 systemd[1]: Started sshd@16-10.0.0.100:22-10.0.0.1:51654.service - OpenSSH per-connection server daemon (10.0.0.1:51654). 
Sep 12 17:49:11.179507 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 51654 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:49:11.181314 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:11.185752 systemd-logind[1557]: New session 17 of user core. Sep 12 17:49:11.195547 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:49:11.312018 sshd[4225]: Connection closed by 10.0.0.1 port 51654 Sep 12 17:49:11.312466 sshd-session[4222]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:11.316987 systemd[1]: sshd@16-10.0.0.100:22-10.0.0.1:51654.service: Deactivated successfully. Sep 12 17:49:11.319766 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:49:11.322466 systemd-logind[1557]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:49:11.323979 systemd-logind[1557]: Removed session 17. Sep 12 17:49:16.326158 systemd[1]: Started sshd@17-10.0.0.100:22-10.0.0.1:51656.service - OpenSSH per-connection server daemon (10.0.0.1:51656). Sep 12 17:49:16.374183 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 51656 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:49:16.375639 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:16.380771 systemd-logind[1557]: New session 18 of user core. Sep 12 17:49:16.390538 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:49:16.504679 sshd[4241]: Connection closed by 10.0.0.1 port 51656 Sep 12 17:49:16.505169 sshd-session[4238]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:16.522710 systemd[1]: sshd@17-10.0.0.100:22-10.0.0.1:51656.service: Deactivated successfully. Sep 12 17:49:16.524948 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:49:16.525898 systemd-logind[1557]: Session 18 logged out. Waiting for processes to exit. 
Sep 12 17:49:16.529123 systemd[1]: Started sshd@18-10.0.0.100:22-10.0.0.1:51670.service - OpenSSH per-connection server daemon (10.0.0.1:51670). Sep 12 17:49:16.530450 systemd-logind[1557]: Removed session 18. Sep 12 17:49:16.598234 sshd[4254]: Accepted publickey for core from 10.0.0.1 port 51670 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:49:16.600063 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:16.605582 systemd-logind[1557]: New session 19 of user core. Sep 12 17:49:16.612628 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:49:16.884456 sshd[4257]: Connection closed by 10.0.0.1 port 51670 Sep 12 17:49:16.887315 sshd-session[4254]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:16.895193 systemd[1]: sshd@18-10.0.0.100:22-10.0.0.1:51670.service: Deactivated successfully. Sep 12 17:49:16.898955 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:49:16.899877 systemd-logind[1557]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:49:16.903526 systemd[1]: Started sshd@19-10.0.0.100:22-10.0.0.1:51680.service - OpenSSH per-connection server daemon (10.0.0.1:51680). Sep 12 17:49:16.904369 systemd-logind[1557]: Removed session 19. Sep 12 17:49:16.998860 sshd[4268]: Accepted publickey for core from 10.0.0.1 port 51680 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:49:17.000841 sshd-session[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:17.006193 systemd-logind[1557]: New session 20 of user core. Sep 12 17:49:17.019612 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 12 17:49:17.594105 sshd[4271]: Connection closed by 10.0.0.1 port 51680 Sep 12 17:49:17.594663 sshd-session[4268]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:17.607239 systemd[1]: sshd@19-10.0.0.100:22-10.0.0.1:51680.service: Deactivated successfully. Sep 12 17:49:17.609685 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:49:17.610756 systemd-logind[1557]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:49:17.614153 systemd[1]: Started sshd@20-10.0.0.100:22-10.0.0.1:51692.service - OpenSSH per-connection server daemon (10.0.0.1:51692). Sep 12 17:49:17.615999 systemd-logind[1557]: Removed session 20. Sep 12 17:49:17.677042 sshd[4294]: Accepted publickey for core from 10.0.0.1 port 51692 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:49:17.679353 sshd-session[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:17.684432 systemd-logind[1557]: New session 21 of user core. Sep 12 17:49:17.691661 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 17:49:18.012187 sshd[4297]: Connection closed by 10.0.0.1 port 51692 Sep 12 17:49:18.012658 sshd-session[4294]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:18.025844 systemd[1]: sshd@20-10.0.0.100:22-10.0.0.1:51692.service: Deactivated successfully. Sep 12 17:49:18.028638 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:49:18.029902 systemd-logind[1557]: Session 21 logged out. Waiting for processes to exit. Sep 12 17:49:18.034156 systemd[1]: Started sshd@21-10.0.0.100:22-10.0.0.1:51708.service - OpenSSH per-connection server daemon (10.0.0.1:51708). Sep 12 17:49:18.035003 systemd-logind[1557]: Removed session 21. 
Sep 12 17:49:18.091539 sshd[4309]: Accepted publickey for core from 10.0.0.1 port 51708 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:49:18.093340 sshd-session[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:18.098462 systemd-logind[1557]: New session 22 of user core. Sep 12 17:49:18.107571 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 17:49:18.227090 sshd[4312]: Connection closed by 10.0.0.1 port 51708 Sep 12 17:49:18.227521 sshd-session[4309]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:18.232904 systemd[1]: sshd@21-10.0.0.100:22-10.0.0.1:51708.service: Deactivated successfully. Sep 12 17:49:18.235270 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 17:49:18.236307 systemd-logind[1557]: Session 22 logged out. Waiting for processes to exit. Sep 12 17:49:18.237691 systemd-logind[1557]: Removed session 22. Sep 12 17:49:18.534662 kubelet[2773]: E0912 17:49:18.534574 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:49:23.241711 systemd[1]: Started sshd@22-10.0.0.100:22-10.0.0.1:35098.service - OpenSSH per-connection server daemon (10.0.0.1:35098). Sep 12 17:49:23.308980 sshd[4325]: Accepted publickey for core from 10.0.0.1 port 35098 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:49:23.310992 sshd-session[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:23.315987 systemd-logind[1557]: New session 23 of user core. Sep 12 17:49:23.326639 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 12 17:49:23.446350 sshd[4328]: Connection closed by 10.0.0.1 port 35098 Sep 12 17:49:23.446697 sshd-session[4325]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:23.451623 systemd[1]: sshd@22-10.0.0.100:22-10.0.0.1:35098.service: Deactivated successfully. Sep 12 17:49:23.453820 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 17:49:23.454718 systemd-logind[1557]: Session 23 logged out. Waiting for processes to exit. Sep 12 17:49:23.456027 systemd-logind[1557]: Removed session 23. Sep 12 17:49:28.459683 systemd[1]: Started sshd@23-10.0.0.100:22-10.0.0.1:35112.service - OpenSSH per-connection server daemon (10.0.0.1:35112). Sep 12 17:49:28.528914 sshd[4344]: Accepted publickey for core from 10.0.0.1 port 35112 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:49:28.531091 sshd-session[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:28.536539 systemd-logind[1557]: New session 24 of user core. Sep 12 17:49:28.544532 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 17:49:28.780656 sshd[4347]: Connection closed by 10.0.0.1 port 35112 Sep 12 17:49:28.781055 sshd-session[4344]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:28.785326 systemd[1]: sshd@23-10.0.0.100:22-10.0.0.1:35112.service: Deactivated successfully. Sep 12 17:49:28.787353 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 17:49:28.788251 systemd-logind[1557]: Session 24 logged out. Waiting for processes to exit. Sep 12 17:49:28.789439 systemd-logind[1557]: Removed session 24. 
Sep 12 17:49:31.535730 kubelet[2773]: E0912 17:49:31.535658 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:49:31.536245 kubelet[2773]: E0912 17:49:31.536022 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:49:32.534623 kubelet[2773]: E0912 17:49:32.534572 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:49:33.798417 systemd[1]: Started sshd@24-10.0.0.100:22-10.0.0.1:60062.service - OpenSSH per-connection server daemon (10.0.0.1:60062). Sep 12 17:49:33.861641 sshd[4360]: Accepted publickey for core from 10.0.0.1 port 60062 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:49:33.863334 sshd-session[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:33.868034 systemd-logind[1557]: New session 25 of user core. Sep 12 17:49:33.877558 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 17:49:34.057940 sshd[4363]: Connection closed by 10.0.0.1 port 60062 Sep 12 17:49:34.058190 sshd-session[4360]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:34.062608 systemd[1]: sshd@24-10.0.0.100:22-10.0.0.1:60062.service: Deactivated successfully. Sep 12 17:49:34.064552 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 17:49:34.066624 systemd-logind[1557]: Session 25 logged out. Waiting for processes to exit. Sep 12 17:49:34.067552 systemd-logind[1557]: Removed session 25. 
Sep 12 17:49:36.461043 update_engine[1563]: I20250912 17:49:36.460942 1563 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 12 17:49:36.461043 update_engine[1563]: I20250912 17:49:36.461004 1563 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 12 17:49:36.461569 update_engine[1563]: I20250912 17:49:36.461309 1563 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 12 17:49:36.461922 update_engine[1563]: I20250912 17:49:36.461884 1563 omaha_request_params.cc:62] Current group set to beta Sep 12 17:49:36.463586 update_engine[1563]: I20250912 17:49:36.463543 1563 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 12 17:49:36.463586 update_engine[1563]: I20250912 17:49:36.463567 1563 update_attempter.cc:643] Scheduling an action processor start. Sep 12 17:49:36.463586 update_engine[1563]: I20250912 17:49:36.463588 1563 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 12 17:49:36.463779 update_engine[1563]: I20250912 17:49:36.463634 1563 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 12 17:49:36.463779 update_engine[1563]: I20250912 17:49:36.463711 1563 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 12 17:49:36.463779 update_engine[1563]: I20250912 17:49:36.463721 1563 omaha_request_action.cc:272] Request: Sep 12 17:49:36.463779 update_engine[1563]: Sep 12 17:49:36.463779 update_engine[1563]: Sep 12 17:49:36.463779 update_engine[1563]: Sep 12 17:49:36.463779 update_engine[1563]: Sep 12 17:49:36.463779 update_engine[1563]: Sep 12 17:49:36.463779 update_engine[1563]: Sep 12 17:49:36.463779 update_engine[1563]: Sep 12 17:49:36.463779 update_engine[1563]: Sep 12 17:49:36.463779 update_engine[1563]: I20250912 17:49:36.463729 1563 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 17:49:36.466977 locksmithd[1604]: LastCheckedTime=0 Progress=0 
CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 12 17:49:36.469249 update_engine[1563]: I20250912 17:49:36.468673 1563 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 17:49:36.470233 update_engine[1563]: I20250912 17:49:36.470151 1563 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 12 17:49:36.480047 update_engine[1563]: E20250912 17:49:36.479976 1563 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 17:49:36.480205 update_engine[1563]: I20250912 17:49:36.480109 1563 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 12 17:49:36.534848 kubelet[2773]: E0912 17:49:36.534781 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:49:39.074510 systemd[1]: Started sshd@25-10.0.0.100:22-10.0.0.1:60076.service - OpenSSH per-connection server daemon (10.0.0.1:60076). Sep 12 17:49:39.136217 sshd[4378]: Accepted publickey for core from 10.0.0.1 port 60076 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:49:39.138308 sshd-session[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:39.143875 systemd-logind[1557]: New session 26 of user core. Sep 12 17:49:39.153644 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 17:49:39.282220 sshd[4381]: Connection closed by 10.0.0.1 port 60076 Sep 12 17:49:39.282681 sshd-session[4378]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:39.294160 systemd[1]: sshd@25-10.0.0.100:22-10.0.0.1:60076.service: Deactivated successfully. Sep 12 17:49:39.296884 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 17:49:39.297963 systemd-logind[1557]: Session 26 logged out. Waiting for processes to exit. 
Sep 12 17:49:39.301961 systemd[1]: Started sshd@26-10.0.0.100:22-10.0.0.1:60082.service - OpenSSH per-connection server daemon (10.0.0.1:60082). Sep 12 17:49:39.303048 systemd-logind[1557]: Removed session 26. Sep 12 17:49:39.372368 sshd[4394]: Accepted publickey for core from 10.0.0.1 port 60082 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:49:39.374758 sshd-session[4394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:39.381648 systemd-logind[1557]: New session 27 of user core. Sep 12 17:49:39.392757 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 12 17:49:40.761754 containerd[1572]: time="2025-09-12T17:49:40.760375294Z" level=info msg="StopContainer for \"27572e5a457050e67ceec90ba9c704a430ca196c3449edfd62832295b87e785b\" with timeout 30 (s)" Sep 12 17:49:40.770583 containerd[1572]: time="2025-09-12T17:49:40.770535815Z" level=info msg="Stop container \"27572e5a457050e67ceec90ba9c704a430ca196c3449edfd62832295b87e785b\" with signal terminated" Sep 12 17:49:40.785359 systemd[1]: cri-containerd-27572e5a457050e67ceec90ba9c704a430ca196c3449edfd62832295b87e785b.scope: Deactivated successfully. 
Sep 12 17:49:40.789867 containerd[1572]: time="2025-09-12T17:49:40.789819355Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27572e5a457050e67ceec90ba9c704a430ca196c3449edfd62832295b87e785b\" id:\"27572e5a457050e67ceec90ba9c704a430ca196c3449edfd62832295b87e785b\" pid:3376 exited_at:{seconds:1757699380 nanos:789199394}" Sep 12 17:49:40.790210 containerd[1572]: time="2025-09-12T17:49:40.790165519Z" level=info msg="received exit event container_id:\"27572e5a457050e67ceec90ba9c704a430ca196c3449edfd62832295b87e785b\" id:\"27572e5a457050e67ceec90ba9c704a430ca196c3449edfd62832295b87e785b\" pid:3376 exited_at:{seconds:1757699380 nanos:789199394}" Sep 12 17:49:40.805944 containerd[1572]: time="2025-09-12T17:49:40.805894101Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017\" id:\"fa04231469a5ced575fcde10d5f5ee0b0f8a7155480a88220ad6d198feac9ce7\" pid:4425 exited_at:{seconds:1757699380 nanos:805268208}" Sep 12 17:49:40.809166 containerd[1572]: time="2025-09-12T17:49:40.808727405Z" level=info msg="StopContainer for \"b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017\" with timeout 2 (s)" Sep 12 17:49:40.809166 containerd[1572]: time="2025-09-12T17:49:40.809066776Z" level=info msg="Stop container \"b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017\" with signal terminated" Sep 12 17:49:40.809344 containerd[1572]: time="2025-09-12T17:49:40.809216990Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:49:40.817992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27572e5a457050e67ceec90ba9c704a430ca196c3449edfd62832295b87e785b-rootfs.mount: Deactivated successfully. 
Sep 12 17:49:40.821196 systemd-networkd[1489]: lxc_health: Link DOWN Sep 12 17:49:40.821206 systemd-networkd[1489]: lxc_health: Lost carrier Sep 12 17:49:40.850981 systemd[1]: cri-containerd-b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017.scope: Deactivated successfully. Sep 12 17:49:40.851375 systemd[1]: cri-containerd-b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017.scope: Consumed 7.190s CPU time, 120.8M memory peak, 416K read from disk, 13.3M written to disk. Sep 12 17:49:40.854716 containerd[1572]: time="2025-09-12T17:49:40.854507506Z" level=info msg="received exit event container_id:\"b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017\" id:\"b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017\" pid:3410 exited_at:{seconds:1757699380 nanos:853978466}" Sep 12 17:49:40.854716 containerd[1572]: time="2025-09-12T17:49:40.854670673Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017\" id:\"b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017\" pid:3410 exited_at:{seconds:1757699380 nanos:853978466}" Sep 12 17:49:40.867383 containerd[1572]: time="2025-09-12T17:49:40.867337841Z" level=info msg="StopContainer for \"27572e5a457050e67ceec90ba9c704a430ca196c3449edfd62832295b87e785b\" returns successfully" Sep 12 17:49:40.879278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017-rootfs.mount: Deactivated successfully. 
Sep 12 17:49:41.034248 containerd[1572]: time="2025-09-12T17:49:41.034111234Z" level=info msg="StopPodSandbox for \"fe6f8dc0d1b9aec7362a035847275f15000c13af17b07b21e08e50de1a25a0d4\"" Sep 12 17:49:41.043149 containerd[1572]: time="2025-09-12T17:49:41.043062607Z" level=info msg="Container to stop \"27572e5a457050e67ceec90ba9c704a430ca196c3449edfd62832295b87e785b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:49:41.050565 systemd[1]: cri-containerd-fe6f8dc0d1b9aec7362a035847275f15000c13af17b07b21e08e50de1a25a0d4.scope: Deactivated successfully. Sep 12 17:49:41.052433 containerd[1572]: time="2025-09-12T17:49:41.052378919Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe6f8dc0d1b9aec7362a035847275f15000c13af17b07b21e08e50de1a25a0d4\" id:\"fe6f8dc0d1b9aec7362a035847275f15000c13af17b07b21e08e50de1a25a0d4\" pid:2991 exit_status:137 exited_at:{seconds:1757699381 nanos:51595599}" Sep 12 17:49:41.082035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe6f8dc0d1b9aec7362a035847275f15000c13af17b07b21e08e50de1a25a0d4-rootfs.mount: Deactivated successfully. 
Sep 12 17:49:41.211341 containerd[1572]: time="2025-09-12T17:49:41.211294661Z" level=info msg="shim disconnected" id=fe6f8dc0d1b9aec7362a035847275f15000c13af17b07b21e08e50de1a25a0d4 namespace=k8s.io Sep 12 17:49:41.211341 containerd[1572]: time="2025-09-12T17:49:41.211328705Z" level=warning msg="cleaning up after shim disconnected" id=fe6f8dc0d1b9aec7362a035847275f15000c13af17b07b21e08e50de1a25a0d4 namespace=k8s.io Sep 12 17:49:41.235600 containerd[1572]: time="2025-09-12T17:49:41.211336189Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:49:41.235766 containerd[1572]: time="2025-09-12T17:49:41.219202612Z" level=info msg="StopContainer for \"b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017\" returns successfully" Sep 12 17:49:41.236276 containerd[1572]: time="2025-09-12T17:49:41.236246786Z" level=info msg="StopPodSandbox for \"a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579\"" Sep 12 17:49:41.236355 containerd[1572]: time="2025-09-12T17:49:41.236335784Z" level=info msg="Container to stop \"1ae88b6a99835a944226e1c302498d2173ed5754757527ff41a8ed6d57ffdd43\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:49:41.236415 containerd[1572]: time="2025-09-12T17:49:41.236354098Z" level=info msg="Container to stop \"1ed7e2d524072876615e47b9866a786b74201d19eee4113e36ddf0af6de59fc1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:49:41.236415 containerd[1572]: time="2025-09-12T17:49:41.236364798Z" level=info msg="Container to stop \"b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:49:41.236415 containerd[1572]: time="2025-09-12T17:49:41.236376450Z" level=info msg="Container to stop \"9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:49:41.236508 
containerd[1572]: time="2025-09-12T17:49:41.236418741Z" level=info msg="Container to stop \"59a92b718f34f6bc7b89db8b3bbf113c508874b79fb7aebdd00662c4be3ecac8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:49:41.243229 systemd[1]: cri-containerd-a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579.scope: Deactivated successfully. Sep 12 17:49:41.266078 containerd[1572]: time="2025-09-12T17:49:41.265973652Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579\" id:\"a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579\" pid:2931 exit_status:137 exited_at:{seconds:1757699381 nanos:244016256}" Sep 12 17:49:41.268608 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe6f8dc0d1b9aec7362a035847275f15000c13af17b07b21e08e50de1a25a0d4-shm.mount: Deactivated successfully. Sep 12 17:49:41.268751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579-rootfs.mount: Deactivated successfully. 
Sep 12 17:49:41.271542 containerd[1572]: time="2025-09-12T17:49:41.270512699Z" level=info msg="received exit event sandbox_id:\"fe6f8dc0d1b9aec7362a035847275f15000c13af17b07b21e08e50de1a25a0d4\" exit_status:137 exited_at:{seconds:1757699381 nanos:51595599}" Sep 12 17:49:41.273932 containerd[1572]: time="2025-09-12T17:49:41.273898777Z" level=info msg="TearDown network for sandbox \"fe6f8dc0d1b9aec7362a035847275f15000c13af17b07b21e08e50de1a25a0d4\" successfully" Sep 12 17:49:41.273932 containerd[1572]: time="2025-09-12T17:49:41.273925697Z" level=info msg="StopPodSandbox for \"fe6f8dc0d1b9aec7362a035847275f15000c13af17b07b21e08e50de1a25a0d4\" returns successfully" Sep 12 17:49:41.275950 containerd[1572]: time="2025-09-12T17:49:41.275908995Z" level=info msg="received exit event sandbox_id:\"a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579\" exit_status:137 exited_at:{seconds:1757699381 nanos:244016256}" Sep 12 17:49:41.276610 containerd[1572]: time="2025-09-12T17:49:41.276479332Z" level=info msg="shim disconnected" id=a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579 namespace=k8s.io Sep 12 17:49:41.276610 containerd[1572]: time="2025-09-12T17:49:41.276502096Z" level=warning msg="cleaning up after shim disconnected" id=a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579 namespace=k8s.io Sep 12 17:49:41.276610 containerd[1572]: time="2025-09-12T17:49:41.276509149Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:49:41.276824 containerd[1572]: time="2025-09-12T17:49:41.276711812Z" level=info msg="TearDown network for sandbox \"a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579\" successfully" Sep 12 17:49:41.276824 containerd[1572]: time="2025-09-12T17:49:41.276734855Z" level=info msg="StopPodSandbox for \"a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579\" returns successfully" Sep 12 17:49:41.302813 kubelet[2773]: I0912 17:49:41.302568 2773 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-l4pmq\" (UniqueName: \"kubernetes.io/projected/fe04a8aa-43fe-421f-a03f-d2949bfd6a01-kube-api-access-l4pmq\") pod \"fe04a8aa-43fe-421f-a03f-d2949bfd6a01\" (UID: \"fe04a8aa-43fe-421f-a03f-d2949bfd6a01\") " Sep 12 17:49:41.302813 kubelet[2773]: I0912 17:49:41.302615 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fe04a8aa-43fe-421f-a03f-d2949bfd6a01-cilium-config-path\") pod \"fe04a8aa-43fe-421f-a03f-d2949bfd6a01\" (UID: \"fe04a8aa-43fe-421f-a03f-d2949bfd6a01\") " Sep 12 17:49:41.306346 kubelet[2773]: I0912 17:49:41.306315 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe04a8aa-43fe-421f-a03f-d2949bfd6a01-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fe04a8aa-43fe-421f-a03f-d2949bfd6a01" (UID: "fe04a8aa-43fe-421f-a03f-d2949bfd6a01"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:49:41.309140 kubelet[2773]: I0912 17:49:41.309062 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe04a8aa-43fe-421f-a03f-d2949bfd6a01-kube-api-access-l4pmq" (OuterVolumeSpecName: "kube-api-access-l4pmq") pod "fe04a8aa-43fe-421f-a03f-d2949bfd6a01" (UID: "fe04a8aa-43fe-421f-a03f-d2949bfd6a01"). InnerVolumeSpecName "kube-api-access-l4pmq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:49:41.402919 kubelet[2773]: I0912 17:49:41.402843 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-host-proc-sys-kernel\") pod \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " Sep 12 17:49:41.402919 kubelet[2773]: I0912 17:49:41.402908 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-lib-modules\") pod \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " Sep 12 17:49:41.403155 kubelet[2773]: I0912 17:49:41.402944 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e5ff8fbb-8afd-4b9c-8110-71997e046f75-hubble-tls\") pod \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " Sep 12 17:49:41.403155 kubelet[2773]: I0912 17:49:41.402965 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-cilium-run\") pod \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " Sep 12 17:49:41.403155 kubelet[2773]: I0912 17:49:41.402989 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e5ff8fbb-8afd-4b9c-8110-71997e046f75-clustermesh-secrets\") pod \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " Sep 12 17:49:41.403155 kubelet[2773]: I0912 17:49:41.403008 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-host-proc-sys-net\") pod \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " Sep 12 17:49:41.403155 kubelet[2773]: I0912 17:49:41.403029 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5ff8fbb-8afd-4b9c-8110-71997e046f75-cilium-config-path\") pod \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " Sep 12 17:49:41.403155 kubelet[2773]: I0912 17:49:41.403022 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e5ff8fbb-8afd-4b9c-8110-71997e046f75" (UID: "e5ff8fbb-8afd-4b9c-8110-71997e046f75"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:49:41.403372 kubelet[2773]: I0912 17:49:41.403051 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-cilium-cgroup\") pod \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " Sep 12 17:49:41.403372 kubelet[2773]: I0912 17:49:41.403105 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e5ff8fbb-8afd-4b9c-8110-71997e046f75" (UID: "e5ff8fbb-8afd-4b9c-8110-71997e046f75"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:49:41.403372 kubelet[2773]: I0912 17:49:41.403145 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e5ff8fbb-8afd-4b9c-8110-71997e046f75" (UID: "e5ff8fbb-8afd-4b9c-8110-71997e046f75"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:49:41.403372 kubelet[2773]: I0912 17:49:41.403155 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-hostproc\") pod \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " Sep 12 17:49:41.403372 kubelet[2773]: I0912 17:49:41.403182 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdtdj\" (UniqueName: \"kubernetes.io/projected/e5ff8fbb-8afd-4b9c-8110-71997e046f75-kube-api-access-xdtdj\") pod \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " Sep 12 17:49:41.403584 kubelet[2773]: I0912 17:49:41.403204 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-xtables-lock\") pod \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " Sep 12 17:49:41.403584 kubelet[2773]: I0912 17:49:41.403223 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-bpf-maps\") pod \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " Sep 12 17:49:41.403584 kubelet[2773]: I0912 17:49:41.403237 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-etc-cni-netd\") pod \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " Sep 12 17:49:41.403584 kubelet[2773]: I0912 17:49:41.403255 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-cni-path\") pod \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\" (UID: \"e5ff8fbb-8afd-4b9c-8110-71997e046f75\") " Sep 12 17:49:41.403584 kubelet[2773]: I0912 17:49:41.403307 2773 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fe04a8aa-43fe-421f-a03f-d2949bfd6a01-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 17:49:41.403584 kubelet[2773]: I0912 17:49:41.403316 2773 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 12 17:49:41.403584 kubelet[2773]: I0912 17:49:41.403324 2773 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 12 17:49:41.403878 kubelet[2773]: I0912 17:49:41.403332 2773 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 12 17:49:41.403878 kubelet[2773]: I0912 17:49:41.403341 2773 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l4pmq\" (UniqueName: \"kubernetes.io/projected/fe04a8aa-43fe-421f-a03f-d2949bfd6a01-kube-api-access-l4pmq\") on node \"localhost\" DevicePath \"\"" Sep 12 17:49:41.403878 kubelet[2773]: I0912 
17:49:41.403364 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-cni-path" (OuterVolumeSpecName: "cni-path") pod "e5ff8fbb-8afd-4b9c-8110-71997e046f75" (UID: "e5ff8fbb-8afd-4b9c-8110-71997e046f75"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:49:41.403878 kubelet[2773]: I0912 17:49:41.403381 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-hostproc" (OuterVolumeSpecName: "hostproc") pod "e5ff8fbb-8afd-4b9c-8110-71997e046f75" (UID: "e5ff8fbb-8afd-4b9c-8110-71997e046f75"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:49:41.403878 kubelet[2773]: I0912 17:49:41.403448 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e5ff8fbb-8afd-4b9c-8110-71997e046f75" (UID: "e5ff8fbb-8afd-4b9c-8110-71997e046f75"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:49:41.405046 kubelet[2773]: I0912 17:49:41.404954 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e5ff8fbb-8afd-4b9c-8110-71997e046f75" (UID: "e5ff8fbb-8afd-4b9c-8110-71997e046f75"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:49:41.405046 kubelet[2773]: I0912 17:49:41.405014 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e5ff8fbb-8afd-4b9c-8110-71997e046f75" (UID: "e5ff8fbb-8afd-4b9c-8110-71997e046f75"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:49:41.405357 kubelet[2773]: I0912 17:49:41.405298 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e5ff8fbb-8afd-4b9c-8110-71997e046f75" (UID: "e5ff8fbb-8afd-4b9c-8110-71997e046f75"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:49:41.405357 kubelet[2773]: I0912 17:49:41.405331 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e5ff8fbb-8afd-4b9c-8110-71997e046f75" (UID: "e5ff8fbb-8afd-4b9c-8110-71997e046f75"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:49:41.406696 kubelet[2773]: I0912 17:49:41.406656 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5ff8fbb-8afd-4b9c-8110-71997e046f75-kube-api-access-xdtdj" (OuterVolumeSpecName: "kube-api-access-xdtdj") pod "e5ff8fbb-8afd-4b9c-8110-71997e046f75" (UID: "e5ff8fbb-8afd-4b9c-8110-71997e046f75"). InnerVolumeSpecName "kube-api-access-xdtdj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:49:41.407179 kubelet[2773]: I0912 17:49:41.407142 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5ff8fbb-8afd-4b9c-8110-71997e046f75-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e5ff8fbb-8afd-4b9c-8110-71997e046f75" (UID: "e5ff8fbb-8afd-4b9c-8110-71997e046f75"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:49:41.407745 kubelet[2773]: I0912 17:49:41.407718 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5ff8fbb-8afd-4b9c-8110-71997e046f75-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e5ff8fbb-8afd-4b9c-8110-71997e046f75" (UID: "e5ff8fbb-8afd-4b9c-8110-71997e046f75"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:49:41.408332 kubelet[2773]: I0912 17:49:41.408237 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5ff8fbb-8afd-4b9c-8110-71997e046f75-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e5ff8fbb-8afd-4b9c-8110-71997e046f75" (UID: "e5ff8fbb-8afd-4b9c-8110-71997e046f75"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:49:41.503598 kubelet[2773]: I0912 17:49:41.503538 2773 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 12 17:49:41.503598 kubelet[2773]: I0912 17:49:41.503575 2773 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e5ff8fbb-8afd-4b9c-8110-71997e046f75-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 12 17:49:41.503598 kubelet[2773]: I0912 17:49:41.503587 2773 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 12 17:49:41.503598 kubelet[2773]: I0912 17:49:41.503596 2773 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5ff8fbb-8afd-4b9c-8110-71997e046f75-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 17:49:41.503598 kubelet[2773]: I0912 17:49:41.503607 2773 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e5ff8fbb-8afd-4b9c-8110-71997e046f75-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 12 17:49:41.503598 kubelet[2773]: I0912 17:49:41.503616 2773 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 12 17:49:41.503937 kubelet[2773]: I0912 17:49:41.503624 2773 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xdtdj\" (UniqueName: \"kubernetes.io/projected/e5ff8fbb-8afd-4b9c-8110-71997e046f75-kube-api-access-xdtdj\") on node \"localhost\" DevicePath \"\"" Sep 12 17:49:41.503937 kubelet[2773]: 
I0912 17:49:41.503634 2773 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 12 17:49:41.503937 kubelet[2773]: I0912 17:49:41.503642 2773 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 12 17:49:41.503937 kubelet[2773]: I0912 17:49:41.503656 2773 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 12 17:49:41.503937 kubelet[2773]: I0912 17:49:41.503670 2773 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e5ff8fbb-8afd-4b9c-8110-71997e046f75-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 12 17:49:41.585915 kubelet[2773]: E0912 17:49:41.585870 2773 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 17:49:41.818342 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579-shm.mount: Deactivated successfully. Sep 12 17:49:41.818486 systemd[1]: var-lib-kubelet-pods-fe04a8aa\x2d43fe\x2d421f\x2da03f\x2dd2949bfd6a01-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl4pmq.mount: Deactivated successfully. Sep 12 17:49:41.818588 systemd[1]: var-lib-kubelet-pods-e5ff8fbb\x2d8afd\x2d4b9c\x2d8110\x2d71997e046f75-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxdtdj.mount: Deactivated successfully. 
Sep 12 17:49:41.818676 systemd[1]: var-lib-kubelet-pods-e5ff8fbb\x2d8afd\x2d4b9c\x2d8110\x2d71997e046f75-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 17:49:41.818764 systemd[1]: var-lib-kubelet-pods-e5ff8fbb\x2d8afd\x2d4b9c\x2d8110\x2d71997e046f75-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 12 17:49:41.947229 kubelet[2773]: I0912 17:49:41.947043 2773 scope.go:117] "RemoveContainer" containerID="27572e5a457050e67ceec90ba9c704a430ca196c3449edfd62832295b87e785b" Sep 12 17:49:41.949708 containerd[1572]: time="2025-09-12T17:49:41.949642340Z" level=info msg="RemoveContainer for \"27572e5a457050e67ceec90ba9c704a430ca196c3449edfd62832295b87e785b\"" Sep 12 17:49:41.954245 systemd[1]: Removed slice kubepods-besteffort-podfe04a8aa_43fe_421f_a03f_d2949bfd6a01.slice - libcontainer container kubepods-besteffort-podfe04a8aa_43fe_421f_a03f_d2949bfd6a01.slice. Sep 12 17:49:41.973553 systemd[1]: Removed slice kubepods-burstable-pode5ff8fbb_8afd_4b9c_8110_71997e046f75.slice - libcontainer container kubepods-burstable-pode5ff8fbb_8afd_4b9c_8110_71997e046f75.slice. Sep 12 17:49:41.973664 systemd[1]: kubepods-burstable-pode5ff8fbb_8afd_4b9c_8110_71997e046f75.slice: Consumed 7.298s CPU time, 121.2M memory peak, 428K read from disk, 15.4M written to disk. 
Sep 12 17:49:42.059750 containerd[1572]: time="2025-09-12T17:49:42.059679223Z" level=info msg="RemoveContainer for \"27572e5a457050e67ceec90ba9c704a430ca196c3449edfd62832295b87e785b\" returns successfully" Sep 12 17:49:42.060109 kubelet[2773]: I0912 17:49:42.060062 2773 scope.go:117] "RemoveContainer" containerID="b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017" Sep 12 17:49:42.062142 containerd[1572]: time="2025-09-12T17:49:42.062106268Z" level=info msg="RemoveContainer for \"b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017\"" Sep 12 17:49:42.302024 containerd[1572]: time="2025-09-12T17:49:42.301735652Z" level=info msg="RemoveContainer for \"b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017\" returns successfully" Sep 12 17:49:42.303028 kubelet[2773]: I0912 17:49:42.302149 2773 scope.go:117] "RemoveContainer" containerID="59a92b718f34f6bc7b89db8b3bbf113c508874b79fb7aebdd00662c4be3ecac8" Sep 12 17:49:42.304140 containerd[1572]: time="2025-09-12T17:49:42.304101331Z" level=info msg="RemoveContainer for \"59a92b718f34f6bc7b89db8b3bbf113c508874b79fb7aebdd00662c4be3ecac8\"" Sep 12 17:49:42.416800 containerd[1572]: time="2025-09-12T17:49:42.416751939Z" level=info msg="RemoveContainer for \"59a92b718f34f6bc7b89db8b3bbf113c508874b79fb7aebdd00662c4be3ecac8\" returns successfully" Sep 12 17:49:42.417144 kubelet[2773]: I0912 17:49:42.417067 2773 scope.go:117] "RemoveContainer" containerID="1ed7e2d524072876615e47b9866a786b74201d19eee4113e36ddf0af6de59fc1" Sep 12 17:49:42.419468 containerd[1572]: time="2025-09-12T17:49:42.419444796Z" level=info msg="RemoveContainer for \"1ed7e2d524072876615e47b9866a786b74201d19eee4113e36ddf0af6de59fc1\"" Sep 12 17:49:42.528757 containerd[1572]: time="2025-09-12T17:49:42.528676785Z" level=info msg="RemoveContainer for \"1ed7e2d524072876615e47b9866a786b74201d19eee4113e36ddf0af6de59fc1\" returns successfully" Sep 12 17:49:42.529116 kubelet[2773]: I0912 17:49:42.529043 2773 scope.go:117] 
"RemoveContainer" containerID="1ae88b6a99835a944226e1c302498d2173ed5754757527ff41a8ed6d57ffdd43" Sep 12 17:49:42.531309 containerd[1572]: time="2025-09-12T17:49:42.531267009Z" level=info msg="RemoveContainer for \"1ae88b6a99835a944226e1c302498d2173ed5754757527ff41a8ed6d57ffdd43\"" Sep 12 17:49:42.537736 kubelet[2773]: I0912 17:49:42.537662 2773 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe04a8aa-43fe-421f-a03f-d2949bfd6a01" path="/var/lib/kubelet/pods/fe04a8aa-43fe-421f-a03f-d2949bfd6a01/volumes" Sep 12 17:49:42.583107 containerd[1572]: time="2025-09-12T17:49:42.583020151Z" level=info msg="RemoveContainer for \"1ae88b6a99835a944226e1c302498d2173ed5754757527ff41a8ed6d57ffdd43\" returns successfully" Sep 12 17:49:42.583587 kubelet[2773]: I0912 17:49:42.583550 2773 scope.go:117] "RemoveContainer" containerID="9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246" Sep 12 17:49:42.585719 containerd[1572]: time="2025-09-12T17:49:42.585669585Z" level=info msg="RemoveContainer for \"9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246\"" Sep 12 17:49:42.745485 containerd[1572]: time="2025-09-12T17:49:42.745423489Z" level=info msg="RemoveContainer for \"9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246\" returns successfully" Sep 12 17:49:42.745839 kubelet[2773]: I0912 17:49:42.745792 2773 scope.go:117] "RemoveContainer" containerID="b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017" Sep 12 17:49:42.746221 containerd[1572]: time="2025-09-12T17:49:42.746171994Z" level=error msg="ContainerStatus for \"b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017\": not found" Sep 12 17:49:42.747695 kubelet[2773]: E0912 17:49:42.747666 2773 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
an error occurred when try to find container \"b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017\": not found" containerID="b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017" Sep 12 17:49:42.747775 kubelet[2773]: I0912 17:49:42.747697 2773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017"} err="failed to get container status \"b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017\": rpc error: code = NotFound desc = an error occurred when try to find container \"b6c62b7abbe592068aaf2e01a8af053493127e337546bb3022c083cac2aa8017\": not found" Sep 12 17:49:42.747775 kubelet[2773]: I0912 17:49:42.747774 2773 scope.go:117] "RemoveContainer" containerID="59a92b718f34f6bc7b89db8b3bbf113c508874b79fb7aebdd00662c4be3ecac8" Sep 12 17:49:42.748001 containerd[1572]: time="2025-09-12T17:49:42.747965472Z" level=error msg="ContainerStatus for \"59a92b718f34f6bc7b89db8b3bbf113c508874b79fb7aebdd00662c4be3ecac8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"59a92b718f34f6bc7b89db8b3bbf113c508874b79fb7aebdd00662c4be3ecac8\": not found" Sep 12 17:49:42.748203 kubelet[2773]: E0912 17:49:42.748176 2773 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"59a92b718f34f6bc7b89db8b3bbf113c508874b79fb7aebdd00662c4be3ecac8\": not found" containerID="59a92b718f34f6bc7b89db8b3bbf113c508874b79fb7aebdd00662c4be3ecac8" Sep 12 17:49:42.748251 kubelet[2773]: I0912 17:49:42.748215 2773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"59a92b718f34f6bc7b89db8b3bbf113c508874b79fb7aebdd00662c4be3ecac8"} err="failed to get container status \"59a92b718f34f6bc7b89db8b3bbf113c508874b79fb7aebdd00662c4be3ecac8\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"59a92b718f34f6bc7b89db8b3bbf113c508874b79fb7aebdd00662c4be3ecac8\": not found" Sep 12 17:49:42.748251 kubelet[2773]: I0912 17:49:42.748247 2773 scope.go:117] "RemoveContainer" containerID="1ed7e2d524072876615e47b9866a786b74201d19eee4113e36ddf0af6de59fc1" Sep 12 17:49:42.748488 containerd[1572]: time="2025-09-12T17:49:42.748453424Z" level=error msg="ContainerStatus for \"1ed7e2d524072876615e47b9866a786b74201d19eee4113e36ddf0af6de59fc1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1ed7e2d524072876615e47b9866a786b74201d19eee4113e36ddf0af6de59fc1\": not found" Sep 12 17:49:42.748605 kubelet[2773]: E0912 17:49:42.748572 2773 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1ed7e2d524072876615e47b9866a786b74201d19eee4113e36ddf0af6de59fc1\": not found" containerID="1ed7e2d524072876615e47b9866a786b74201d19eee4113e36ddf0af6de59fc1" Sep 12 17:49:42.748605 kubelet[2773]: I0912 17:49:42.748594 2773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1ed7e2d524072876615e47b9866a786b74201d19eee4113e36ddf0af6de59fc1"} err="failed to get container status \"1ed7e2d524072876615e47b9866a786b74201d19eee4113e36ddf0af6de59fc1\": rpc error: code = NotFound desc = an error occurred when try to find container \"1ed7e2d524072876615e47b9866a786b74201d19eee4113e36ddf0af6de59fc1\": not found" Sep 12 17:49:42.748605 kubelet[2773]: I0912 17:49:42.748612 2773 scope.go:117] "RemoveContainer" containerID="1ae88b6a99835a944226e1c302498d2173ed5754757527ff41a8ed6d57ffdd43" Sep 12 17:49:42.748918 containerd[1572]: time="2025-09-12T17:49:42.748870852Z" level=error msg="ContainerStatus for \"1ae88b6a99835a944226e1c302498d2173ed5754757527ff41a8ed6d57ffdd43\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"1ae88b6a99835a944226e1c302498d2173ed5754757527ff41a8ed6d57ffdd43\": not found" Sep 12 17:49:42.749032 kubelet[2773]: E0912 17:49:42.749010 2773 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1ae88b6a99835a944226e1c302498d2173ed5754757527ff41a8ed6d57ffdd43\": not found" containerID="1ae88b6a99835a944226e1c302498d2173ed5754757527ff41a8ed6d57ffdd43" Sep 12 17:49:42.749075 kubelet[2773]: I0912 17:49:42.749031 2773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1ae88b6a99835a944226e1c302498d2173ed5754757527ff41a8ed6d57ffdd43"} err="failed to get container status \"1ae88b6a99835a944226e1c302498d2173ed5754757527ff41a8ed6d57ffdd43\": rpc error: code = NotFound desc = an error occurred when try to find container \"1ae88b6a99835a944226e1c302498d2173ed5754757527ff41a8ed6d57ffdd43\": not found" Sep 12 17:49:42.749075 kubelet[2773]: I0912 17:49:42.749046 2773 scope.go:117] "RemoveContainer" containerID="9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246" Sep 12 17:49:42.749264 containerd[1572]: time="2025-09-12T17:49:42.749231273Z" level=error msg="ContainerStatus for \"9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246\": not found" Sep 12 17:49:42.749450 kubelet[2773]: E0912 17:49:42.749418 2773 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246\": not found" containerID="9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246" Sep 12 17:49:42.749507 kubelet[2773]: I0912 17:49:42.749460 2773 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246"} err="failed to get container status \"9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f98db2dd9e22b8eb9a20a5ee8314b7d46c2780b41afbb5fc995e0e1da82b246\": not found" Sep 12 17:49:42.909753 systemd[1]: Started sshd@27-10.0.0.100:22-10.0.0.1:34118.service - OpenSSH per-connection server daemon (10.0.0.1:34118). Sep 12 17:49:42.977598 sshd[4549]: Accepted publickey for core from 10.0.0.1 port 34118 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:49:42.979313 sshd-session[4549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:42.983901 systemd-logind[1557]: New session 28 of user core. Sep 12 17:49:42.994630 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 12 17:49:43.107351 sshd[4397]: Connection closed by 10.0.0.1 port 60082 Sep 12 17:49:43.107780 sshd-session[4394]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:43.113045 systemd[1]: sshd@26-10.0.0.100:22-10.0.0.1:60082.service: Deactivated successfully. Sep 12 17:49:43.115104 systemd[1]: session-27.scope: Deactivated successfully. Sep 12 17:49:43.116248 systemd-logind[1557]: Session 27 logged out. Waiting for processes to exit. Sep 12 17:49:43.118023 systemd-logind[1557]: Removed session 27. Sep 12 17:49:44.131774 sshd[4552]: Connection closed by 10.0.0.1 port 34118 Sep 12 17:49:44.132284 sshd-session[4549]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:44.148189 systemd[1]: sshd@27-10.0.0.100:22-10.0.0.1:34118.service: Deactivated successfully. Sep 12 17:49:44.151279 systemd[1]: session-28.scope: Deactivated successfully. Sep 12 17:49:44.154453 systemd-logind[1557]: Session 28 logged out. Waiting for processes to exit. 
Sep 12 17:49:44.156135 kubelet[2773]: I0912 17:49:44.155428 2773 memory_manager.go:355] "RemoveStaleState removing state" podUID="fe04a8aa-43fe-421f-a03f-d2949bfd6a01" containerName="cilium-operator" Sep 12 17:49:44.156135 kubelet[2773]: I0912 17:49:44.155476 2773 memory_manager.go:355] "RemoveStaleState removing state" podUID="e5ff8fbb-8afd-4b9c-8110-71997e046f75" containerName="cilium-agent" Sep 12 17:49:44.157910 systemd-logind[1557]: Removed session 28. Sep 12 17:49:44.160967 systemd[1]: Started sshd@28-10.0.0.100:22-10.0.0.1:34134.service - OpenSSH per-connection server daemon (10.0.0.1:34134). Sep 12 17:49:44.177872 systemd[1]: Created slice kubepods-burstable-pode8e9bcb5_14a6_4759_9035_ddc998f63c51.slice - libcontainer container kubepods-burstable-pode8e9bcb5_14a6_4759_9035_ddc998f63c51.slice. Sep 12 17:49:44.218771 kubelet[2773]: I0912 17:49:44.218704 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e8e9bcb5-14a6-4759-9035-ddc998f63c51-cilium-cgroup\") pod \"cilium-g7bg4\" (UID: \"e8e9bcb5-14a6-4759-9035-ddc998f63c51\") " pod="kube-system/cilium-g7bg4" Sep 12 17:49:44.218771 kubelet[2773]: I0912 17:49:44.218760 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e8e9bcb5-14a6-4759-9035-ddc998f63c51-host-proc-sys-kernel\") pod \"cilium-g7bg4\" (UID: \"e8e9bcb5-14a6-4759-9035-ddc998f63c51\") " pod="kube-system/cilium-g7bg4" Sep 12 17:49:44.218771 kubelet[2773]: I0912 17:49:44.218776 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e8e9bcb5-14a6-4759-9035-ddc998f63c51-hubble-tls\") pod \"cilium-g7bg4\" (UID: \"e8e9bcb5-14a6-4759-9035-ddc998f63c51\") " pod="kube-system/cilium-g7bg4" Sep 12 17:49:44.218981 kubelet[2773]: I0912 17:49:44.218792 
2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e8e9bcb5-14a6-4759-9035-ddc998f63c51-bpf-maps\") pod \"cilium-g7bg4\" (UID: \"e8e9bcb5-14a6-4759-9035-ddc998f63c51\") " pod="kube-system/cilium-g7bg4" Sep 12 17:49:44.218981 kubelet[2773]: I0912 17:49:44.218808 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8e9bcb5-14a6-4759-9035-ddc998f63c51-xtables-lock\") pod \"cilium-g7bg4\" (UID: \"e8e9bcb5-14a6-4759-9035-ddc998f63c51\") " pod="kube-system/cilium-g7bg4" Sep 12 17:49:44.218981 kubelet[2773]: I0912 17:49:44.218826 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgkct\" (UniqueName: \"kubernetes.io/projected/e8e9bcb5-14a6-4759-9035-ddc998f63c51-kube-api-access-fgkct\") pod \"cilium-g7bg4\" (UID: \"e8e9bcb5-14a6-4759-9035-ddc998f63c51\") " pod="kube-system/cilium-g7bg4" Sep 12 17:49:44.218981 kubelet[2773]: I0912 17:49:44.218840 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8e9bcb5-14a6-4759-9035-ddc998f63c51-etc-cni-netd\") pod \"cilium-g7bg4\" (UID: \"e8e9bcb5-14a6-4759-9035-ddc998f63c51\") " pod="kube-system/cilium-g7bg4" Sep 12 17:49:44.218981 kubelet[2773]: I0912 17:49:44.218854 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e8e9bcb5-14a6-4759-9035-ddc998f63c51-host-proc-sys-net\") pod \"cilium-g7bg4\" (UID: \"e8e9bcb5-14a6-4759-9035-ddc998f63c51\") " pod="kube-system/cilium-g7bg4" Sep 12 17:49:44.218981 kubelet[2773]: I0912 17:49:44.218883 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/e8e9bcb5-14a6-4759-9035-ddc998f63c51-cilium-run\") pod \"cilium-g7bg4\" (UID: \"e8e9bcb5-14a6-4759-9035-ddc998f63c51\") " pod="kube-system/cilium-g7bg4" Sep 12 17:49:44.219170 kubelet[2773]: I0912 17:49:44.218902 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8e9bcb5-14a6-4759-9035-ddc998f63c51-lib-modules\") pod \"cilium-g7bg4\" (UID: \"e8e9bcb5-14a6-4759-9035-ddc998f63c51\") " pod="kube-system/cilium-g7bg4" Sep 12 17:49:44.219170 kubelet[2773]: I0912 17:49:44.218916 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e8e9bcb5-14a6-4759-9035-ddc998f63c51-clustermesh-secrets\") pod \"cilium-g7bg4\" (UID: \"e8e9bcb5-14a6-4759-9035-ddc998f63c51\") " pod="kube-system/cilium-g7bg4" Sep 12 17:49:44.219170 kubelet[2773]: I0912 17:49:44.218934 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e8e9bcb5-14a6-4759-9035-ddc998f63c51-cilium-ipsec-secrets\") pod \"cilium-g7bg4\" (UID: \"e8e9bcb5-14a6-4759-9035-ddc998f63c51\") " pod="kube-system/cilium-g7bg4" Sep 12 17:49:44.219170 kubelet[2773]: I0912 17:49:44.218947 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e8e9bcb5-14a6-4759-9035-ddc998f63c51-hostproc\") pod \"cilium-g7bg4\" (UID: \"e8e9bcb5-14a6-4759-9035-ddc998f63c51\") " pod="kube-system/cilium-g7bg4" Sep 12 17:49:44.219170 kubelet[2773]: I0912 17:49:44.218963 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e8e9bcb5-14a6-4759-9035-ddc998f63c51-cni-path\") pod \"cilium-g7bg4\" (UID: 
\"e8e9bcb5-14a6-4759-9035-ddc998f63c51\") " pod="kube-system/cilium-g7bg4" Sep 12 17:49:44.219170 kubelet[2773]: I0912 17:49:44.218984 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8e9bcb5-14a6-4759-9035-ddc998f63c51-cilium-config-path\") pod \"cilium-g7bg4\" (UID: \"e8e9bcb5-14a6-4759-9035-ddc998f63c51\") " pod="kube-system/cilium-g7bg4" Sep 12 17:49:44.233130 sshd[4567]: Accepted publickey for core from 10.0.0.1 port 34134 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:49:44.235128 sshd-session[4567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:44.239552 systemd-logind[1557]: New session 29 of user core. Sep 12 17:49:44.249539 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 12 17:49:44.301261 sshd[4570]: Connection closed by 10.0.0.1 port 34134 Sep 12 17:49:44.301621 sshd-session[4567]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:44.314464 systemd[1]: sshd@28-10.0.0.100:22-10.0.0.1:34134.service: Deactivated successfully. Sep 12 17:49:44.316316 systemd[1]: session-29.scope: Deactivated successfully. Sep 12 17:49:44.317154 systemd-logind[1557]: Session 29 logged out. Waiting for processes to exit. Sep 12 17:49:44.320772 systemd[1]: Started sshd@29-10.0.0.100:22-10.0.0.1:34140.service - OpenSSH per-connection server daemon (10.0.0.1:34140). Sep 12 17:49:44.326617 systemd-logind[1557]: Removed session 29. Sep 12 17:49:44.381536 sshd[4578]: Accepted publickey for core from 10.0.0.1 port 34140 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:49:44.382926 sshd-session[4578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:44.387464 systemd-logind[1557]: New session 30 of user core. Sep 12 17:49:44.395551 systemd[1]: Started session-30.scope - Session 30 of User core. 
Sep 12 17:49:44.537251 kubelet[2773]: I0912 17:49:44.537201 2773 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5ff8fbb-8afd-4b9c-8110-71997e046f75" path="/var/lib/kubelet/pods/e5ff8fbb-8afd-4b9c-8110-71997e046f75/volumes" Sep 12 17:49:44.788364 kubelet[2773]: E0912 17:49:44.788145 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:49:44.793409 containerd[1572]: time="2025-09-12T17:49:44.792671420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g7bg4,Uid:e8e9bcb5-14a6-4759-9035-ddc998f63c51,Namespace:kube-system,Attempt:0,}" Sep 12 17:49:44.867347 containerd[1572]: time="2025-09-12T17:49:44.867278211Z" level=info msg="connecting to shim 694e7a064a4413149e3870d82e585d187e9a6441619381359331c086b76590e7" address="unix:///run/containerd/s/c32a48376c34daaddf1588df996fabe11a199af54b55dfa083a1536d2f571b83" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:49:44.901729 systemd[1]: Started cri-containerd-694e7a064a4413149e3870d82e585d187e9a6441619381359331c086b76590e7.scope - libcontainer container 694e7a064a4413149e3870d82e585d187e9a6441619381359331c086b76590e7. 
Sep 12 17:49:45.003128 containerd[1572]: time="2025-09-12T17:49:45.003031358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g7bg4,Uid:e8e9bcb5-14a6-4759-9035-ddc998f63c51,Namespace:kube-system,Attempt:0,} returns sandbox id \"694e7a064a4413149e3870d82e585d187e9a6441619381359331c086b76590e7\"" Sep 12 17:49:45.004111 kubelet[2773]: E0912 17:49:45.004069 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:49:45.006570 containerd[1572]: time="2025-09-12T17:49:45.006520888Z" level=info msg="CreateContainer within sandbox \"694e7a064a4413149e3870d82e585d187e9a6441619381359331c086b76590e7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:49:45.215326 containerd[1572]: time="2025-09-12T17:49:45.215247573Z" level=info msg="Container e66cc8c60da3d9ff7957108d702bc3a5a3f4299510700843243e956fe842d058: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:45.238653 containerd[1572]: time="2025-09-12T17:49:45.238589173Z" level=info msg="CreateContainer within sandbox \"694e7a064a4413149e3870d82e585d187e9a6441619381359331c086b76590e7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e66cc8c60da3d9ff7957108d702bc3a5a3f4299510700843243e956fe842d058\"" Sep 12 17:49:45.239733 containerd[1572]: time="2025-09-12T17:49:45.239680914Z" level=info msg="StartContainer for \"e66cc8c60da3d9ff7957108d702bc3a5a3f4299510700843243e956fe842d058\"" Sep 12 17:49:45.241180 containerd[1572]: time="2025-09-12T17:49:45.241142654Z" level=info msg="connecting to shim e66cc8c60da3d9ff7957108d702bc3a5a3f4299510700843243e956fe842d058" address="unix:///run/containerd/s/c32a48376c34daaddf1588df996fabe11a199af54b55dfa083a1536d2f571b83" protocol=ttrpc version=3 Sep 12 17:49:45.267552 systemd[1]: Started cri-containerd-e66cc8c60da3d9ff7957108d702bc3a5a3f4299510700843243e956fe842d058.scope - libcontainer 
container e66cc8c60da3d9ff7957108d702bc3a5a3f4299510700843243e956fe842d058. Sep 12 17:49:45.302320 containerd[1572]: time="2025-09-12T17:49:45.302277165Z" level=info msg="StartContainer for \"e66cc8c60da3d9ff7957108d702bc3a5a3f4299510700843243e956fe842d058\" returns successfully" Sep 12 17:49:45.314115 systemd[1]: cri-containerd-e66cc8c60da3d9ff7957108d702bc3a5a3f4299510700843243e956fe842d058.scope: Deactivated successfully. Sep 12 17:49:45.315661 containerd[1572]: time="2025-09-12T17:49:45.315621466Z" level=info msg="received exit event container_id:\"e66cc8c60da3d9ff7957108d702bc3a5a3f4299510700843243e956fe842d058\" id:\"e66cc8c60da3d9ff7957108d702bc3a5a3f4299510700843243e956fe842d058\" pid:4651 exited_at:{seconds:1757699385 nanos:315226850}" Sep 12 17:49:45.315741 containerd[1572]: time="2025-09-12T17:49:45.315718509Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e66cc8c60da3d9ff7957108d702bc3a5a3f4299510700843243e956fe842d058\" id:\"e66cc8c60da3d9ff7957108d702bc3a5a3f4299510700843243e956fe842d058\" pid:4651 exited_at:{seconds:1757699385 nanos:315226850}" Sep 12 17:49:45.344840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e66cc8c60da3d9ff7957108d702bc3a5a3f4299510700843243e956fe842d058-rootfs.mount: Deactivated successfully. 
Sep 12 17:49:45.969427 kubelet[2773]: E0912 17:49:45.969373 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:49:45.971079 containerd[1572]: time="2025-09-12T17:49:45.971027261Z" level=info msg="CreateContainer within sandbox \"694e7a064a4413149e3870d82e585d187e9a6441619381359331c086b76590e7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:49:45.999498 containerd[1572]: time="2025-09-12T17:49:45.999421003Z" level=info msg="Container 0c82e62ffa4188a129e78c730b9ef66e6ea75275babe075c539c58b7d5e1f5bd: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:46.010885 containerd[1572]: time="2025-09-12T17:49:46.010823002Z" level=info msg="CreateContainer within sandbox \"694e7a064a4413149e3870d82e585d187e9a6441619381359331c086b76590e7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0c82e62ffa4188a129e78c730b9ef66e6ea75275babe075c539c58b7d5e1f5bd\"" Sep 12 17:49:46.011584 containerd[1572]: time="2025-09-12T17:49:46.011540868Z" level=info msg="StartContainer for \"0c82e62ffa4188a129e78c730b9ef66e6ea75275babe075c539c58b7d5e1f5bd\"" Sep 12 17:49:46.012530 containerd[1572]: time="2025-09-12T17:49:46.012506792Z" level=info msg="connecting to shim 0c82e62ffa4188a129e78c730b9ef66e6ea75275babe075c539c58b7d5e1f5bd" address="unix:///run/containerd/s/c32a48376c34daaddf1588df996fabe11a199af54b55dfa083a1536d2f571b83" protocol=ttrpc version=3 Sep 12 17:49:46.038697 systemd[1]: Started cri-containerd-0c82e62ffa4188a129e78c730b9ef66e6ea75275babe075c539c58b7d5e1f5bd.scope - libcontainer container 0c82e62ffa4188a129e78c730b9ef66e6ea75275babe075c539c58b7d5e1f5bd. 
Sep 12 17:49:46.072662 containerd[1572]: time="2025-09-12T17:49:46.072607889Z" level=info msg="StartContainer for \"0c82e62ffa4188a129e78c730b9ef66e6ea75275babe075c539c58b7d5e1f5bd\" returns successfully" Sep 12 17:49:46.078999 systemd[1]: cri-containerd-0c82e62ffa4188a129e78c730b9ef66e6ea75275babe075c539c58b7d5e1f5bd.scope: Deactivated successfully. Sep 12 17:49:46.080028 containerd[1572]: time="2025-09-12T17:49:46.079985580Z" level=info msg="received exit event container_id:\"0c82e62ffa4188a129e78c730b9ef66e6ea75275babe075c539c58b7d5e1f5bd\" id:\"0c82e62ffa4188a129e78c730b9ef66e6ea75275babe075c539c58b7d5e1f5bd\" pid:4696 exited_at:{seconds:1757699386 nanos:79138680}" Sep 12 17:49:46.080150 containerd[1572]: time="2025-09-12T17:49:46.080002341Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c82e62ffa4188a129e78c730b9ef66e6ea75275babe075c539c58b7d5e1f5bd\" id:\"0c82e62ffa4188a129e78c730b9ef66e6ea75275babe075c539c58b7d5e1f5bd\" pid:4696 exited_at:{seconds:1757699386 nanos:79138680}" Sep 12 17:49:46.326445 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c82e62ffa4188a129e78c730b9ef66e6ea75275babe075c539c58b7d5e1f5bd-rootfs.mount: Deactivated successfully. Sep 12 17:49:46.458959 update_engine[1563]: I20250912 17:49:46.458808 1563 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 17:49:46.459528 update_engine[1563]: I20250912 17:49:46.459024 1563 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 17:49:46.459528 update_engine[1563]: I20250912 17:49:46.459507 1563 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 12 17:49:46.476944 update_engine[1563]: E20250912 17:49:46.476851 1563 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 17:49:46.477137 update_engine[1563]: I20250912 17:49:46.476966 1563 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 12 17:49:46.587512 kubelet[2773]: E0912 17:49:46.587293 2773 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 17:49:46.974302 kubelet[2773]: E0912 17:49:46.974169 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:49:46.975893 containerd[1572]: time="2025-09-12T17:49:46.975861618Z" level=info msg="CreateContainer within sandbox \"694e7a064a4413149e3870d82e585d187e9a6441619381359331c086b76590e7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:49:47.264229 containerd[1572]: time="2025-09-12T17:49:47.264074258Z" level=info msg="Container 282bd507288e04c7e3c594c454d541e38790cae7bc64d48bc5b14a1c3341554c: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:47.268533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2989572680.mount: Deactivated successfully. 
Sep 12 17:49:47.303416 containerd[1572]: time="2025-09-12T17:49:47.303347327Z" level=info msg="CreateContainer within sandbox \"694e7a064a4413149e3870d82e585d187e9a6441619381359331c086b76590e7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"282bd507288e04c7e3c594c454d541e38790cae7bc64d48bc5b14a1c3341554c\"" Sep 12 17:49:47.304426 containerd[1572]: time="2025-09-12T17:49:47.304015970Z" level=info msg="StartContainer for \"282bd507288e04c7e3c594c454d541e38790cae7bc64d48bc5b14a1c3341554c\"" Sep 12 17:49:47.305834 containerd[1572]: time="2025-09-12T17:49:47.305799767Z" level=info msg="connecting to shim 282bd507288e04c7e3c594c454d541e38790cae7bc64d48bc5b14a1c3341554c" address="unix:///run/containerd/s/c32a48376c34daaddf1588df996fabe11a199af54b55dfa083a1536d2f571b83" protocol=ttrpc version=3 Sep 12 17:49:47.342696 systemd[1]: Started cri-containerd-282bd507288e04c7e3c594c454d541e38790cae7bc64d48bc5b14a1c3341554c.scope - libcontainer container 282bd507288e04c7e3c594c454d541e38790cae7bc64d48bc5b14a1c3341554c. Sep 12 17:49:47.393080 containerd[1572]: time="2025-09-12T17:49:47.393019411Z" level=info msg="StartContainer for \"282bd507288e04c7e3c594c454d541e38790cae7bc64d48bc5b14a1c3341554c\" returns successfully" Sep 12 17:49:47.393829 systemd[1]: cri-containerd-282bd507288e04c7e3c594c454d541e38790cae7bc64d48bc5b14a1c3341554c.scope: Deactivated successfully. 
Sep 12 17:49:47.396395 containerd[1572]: time="2025-09-12T17:49:47.396322287Z" level=info msg="received exit event container_id:\"282bd507288e04c7e3c594c454d541e38790cae7bc64d48bc5b14a1c3341554c\" id:\"282bd507288e04c7e3c594c454d541e38790cae7bc64d48bc5b14a1c3341554c\" pid:4742 exited_at:{seconds:1757699387 nanos:396081653}" Sep 12 17:49:47.396549 containerd[1572]: time="2025-09-12T17:49:47.396443206Z" level=info msg="TaskExit event in podsandbox handler container_id:\"282bd507288e04c7e3c594c454d541e38790cae7bc64d48bc5b14a1c3341554c\" id:\"282bd507288e04c7e3c594c454d541e38790cae7bc64d48bc5b14a1c3341554c\" pid:4742 exited_at:{seconds:1757699387 nanos:396081653}" Sep 12 17:49:47.420796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-282bd507288e04c7e3c594c454d541e38790cae7bc64d48bc5b14a1c3341554c-rootfs.mount: Deactivated successfully. Sep 12 17:49:47.534419 kubelet[2773]: E0912 17:49:47.534218 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-xpdrs" podUID="69995e6d-5308-42a1-b40a-99caa93aa87e" Sep 12 17:49:47.979044 kubelet[2773]: E0912 17:49:47.978989 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:49:47.980794 containerd[1572]: time="2025-09-12T17:49:47.980757511Z" level=info msg="CreateContainer within sandbox \"694e7a064a4413149e3870d82e585d187e9a6441619381359331c086b76590e7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:49:48.069280 containerd[1572]: time="2025-09-12T17:49:48.069209196Z" level=info msg="Container 1e3acf66faaa81f3f1648d32ea49637ac2063890f5f35d02ca41b8c55090eb08: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:48.080545 
containerd[1572]: time="2025-09-12T17:49:48.080483320Z" level=info msg="CreateContainer within sandbox \"694e7a064a4413149e3870d82e585d187e9a6441619381359331c086b76590e7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1e3acf66faaa81f3f1648d32ea49637ac2063890f5f35d02ca41b8c55090eb08\"" Sep 12 17:49:48.081142 containerd[1572]: time="2025-09-12T17:49:48.081112137Z" level=info msg="StartContainer for \"1e3acf66faaa81f3f1648d32ea49637ac2063890f5f35d02ca41b8c55090eb08\"" Sep 12 17:49:48.082183 containerd[1572]: time="2025-09-12T17:49:48.082151620Z" level=info msg="connecting to shim 1e3acf66faaa81f3f1648d32ea49637ac2063890f5f35d02ca41b8c55090eb08" address="unix:///run/containerd/s/c32a48376c34daaddf1588df996fabe11a199af54b55dfa083a1536d2f571b83" protocol=ttrpc version=3 Sep 12 17:49:48.105591 systemd[1]: Started cri-containerd-1e3acf66faaa81f3f1648d32ea49637ac2063890f5f35d02ca41b8c55090eb08.scope - libcontainer container 1e3acf66faaa81f3f1648d32ea49637ac2063890f5f35d02ca41b8c55090eb08. Sep 12 17:49:48.133768 systemd[1]: cri-containerd-1e3acf66faaa81f3f1648d32ea49637ac2063890f5f35d02ca41b8c55090eb08.scope: Deactivated successfully. 
Sep 12 17:49:48.134558 containerd[1572]: time="2025-09-12T17:49:48.134508361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e3acf66faaa81f3f1648d32ea49637ac2063890f5f35d02ca41b8c55090eb08\" id:\"1e3acf66faaa81f3f1648d32ea49637ac2063890f5f35d02ca41b8c55090eb08\" pid:4780 exited_at:{seconds:1757699388 nanos:133939997}" Sep 12 17:49:48.136199 containerd[1572]: time="2025-09-12T17:49:48.136153817Z" level=info msg="received exit event container_id:\"1e3acf66faaa81f3f1648d32ea49637ac2063890f5f35d02ca41b8c55090eb08\" id:\"1e3acf66faaa81f3f1648d32ea49637ac2063890f5f35d02ca41b8c55090eb08\" pid:4780 exited_at:{seconds:1757699388 nanos:133939997}" Sep 12 17:49:48.144053 containerd[1572]: time="2025-09-12T17:49:48.143959071Z" level=info msg="StartContainer for \"1e3acf66faaa81f3f1648d32ea49637ac2063890f5f35d02ca41b8c55090eb08\" returns successfully" Sep 12 17:49:48.156962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e3acf66faaa81f3f1648d32ea49637ac2063890f5f35d02ca41b8c55090eb08-rootfs.mount: Deactivated successfully. 
Sep 12 17:49:48.985960 kubelet[2773]: E0912 17:49:48.985914 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:49:48.989782 containerd[1572]: time="2025-09-12T17:49:48.989474729Z" level=info msg="CreateContainer within sandbox \"694e7a064a4413149e3870d82e585d187e9a6441619381359331c086b76590e7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:49:49.347836 containerd[1572]: time="2025-09-12T17:49:49.347512752Z" level=info msg="Container bc72b2b6c4cea757d9709a52480ad7a75aa281eda98060917cb08dd4b824eb3c: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:49.359977 containerd[1572]: time="2025-09-12T17:49:49.359918258Z" level=info msg="CreateContainer within sandbox \"694e7a064a4413149e3870d82e585d187e9a6441619381359331c086b76590e7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bc72b2b6c4cea757d9709a52480ad7a75aa281eda98060917cb08dd4b824eb3c\"" Sep 12 17:49:49.360523 containerd[1572]: time="2025-09-12T17:49:49.360499877Z" level=info msg="StartContainer for \"bc72b2b6c4cea757d9709a52480ad7a75aa281eda98060917cb08dd4b824eb3c\"" Sep 12 17:49:49.361604 containerd[1572]: time="2025-09-12T17:49:49.361580276Z" level=info msg="connecting to shim bc72b2b6c4cea757d9709a52480ad7a75aa281eda98060917cb08dd4b824eb3c" address="unix:///run/containerd/s/c32a48376c34daaddf1588df996fabe11a199af54b55dfa083a1536d2f571b83" protocol=ttrpc version=3 Sep 12 17:49:49.400696 systemd[1]: Started cri-containerd-bc72b2b6c4cea757d9709a52480ad7a75aa281eda98060917cb08dd4b824eb3c.scope - libcontainer container bc72b2b6c4cea757d9709a52480ad7a75aa281eda98060917cb08dd4b824eb3c. 
Sep 12 17:49:49.445486 containerd[1572]: time="2025-09-12T17:49:49.445366032Z" level=info msg="StartContainer for \"bc72b2b6c4cea757d9709a52480ad7a75aa281eda98060917cb08dd4b824eb3c\" returns successfully"
Sep 12 17:49:49.528281 containerd[1572]: time="2025-09-12T17:49:49.528223806Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc72b2b6c4cea757d9709a52480ad7a75aa281eda98060917cb08dd4b824eb3c\" id:\"0a11ce066694105155c7ad54a94fee0181d91b70febfc9431ef8e9fac4fe0a1b\" pid:4846 exited_at:{seconds:1757699389 nanos:527665432}"
Sep 12 17:49:49.534686 kubelet[2773]: E0912 17:49:49.534602 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-xpdrs" podUID="69995e6d-5308-42a1-b40a-99caa93aa87e"
Sep 12 17:49:49.961428 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Sep 12 17:49:49.992240 kubelet[2773]: E0912 17:49:49.992025 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:49:50.031178 kubelet[2773]: I0912 17:49:50.030979 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g7bg4" podStartSLOduration=6.030947464 podStartE2EDuration="6.030947464s" podCreationTimestamp="2025-09-12 17:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:49:50.030281057 +0000 UTC m=+113.578718780" watchObservedRunningTime="2025-09-12 17:49:50.030947464 +0000 UTC m=+113.579385187"
Sep 12 17:49:50.536209 kubelet[2773]: I0912 17:49:50.536133 2773 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T17:49:50Z","lastTransitionTime":"2025-09-12T17:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 12 17:49:50.863332 containerd[1572]: time="2025-09-12T17:49:50.863279060Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc72b2b6c4cea757d9709a52480ad7a75aa281eda98060917cb08dd4b824eb3c\" id:\"f590f2440d6b1ba0b505b233e1bbd8e679a4f233ceec2235045b2997a43df90b\" pid:4923 exit_status:1 exited_at:{seconds:1757699390 nanos:862720646}"
Sep 12 17:49:50.993713 kubelet[2773]: E0912 17:49:50.993643 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:49:51.534860 kubelet[2773]: E0912 17:49:51.534772 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-xpdrs" podUID="69995e6d-5308-42a1-b40a-99caa93aa87e"
Sep 12 17:49:53.171292 containerd[1572]: time="2025-09-12T17:49:53.171222363Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc72b2b6c4cea757d9709a52480ad7a75aa281eda98060917cb08dd4b824eb3c\" id:\"4b655390a63a51377a48b73e42e897772b7a2bef8212fbd7497283143c916764\" pid:5323 exit_status:1 exited_at:{seconds:1757699393 nanos:170767916}"
Sep 12 17:49:53.221715 systemd-networkd[1489]: lxc_health: Link UP
Sep 12 17:49:53.224425 systemd-networkd[1489]: lxc_health: Gained carrier
Sep 12 17:49:53.534929 kubelet[2773]: E0912 17:49:53.534611 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:49:54.368630 systemd-networkd[1489]: lxc_health: Gained IPv6LL
Sep 12 17:49:54.790585 kubelet[2773]: E0912 17:49:54.790420 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:49:55.001281 kubelet[2773]: E0912 17:49:55.001235 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:49:55.426097 containerd[1572]: time="2025-09-12T17:49:55.425368003Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc72b2b6c4cea757d9709a52480ad7a75aa281eda98060917cb08dd4b824eb3c\" id:\"d670c959a659aa713b46915b939747983d7dfaddf9615f527f6f9f9dc06223a5\" pid:5408 exited_at:{seconds:1757699395 nanos:424571271}"
Sep 12 17:49:56.004014 kubelet[2773]: E0912 17:49:56.003964 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:49:56.465630 update_engine[1563]: I20250912 17:49:56.465483 1563 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 12 17:49:56.465630 update_engine[1563]: I20250912 17:49:56.465608 1563 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 12 17:49:56.466237 update_engine[1563]: I20250912 17:49:56.466149 1563 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 12 17:49:56.477672 update_engine[1563]: E20250912 17:49:56.477583 1563 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 12 17:49:56.477672 update_engine[1563]: I20250912 17:49:56.477671 1563 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Sep 12 17:49:56.539354 containerd[1572]: time="2025-09-12T17:49:56.539285424Z" level=info msg="StopPodSandbox for \"a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579\""
Sep 12 17:49:56.539800 containerd[1572]: time="2025-09-12T17:49:56.539466506Z" level=info msg="TearDown network for sandbox \"a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579\" successfully"
Sep 12 17:49:56.539800 containerd[1572]: time="2025-09-12T17:49:56.539483789Z" level=info msg="StopPodSandbox for \"a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579\" returns successfully"
Sep 12 17:49:56.540339 containerd[1572]: time="2025-09-12T17:49:56.540299377Z" level=info msg="RemovePodSandbox for \"a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579\""
Sep 12 17:49:56.540339 containerd[1572]: time="2025-09-12T17:49:56.540331718Z" level=info msg="Forcibly stopping sandbox \"a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579\""
Sep 12 17:49:56.540598 containerd[1572]: time="2025-09-12T17:49:56.540424152Z" level=info msg="TearDown network for sandbox \"a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579\" successfully"
Sep 12 17:49:56.542746 containerd[1572]: time="2025-09-12T17:49:56.542722868Z" level=info msg="Ensure that sandbox a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579 in task-service has been cleanup successfully"
Sep 12 17:49:56.582793 containerd[1572]: time="2025-09-12T17:49:56.582712755Z" level=info msg="RemovePodSandbox \"a0c2f23c11bf9b2d81fd98a64bd1c1b6f6f6bb87a9f83cabcb85b745493e9579\" returns successfully"
Sep 12 17:49:56.584328 containerd[1572]: time="2025-09-12T17:49:56.584299688Z" level=info msg="StopPodSandbox for \"fe6f8dc0d1b9aec7362a035847275f15000c13af17b07b21e08e50de1a25a0d4\""
Sep 12 17:49:56.584444 containerd[1572]: time="2025-09-12T17:49:56.584422721Z" level=info msg="TearDown network for sandbox \"fe6f8dc0d1b9aec7362a035847275f15000c13af17b07b21e08e50de1a25a0d4\" successfully"
Sep 12 17:49:56.584444 containerd[1572]: time="2025-09-12T17:49:56.584438811Z" level=info msg="StopPodSandbox for \"fe6f8dc0d1b9aec7362a035847275f15000c13af17b07b21e08e50de1a25a0d4\" returns successfully"
Sep 12 17:49:56.584730 containerd[1572]: time="2025-09-12T17:49:56.584674736Z" level=info msg="RemovePodSandbox for \"fe6f8dc0d1b9aec7362a035847275f15000c13af17b07b21e08e50de1a25a0d4\""
Sep 12 17:49:56.584730 containerd[1572]: time="2025-09-12T17:49:56.584705514Z" level=info msg="Forcibly stopping sandbox \"fe6f8dc0d1b9aec7362a035847275f15000c13af17b07b21e08e50de1a25a0d4\""
Sep 12 17:49:56.584837 containerd[1572]: time="2025-09-12T17:49:56.584777350Z" level=info msg="TearDown network for sandbox \"fe6f8dc0d1b9aec7362a035847275f15000c13af17b07b21e08e50de1a25a0d4\" successfully"
Sep 12 17:49:56.587080 containerd[1572]: time="2025-09-12T17:49:56.587054084Z" level=info msg="Ensure that sandbox fe6f8dc0d1b9aec7362a035847275f15000c13af17b07b21e08e50de1a25a0d4 in task-service has been cleanup successfully"
Sep 12 17:49:56.592580 containerd[1572]: time="2025-09-12T17:49:56.592534121Z" level=info msg="RemovePodSandbox \"fe6f8dc0d1b9aec7362a035847275f15000c13af17b07b21e08e50de1a25a0d4\" returns successfully"
Sep 12 17:49:57.522960 containerd[1572]: time="2025-09-12T17:49:57.522916852Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc72b2b6c4cea757d9709a52480ad7a75aa281eda98060917cb08dd4b824eb3c\" id:\"55bc73bc48ce8565dbcca1b609f1ebdc9b8507a43a456b7959c6aad7c4f1efb5\" pid:5444 exited_at:{seconds:1757699397 nanos:522567754}"
Sep 12 17:49:59.632134 containerd[1572]: time="2025-09-12T17:49:59.631968987Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc72b2b6c4cea757d9709a52480ad7a75aa281eda98060917cb08dd4b824eb3c\" id:\"f78f57123c8d3a00e01ff8e555f4d02f50c59811053a6e7109e78d147d3eb9d5\" pid:5469 exited_at:{seconds:1757699399 nanos:631374647}"
Sep 12 17:49:59.644247 sshd[4584]: Connection closed by 10.0.0.1 port 34140
Sep 12 17:49:59.644946 sshd-session[4578]: pam_unix(sshd:session): session closed for user core
Sep 12 17:49:59.650186 systemd[1]: sshd@29-10.0.0.100:22-10.0.0.1:34140.service: Deactivated successfully.
Sep 12 17:49:59.652437 systemd[1]: session-30.scope: Deactivated successfully.
Sep 12 17:49:59.653313 systemd-logind[1557]: Session 30 logged out. Waiting for processes to exit.
Sep 12 17:49:59.654760 systemd-logind[1557]: Removed session 30.
Sep 12 17:50:01.535409 kubelet[2773]: E0912 17:50:01.535250 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"