Sep 4 00:03:14.842155 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 3 22:05:39 -00 2025
Sep 4 00:03:14.842186 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e
Sep 4 00:03:14.842197 kernel: BIOS-provided physical RAM map:
Sep 4 00:03:14.842205 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 4 00:03:14.842213 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 4 00:03:14.842222 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 4 00:03:14.842230 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 4 00:03:14.842240 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 4 00:03:14.842250 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 4 00:03:14.842257 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 4 00:03:14.842263 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 00:03:14.842270 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 4 00:03:14.842276 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 4 00:03:14.842283 kernel: NX (Execute Disable) protection: active
Sep 4 00:03:14.842293 kernel: APIC: Static calls initialized
Sep 4 00:03:14.842300 kernel: SMBIOS 2.8 present.
Sep 4 00:03:14.842310 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 4 00:03:14.842317 kernel: DMI: Memory slots populated: 1/1
Sep 4 00:03:14.842324 kernel: Hypervisor detected: KVM
Sep 4 00:03:14.842332 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 00:03:14.842339 kernel: kvm-clock: using sched offset of 6148003596 cycles
Sep 4 00:03:14.842347 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 00:03:14.842354 kernel: tsc: Detected 2794.750 MHz processor
Sep 4 00:03:14.842364 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 00:03:14.842371 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 00:03:14.842379 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 4 00:03:14.842386 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 4 00:03:14.842394 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 00:03:14.842401 kernel: Using GB pages for direct mapping
Sep 4 00:03:14.842408 kernel: ACPI: Early table checksum verification disabled
Sep 4 00:03:14.842417 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 4 00:03:14.842426 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 00:03:14.842438 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 00:03:14.842447 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 00:03:14.842456 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 4 00:03:14.842465 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 00:03:14.842474 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 00:03:14.842484 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 00:03:14.842493 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 00:03:14.842502 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 4 00:03:14.842518 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 4 00:03:14.842527 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 4 00:03:14.842536 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 4 00:03:14.842546 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 4 00:03:14.842555 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 4 00:03:14.842563 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 4 00:03:14.842572 kernel: No NUMA configuration found
Sep 4 00:03:14.842579 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 4 00:03:14.842587 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 4 00:03:14.842594 kernel: Zone ranges:
Sep 4 00:03:14.842602 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 00:03:14.842609 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 4 00:03:14.842616 kernel: Normal empty
Sep 4 00:03:14.842624 kernel: Device empty
Sep 4 00:03:14.842631 kernel: Movable zone start for each node
Sep 4 00:03:14.842641 kernel: Early memory node ranges
Sep 4 00:03:14.842648 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 4 00:03:14.842655 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 4 00:03:14.842663 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 4 00:03:14.842670 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 00:03:14.842677 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 4 00:03:14.842685 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 4 00:03:14.842692 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 4 00:03:14.842702 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 00:03:14.842710 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 4 00:03:14.842720 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 4 00:03:14.842727 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 00:03:14.842736 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 00:03:14.842744 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 00:03:14.842751 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 00:03:14.842759 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 00:03:14.842766 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 4 00:03:14.842773 kernel: TSC deadline timer available
Sep 4 00:03:14.842781 kernel: CPU topo: Max. logical packages: 1
Sep 4 00:03:14.842790 kernel: CPU topo: Max. logical dies: 1
Sep 4 00:03:14.842798 kernel: CPU topo: Max. dies per package: 1
Sep 4 00:03:14.842805 kernel: CPU topo: Max. threads per core: 1
Sep 4 00:03:14.842812 kernel: CPU topo: Num. cores per package: 4
Sep 4 00:03:14.842819 kernel: CPU topo: Num. threads per package: 4
Sep 4 00:03:14.842827 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 4 00:03:14.842834 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 4 00:03:14.842842 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 4 00:03:14.842865 kernel: kvm-guest: setup PV sched yield
Sep 4 00:03:14.842872 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 4 00:03:14.842883 kernel: Booting paravirtualized kernel on KVM
Sep 4 00:03:14.842890 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 00:03:14.842898 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 4 00:03:14.842919 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 4 00:03:14.842926 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 4 00:03:14.842944 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 4 00:03:14.842952 kernel: kvm-guest: PV spinlocks enabled
Sep 4 00:03:14.842976 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 00:03:14.842988 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e
Sep 4 00:03:14.843025 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 00:03:14.843033 kernel: random: crng init done
Sep 4 00:03:14.843040 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 00:03:14.843048 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 00:03:14.843056 kernel: Fallback order for Node 0: 0
Sep 4 00:03:14.843063 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 4 00:03:14.843079 kernel: Policy zone: DMA32
Sep 4 00:03:14.843088 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 00:03:14.843099 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 4 00:03:14.843107 kernel: ftrace: allocating 40099 entries in 157 pages
Sep 4 00:03:14.843114 kernel: ftrace: allocated 157 pages with 5 groups
Sep 4 00:03:14.843121 kernel: Dynamic Preempt: voluntary
Sep 4 00:03:14.843129 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 00:03:14.843137 kernel: rcu: RCU event tracing is enabled.
Sep 4 00:03:14.843145 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 4 00:03:14.843152 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 00:03:14.843162 kernel: Rude variant of Tasks RCU enabled.
Sep 4 00:03:14.843172 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 00:03:14.843180 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 00:03:14.843187 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 4 00:03:14.843195 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 00:03:14.843202 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 00:03:14.843210 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 00:03:14.843217 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 4 00:03:14.843225 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 00:03:14.843247 kernel: Console: colour VGA+ 80x25
Sep 4 00:03:14.843257 kernel: printk: legacy console [ttyS0] enabled
Sep 4 00:03:14.843267 kernel: ACPI: Core revision 20240827
Sep 4 00:03:14.843277 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 4 00:03:14.843290 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 00:03:14.843300 kernel: x2apic enabled
Sep 4 00:03:14.843314 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 00:03:14.843325 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 4 00:03:14.843334 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 4 00:03:14.843344 kernel: kvm-guest: setup PV IPIs
Sep 4 00:03:14.843352 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 4 00:03:14.843360 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 4 00:03:14.843368 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 4 00:03:14.843376 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 4 00:03:14.843383 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 4 00:03:14.843391 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 4 00:03:14.843399 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 00:03:14.843409 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 00:03:14.843417 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 4 00:03:14.843424 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 4 00:03:14.843432 kernel: active return thunk: retbleed_return_thunk
Sep 4 00:03:14.843440 kernel: RETBleed: Mitigation: untrained return thunk
Sep 4 00:03:14.843448 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 4 00:03:14.843456 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 4 00:03:14.843464 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 4 00:03:14.843472 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 4 00:03:14.843482 kernel: active return thunk: srso_return_thunk
Sep 4 00:03:14.843490 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 4 00:03:14.843498 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 00:03:14.843506 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 00:03:14.843514 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 00:03:14.843524 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 00:03:14.843533 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 4 00:03:14.843543 kernel: Freeing SMP alternatives memory: 32K
Sep 4 00:03:14.843555 kernel: pid_max: default: 32768 minimum: 301
Sep 4 00:03:14.843565 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 4 00:03:14.843575 kernel: landlock: Up and running.
Sep 4 00:03:14.843584 kernel: SELinux: Initializing.
Sep 4 00:03:14.843598 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 00:03:14.843608 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 00:03:14.843618 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 4 00:03:14.843628 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 4 00:03:14.843637 kernel: ... version: 0
Sep 4 00:03:14.843650 kernel: ... bit width: 48
Sep 4 00:03:14.843660 kernel: ... generic registers: 6
Sep 4 00:03:14.843669 kernel: ... value mask: 0000ffffffffffff
Sep 4 00:03:14.843679 kernel: ... max period: 00007fffffffffff
Sep 4 00:03:14.843689 kernel: ... fixed-purpose events: 0
Sep 4 00:03:14.843698 kernel: ... event mask: 000000000000003f
Sep 4 00:03:14.843708 kernel: signal: max sigframe size: 1776
Sep 4 00:03:14.843718 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 00:03:14.843726 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 00:03:14.843734 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 4 00:03:14.843744 kernel: smp: Bringing up secondary CPUs ...
Sep 4 00:03:14.843752 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 00:03:14.843759 kernel: .... node #0, CPUs: #1 #2 #3
Sep 4 00:03:14.843767 kernel: smp: Brought up 1 node, 4 CPUs
Sep 4 00:03:14.843775 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 4 00:03:14.843783 kernel: Memory: 2430968K/2571752K available (14336K kernel code, 2428K rwdata, 9956K rodata, 53832K init, 1088K bss, 134856K reserved, 0K cma-reserved)
Sep 4 00:03:14.843791 kernel: devtmpfs: initialized
Sep 4 00:03:14.843799 kernel: x86/mm: Memory block size: 128MB
Sep 4 00:03:14.843807 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 00:03:14.843817 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 4 00:03:14.843827 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 00:03:14.843835 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 00:03:14.843842 kernel: audit: initializing netlink subsys (disabled)
Sep 4 00:03:14.843897 kernel: audit: type=2000 audit(1756944190.776:1): state=initialized audit_enabled=0 res=1
Sep 4 00:03:14.843905 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 00:03:14.843913 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 00:03:14.843921 kernel: cpuidle: using governor menu
Sep 4 00:03:14.843929 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 00:03:14.843940 kernel: dca service started, version 1.12.1
Sep 4 00:03:14.843948 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 4 00:03:14.843956 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 4 00:03:14.843963 kernel: PCI: Using configuration type 1 for base access
Sep 4 00:03:14.843971 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 00:03:14.843979 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 00:03:14.843987 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 00:03:14.843995 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 00:03:14.844005 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 00:03:14.844020 kernel: ACPI: Added _OSI(Module Device)
Sep 4 00:03:14.844027 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 00:03:14.844035 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 00:03:14.844043 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 00:03:14.844051 kernel: ACPI: Interpreter enabled
Sep 4 00:03:14.844059 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 4 00:03:14.844067 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 00:03:14.844075 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 00:03:14.844083 kernel: PCI: Using E820 reservations for host bridge windows
Sep 4 00:03:14.844093 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 4 00:03:14.844101 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 00:03:14.844325 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 00:03:14.844466 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 4 00:03:14.844597 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 4 00:03:14.844608 kernel: PCI host bridge to bus 0000:00
Sep 4 00:03:14.844746 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 00:03:14.844884 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 00:03:14.845004 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 00:03:14.845132 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 4 00:03:14.845262 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 4 00:03:14.845394 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 4 00:03:14.845919 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 00:03:14.846116 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 4 00:03:14.846274 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 4 00:03:14.846398 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 4 00:03:14.846528 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 4 00:03:14.846680 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 4 00:03:14.846829 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 4 00:03:14.846995 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 4 00:03:14.847137 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 4 00:03:14.847260 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 4 00:03:14.847383 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 4 00:03:14.847526 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 4 00:03:14.847651 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 4 00:03:14.847774 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 4 00:03:14.847916 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 4 00:03:14.848122 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 4 00:03:14.848249 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 4 00:03:14.848372 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 4 00:03:14.848532 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 4 00:03:14.848679 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 4 00:03:14.848823 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 4 00:03:14.849004 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 4 00:03:14.849191 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 4 00:03:14.849316 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 4 00:03:14.849437 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 4 00:03:14.849598 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 4 00:03:14.849726 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 4 00:03:14.849737 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 00:03:14.849751 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 00:03:14.849759 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 00:03:14.849767 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 00:03:14.849775 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 4 00:03:14.849783 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 4 00:03:14.849790 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 4 00:03:14.849798 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 4 00:03:14.849806 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 4 00:03:14.849814 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 4 00:03:14.849824 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 4 00:03:14.849831 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 4 00:03:14.849839 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 4 00:03:14.849867 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 4 00:03:14.849876 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 4 00:03:14.849883 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 4 00:03:14.849891 kernel: iommu: Default domain type: Translated
Sep 4 00:03:14.849899 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 00:03:14.849907 kernel: PCI: Using ACPI for IRQ routing
Sep 4 00:03:14.849917 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 00:03:14.849925 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 4 00:03:14.849933 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 4 00:03:14.850071 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 4 00:03:14.850194 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 4 00:03:14.850315 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 4 00:03:14.850325 kernel: vgaarb: loaded
Sep 4 00:03:14.850333 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 4 00:03:14.850345 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 4 00:03:14.850353 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 00:03:14.850360 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 00:03:14.850369 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 00:03:14.850376 kernel: pnp: PnP ACPI init
Sep 4 00:03:14.850536 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 4 00:03:14.850548 kernel: pnp: PnP ACPI: found 6 devices
Sep 4 00:03:14.850556 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 00:03:14.850568 kernel: NET: Registered PF_INET protocol family
Sep 4 00:03:14.850576 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 00:03:14.850584 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 00:03:14.850592 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 00:03:14.850600 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 00:03:14.850607 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 00:03:14.850615 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 00:03:14.850623 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 00:03:14.850631 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 00:03:14.850641 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 00:03:14.850649 kernel: NET: Registered PF_XDP protocol family
Sep 4 00:03:14.850762 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 00:03:14.850890 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 00:03:14.851002 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 00:03:14.851128 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 4 00:03:14.851240 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 4 00:03:14.851350 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 4 00:03:14.851365 kernel: PCI: CLS 0 bytes, default 64
Sep 4 00:03:14.851374 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 4 00:03:14.851382 kernel: Initialise system trusted keyrings
Sep 4 00:03:14.851390 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 00:03:14.851397 kernel: Key type asymmetric registered
Sep 4 00:03:14.851405 kernel: Asymmetric key parser 'x509' registered
Sep 4 00:03:14.851413 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 00:03:14.851421 kernel: io scheduler mq-deadline registered
Sep 4 00:03:14.851429 kernel: io scheduler kyber registered
Sep 4 00:03:14.851436 kernel: io scheduler bfq registered
Sep 4 00:03:14.851446 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 00:03:14.851455 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 4 00:03:14.851463 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 4 00:03:14.851471 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 4 00:03:14.851479 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 00:03:14.851487 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 00:03:14.851495 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 00:03:14.851503 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 00:03:14.851510 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 00:03:14.851652 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 4 00:03:14.851664 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 00:03:14.851778 kernel: rtc_cmos 00:04: registered as rtc0
Sep 4 00:03:14.851911 kernel: rtc_cmos 00:04: setting system clock to 2025-09-04T00:03:14 UTC (1756944194)
Sep 4 00:03:14.852038 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 4 00:03:14.852049 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 4 00:03:14.852059 kernel: NET: Registered PF_INET6 protocol family
Sep 4 00:03:14.852071 kernel: Segment Routing with IPv6
Sep 4 00:03:14.852078 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 00:03:14.852086 kernel: NET: Registered PF_PACKET protocol family
Sep 4 00:03:14.852094 kernel: Key type dns_resolver registered
Sep 4 00:03:14.852102 kernel: IPI shorthand broadcast: enabled
Sep 4 00:03:14.852109 kernel: sched_clock: Marking stable (4258006230, 119867206)->(4403027986, -25154550)
Sep 4 00:03:14.852117 kernel: registered taskstats version 1
Sep 4 00:03:14.852125 kernel: Loading compiled-in X.509 certificates
Sep 4 00:03:14.852133 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 247a8159a15e16f8eb89737aa66cd9cf9bbb3c10'
Sep 4 00:03:14.852143 kernel: Demotion targets for Node 0: null
Sep 4 00:03:14.852151 kernel: Key type .fscrypt registered
Sep 4 00:03:14.852159 kernel: Key type fscrypt-provisioning registered
Sep 4 00:03:14.852166 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 00:03:14.852174 kernel: ima: Allocated hash algorithm: sha1
Sep 4 00:03:14.852182 kernel: ima: No architecture policies found
Sep 4 00:03:14.852190 kernel: clk: Disabling unused clocks
Sep 4 00:03:14.852198 kernel: Warning: unable to open an initial console.
Sep 4 00:03:14.852206 kernel: Freeing unused kernel image (initmem) memory: 53832K
Sep 4 00:03:14.852216 kernel: Write protecting the kernel read-only data: 24576k
Sep 4 00:03:14.852224 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Sep 4 00:03:14.852232 kernel: Run /init as init process
Sep 4 00:03:14.852239 kernel: with arguments:
Sep 4 00:03:14.852247 kernel: /init
Sep 4 00:03:14.852254 kernel: with environment:
Sep 4 00:03:14.852262 kernel: HOME=/
Sep 4 00:03:14.852269 kernel: TERM=linux
Sep 4 00:03:14.852277 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 00:03:14.852289 systemd[1]: Successfully made /usr/ read-only.
Sep 4 00:03:14.852310 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 00:03:14.852321 systemd[1]: Detected virtualization kvm.
Sep 4 00:03:14.852330 systemd[1]: Detected architecture x86-64.
Sep 4 00:03:14.852338 systemd[1]: Running in initrd.
Sep 4 00:03:14.852348 systemd[1]: No hostname configured, using default hostname.
Sep 4 00:03:14.852357 systemd[1]: Hostname set to .
Sep 4 00:03:14.852365 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 00:03:14.852374 systemd[1]: Queued start job for default target initrd.target.
Sep 4 00:03:14.852383 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 00:03:14.852391 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 00:03:14.852400 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 00:03:14.852409 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 00:03:14.852420 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 00:03:14.852429 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 00:03:14.852439 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 00:03:14.852448 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 00:03:14.852456 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 00:03:14.852465 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 00:03:14.852473 systemd[1]: Reached target paths.target - Path Units.
Sep 4 00:03:14.852484 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 00:03:14.852492 systemd[1]: Reached target swap.target - Swaps.
Sep 4 00:03:14.852501 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 00:03:14.852509 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 00:03:14.852518 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 00:03:14.852526 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 00:03:14.852537 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 4 00:03:14.852545 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 00:03:14.852554 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 00:03:14.852564 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 00:03:14.852573 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 00:03:14.852581 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 00:03:14.852590 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 00:03:14.852601 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 00:03:14.852613 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 4 00:03:14.852621 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 00:03:14.852630 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 00:03:14.852638 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 00:03:14.852647 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 00:03:14.852656 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 00:03:14.852667 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 00:03:14.852675 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 00:03:14.852704 systemd-journald[220]: Collecting audit messages is disabled.
Sep 4 00:03:14.852736 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 00:03:14.852745 systemd-journald[220]: Journal started
Sep 4 00:03:14.852764 systemd-journald[220]: Runtime Journal (/run/log/journal/a83164fc9b064d6f81d65e67321f6382) is 6M, max 48.6M, 42.5M free.
Sep 4 00:03:14.858603 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 00:03:14.846773 systemd-modules-load[222]: Inserted module 'overlay'
Sep 4 00:03:14.860984 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 00:03:14.879814 systemd-tmpfiles[234]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 4 00:03:14.912884 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 00:03:14.912923 kernel: Bridge firewalling registered
Sep 4 00:03:14.880133 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 00:03:14.886128 systemd-modules-load[222]: Inserted module 'br_netfilter'
Sep 4 00:03:14.913200 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 00:03:14.915789 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 00:03:14.917882 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 00:03:14.922228 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 00:03:14.923070 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 00:03:14.924993 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 00:03:14.944223 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 00:03:14.944543 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 00:03:14.948104 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 00:03:14.963050 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 00:03:14.964221 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 00:03:14.988831 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e
Sep 4 00:03:15.001753 systemd-resolved[253]: Positive Trust Anchors:
Sep 4 00:03:15.001772 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 00:03:15.001809 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 00:03:15.004647 systemd-resolved[253]: Defaulting to hostname 'linux'.
Sep 4 00:03:15.010354 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 00:03:15.011562 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 00:03:15.112887 kernel: SCSI subsystem initialized
Sep 4 00:03:15.122890 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 00:03:15.134876 kernel: iscsi: registered transport (tcp)
Sep 4 00:03:15.160120 kernel: iscsi: registered transport (qla4xxx)
Sep 4 00:03:15.160180 kernel: QLogic iSCSI HBA Driver
Sep 4 00:03:15.187070 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 00:03:15.218360 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 00:03:15.222489 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 00:03:15.291011 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 00:03:15.294773 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 00:03:15.369923 kernel: raid6: avx2x4 gen() 22520 MB/s
Sep 4 00:03:15.386906 kernel: raid6: avx2x2 gen() 30306 MB/s
Sep 4 00:03:15.403940 kernel: raid6: avx2x1 gen() 25437 MB/s
Sep 4 00:03:15.403974 kernel: raid6: using algorithm avx2x2 gen() 30306 MB/s
Sep 4 00:03:15.421938 kernel: raid6: .... xor() 19665 MB/s, rmw enabled
Sep 4 00:03:15.421978 kernel: raid6: using avx2x2 recovery algorithm
Sep 4 00:03:15.442887 kernel: xor: automatically using best checksumming function avx
Sep 4 00:03:15.612908 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 00:03:15.623204 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 00:03:15.625443 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 00:03:15.657011 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Sep 4 00:03:15.663272 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 00:03:15.666158 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 00:03:15.702902 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation
Sep 4 00:03:15.742176 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 00:03:15.747134 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 00:03:15.840172 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 00:03:15.845981 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 00:03:15.876898 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 4 00:03:15.881924 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 4 00:03:15.888625 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 00:03:15.888653 kernel: GPT:9289727 != 19775487
Sep 4 00:03:15.888666 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 00:03:15.888679 kernel: GPT:9289727 != 19775487
Sep 4 00:03:15.890421 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 00:03:15.890448 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 00:03:15.907871 kernel: cryptd: max_cpu_qlen set to 1000
Sep 4 00:03:15.907932 kernel: libata version 3.00 loaded.
Sep 4 00:03:15.915878 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Sep 4 00:03:15.918903 kernel: AES CTR mode by8 optimization enabled
Sep 4 00:03:15.932566 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 00:03:15.932698 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 00:03:15.936528 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 00:03:15.943577 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 00:03:15.947885 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 00:03:15.954246 kernel: ahci 0000:00:1f.2: version 3.0
Sep 4 00:03:15.954499 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 4 00:03:15.958603 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 4 00:03:15.958869 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 4 00:03:15.959753 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 4 00:03:15.972873 kernel: scsi host0: ahci
Sep 4 00:03:15.973184 kernel: scsi host1: ahci
Sep 4 00:03:15.976186 kernel: scsi host2: ahci
Sep 4 00:03:15.976413 kernel: scsi host3: ahci
Sep 4 00:03:15.977054 kernel: scsi host4: ahci
Sep 4 00:03:15.980658 kernel: scsi host5: ahci
Sep 4 00:03:15.980901 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Sep 4 00:03:15.980918 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Sep 4 00:03:15.981538 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Sep 4 00:03:15.982424 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Sep 4 00:03:15.983300 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Sep 4 00:03:15.984165 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Sep 4 00:03:15.984148 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 4 00:03:15.996392 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 4 00:03:16.016256 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 4 00:03:16.016394 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 4 00:03:16.026183 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 00:03:16.029253 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 00:03:16.068903 disk-uuid[633]: Primary Header is updated.
Sep 4 00:03:16.068903 disk-uuid[633]: Secondary Entries is updated.
Sep 4 00:03:16.068903 disk-uuid[633]: Secondary Header is updated.
Sep 4 00:03:16.106069 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 00:03:16.106097 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 00:03:16.102933 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 00:03:16.291905 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 4 00:03:16.291965 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 4 00:03:16.293425 kernel: ata3.00: LPM support broken, forcing max_power
Sep 4 00:03:16.293576 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 4 00:03:16.293590 kernel: ata3.00: applying bridge limits
Sep 4 00:03:16.295278 kernel: ata3.00: LPM support broken, forcing max_power
Sep 4 00:03:16.295311 kernel: ata3.00: configured for UDMA/100
Sep 4 00:03:16.300951 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 4 00:03:16.300988 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 4 00:03:16.301002 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 4 00:03:16.302883 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 4 00:03:16.303875 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 4 00:03:16.342385 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 4 00:03:16.342637 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 4 00:03:16.358889 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 4 00:03:16.740608 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 00:03:16.742397 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 00:03:16.744011 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 00:03:16.745288 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 00:03:16.748698 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 00:03:16.783651 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 00:03:17.078827 disk-uuid[634]: The operation has completed successfully.
Sep 4 00:03:17.080307 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 00:03:17.110538 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 00:03:17.110699 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 00:03:17.154833 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 00:03:17.178863 sh[664]: Success
Sep 4 00:03:17.208482 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 00:03:17.208583 kernel: device-mapper: uevent: version 1.0.3
Sep 4 00:03:17.210048 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 4 00:03:17.238478 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 4 00:03:17.325813 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 00:03:17.338498 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 00:03:17.376044 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 00:03:17.399723 kernel: BTRFS: device fsid 8a9c2e34-3d3c-49a9-acce-59bf90003071 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (676)
Sep 4 00:03:17.399795 kernel: BTRFS info (device dm-0): first mount of filesystem 8a9c2e34-3d3c-49a9-acce-59bf90003071
Sep 4 00:03:17.400991 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 4 00:03:17.431260 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 00:03:17.431357 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 4 00:03:17.436391 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 00:03:17.440352 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 4 00:03:17.451256 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 00:03:17.452694 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 00:03:17.477743 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 00:03:17.555481 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (714)
Sep 4 00:03:17.555572 kernel: BTRFS info (device vda6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d
Sep 4 00:03:17.555601 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 00:03:17.571903 kernel: BTRFS info (device vda6): turning on async discard
Sep 4 00:03:17.572004 kernel: BTRFS info (device vda6): enabling free space tree
Sep 4 00:03:17.579901 kernel: BTRFS info (device vda6): last unmount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d
Sep 4 00:03:17.585829 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 00:03:17.590115 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 00:03:18.139201 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 00:03:18.311634 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 00:03:18.353690 ignition[758]: Ignition 2.21.0
Sep 4 00:03:18.353712 ignition[758]: Stage: fetch-offline
Sep 4 00:03:18.353779 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Sep 4 00:03:18.353795 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 00:03:18.354282 ignition[758]: parsed url from cmdline: ""
Sep 4 00:03:18.354302 ignition[758]: no config URL provided
Sep 4 00:03:18.354324 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 00:03:18.354360 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Sep 4 00:03:18.367276 ignition[758]: op(1): [started] loading QEMU firmware config module
Sep 4 00:03:18.367295 ignition[758]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 4 00:03:18.380832 systemd-networkd[855]: lo: Link UP
Sep 4 00:03:18.380844 systemd-networkd[855]: lo: Gained carrier
Sep 4 00:03:18.383191 systemd-networkd[855]: Enumeration completed
Sep 4 00:03:18.383333 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 00:03:18.384298 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 00:03:18.384304 systemd-networkd[855]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 00:03:18.389959 systemd-networkd[855]: eth0: Link UP
Sep 4 00:03:18.390254 systemd-networkd[855]: eth0: Gained carrier
Sep 4 00:03:18.398874 ignition[758]: op(1): [finished] loading QEMU firmware config module
Sep 4 00:03:18.390267 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 00:03:18.395217 systemd[1]: Reached target network.target - Network.
Sep 4 00:03:18.412964 systemd-networkd[855]: eth0: DHCPv4 address 10.0.0.105/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 00:03:18.460500 ignition[758]: parsing config with SHA512: 3c1fab9cb27619942fccf9a1d6f2382023fff02ad12bde5d9e01f5e6825a9ed0d4afc5828395cc04b2015080a6ba9393c45a4003a8281b8e8d5513c028dd8eee
Sep 4 00:03:18.468424 unknown[758]: fetched base config from "system"
Sep 4 00:03:18.468445 unknown[758]: fetched user config from "qemu"
Sep 4 00:03:18.470689 ignition[758]: fetch-offline: fetch-offline passed
Sep 4 00:03:18.470808 ignition[758]: Ignition finished successfully
Sep 4 00:03:18.476334 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 00:03:18.480075 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 4 00:03:18.483334 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 00:03:18.548689 ignition[863]: Ignition 2.21.0
Sep 4 00:03:18.548710 ignition[863]: Stage: kargs
Sep 4 00:03:18.548978 ignition[863]: no configs at "/usr/lib/ignition/base.d"
Sep 4 00:03:18.548996 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 00:03:18.553394 ignition[863]: kargs: kargs passed
Sep 4 00:03:18.554126 ignition[863]: Ignition finished successfully
Sep 4 00:03:18.559185 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 00:03:18.561470 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 00:03:18.618277 ignition[871]: Ignition 2.21.0
Sep 4 00:03:18.618292 ignition[871]: Stage: disks
Sep 4 00:03:18.618427 ignition[871]: no configs at "/usr/lib/ignition/base.d"
Sep 4 00:03:18.618438 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 00:03:18.620358 ignition[871]: disks: disks passed
Sep 4 00:03:18.620481 ignition[871]: Ignition finished successfully
Sep 4 00:03:18.627646 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 00:03:18.629102 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 00:03:18.631141 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 00:03:18.632498 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 00:03:18.634748 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 00:03:18.637168 systemd[1]: Reached target basic.target - Basic System.
Sep 4 00:03:18.639533 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 00:03:18.680929 systemd-fsck[881]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 4 00:03:18.689476 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 00:03:18.692587 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 00:03:18.804870 kernel: EXT4-fs (vda9): mounted filesystem c3518c93-f823-4477-a620-ff9666a59be5 r/w with ordered data mode. Quota mode: none.
Sep 4 00:03:18.805216 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 00:03:18.805809 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 00:03:18.809592 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 00:03:18.811449 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 00:03:18.812572 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 00:03:18.812614 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 00:03:18.812637 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 00:03:18.830194 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 00:03:18.831549 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 00:03:18.838874 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (889)
Sep 4 00:03:18.841382 kernel: BTRFS info (device vda6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d
Sep 4 00:03:18.841406 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 00:03:18.845459 kernel: BTRFS info (device vda6): turning on async discard
Sep 4 00:03:18.845506 kernel: BTRFS info (device vda6): enabling free space tree
Sep 4 00:03:18.847330 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 00:03:18.874078 initrd-setup-root[913]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 00:03:18.878460 initrd-setup-root[920]: cut: /sysroot/etc/group: No such file or directory
Sep 4 00:03:18.883564 initrd-setup-root[927]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 00:03:18.887930 initrd-setup-root[934]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 00:03:19.123809 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 00:03:19.128182 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 00:03:19.131380 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 00:03:19.161287 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 00:03:19.162652 kernel: BTRFS info (device vda6): last unmount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d
Sep 4 00:03:19.178371 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 00:03:19.208084 ignition[1002]: INFO : Ignition 2.21.0
Sep 4 00:03:19.208084 ignition[1002]: INFO : Stage: mount
Sep 4 00:03:19.210182 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 00:03:19.210182 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 00:03:19.212684 ignition[1002]: INFO : mount: mount passed
Sep 4 00:03:19.212684 ignition[1002]: INFO : Ignition finished successfully
Sep 4 00:03:19.215553 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 00:03:19.219512 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 00:03:19.725218 systemd-networkd[855]: eth0: Gained IPv6LL
Sep 4 00:03:19.821352 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 00:03:19.862963 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1015)
Sep 4 00:03:19.865482 kernel: BTRFS info (device vda6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d
Sep 4 00:03:19.865535 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 00:03:19.884075 kernel: BTRFS info (device vda6): turning on async discard
Sep 4 00:03:19.884168 kernel: BTRFS info (device vda6): enabling free space tree
Sep 4 00:03:19.896558 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 00:03:19.981584 ignition[1033]: INFO : Ignition 2.21.0
Sep 4 00:03:19.981584 ignition[1033]: INFO : Stage: files
Sep 4 00:03:19.984044 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 00:03:19.984044 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 00:03:20.002169 ignition[1033]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 00:03:20.006908 ignition[1033]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 00:03:20.006908 ignition[1033]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 00:03:20.015194 ignition[1033]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 00:03:20.017185 ignition[1033]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 00:03:20.019222 unknown[1033]: wrote ssh authorized keys file for user: core
Sep 4 00:03:20.023489 ignition[1033]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 00:03:20.026151 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 4 00:03:20.026151 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 4 00:03:20.097009 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 00:03:21.861103 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 4 00:03:21.863602 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 00:03:21.863602 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 4 00:03:21.979094 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 4 00:03:22.378469 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 00:03:22.381130 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 00:03:22.381130 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 00:03:22.381130 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 00:03:22.381130 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 00:03:22.381130 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 00:03:22.381130 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 00:03:22.381130 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 00:03:22.381130 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 00:03:22.405675 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 00:03:22.408507 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 00:03:22.408507 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 00:03:22.414004 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 00:03:22.417049 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 00:03:22.419741 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 4 00:03:22.811746 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 4 00:03:23.345560 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 00:03:23.345560 ignition[1033]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 4 00:03:23.350398 ignition[1033]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 00:03:23.358822 ignition[1033]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 00:03:23.358822 ignition[1033]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 4 00:03:23.358822 ignition[1033]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 4 00:03:23.364520 ignition[1033]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 00:03:23.364520 ignition[1033]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 00:03:23.364520 ignition[1033]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 4 00:03:23.364520 ignition[1033]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 4 00:03:23.421482 ignition[1033]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 00:03:23.446053 ignition[1033]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 00:03:23.447951 ignition[1033]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 4 00:03:23.447951 ignition[1033]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 00:03:23.447951 ignition[1033]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 00:03:23.447951 ignition[1033]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 00:03:23.447951 ignition[1033]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 00:03:23.447951 ignition[1033]: INFO : files: files passed
Sep 4 00:03:23.447951 ignition[1033]: INFO : Ignition finished successfully
Sep 4 00:03:23.461217 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 00:03:23.463723 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 00:03:23.466963 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 00:03:23.596056 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 00:03:23.596218 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 00:03:23.600201 initrd-setup-root-after-ignition[1061]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 4 00:03:23.604654 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 00:03:23.604654 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 00:03:23.609885 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 00:03:23.608084 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 00:03:23.610484 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 00:03:23.613940 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 00:03:23.674014 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 00:03:23.674208 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 00:03:23.677449 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 00:03:23.679356 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 00:03:23.681920 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 00:03:23.683301 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 00:03:23.723733 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 00:03:23.728354 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 00:03:23.758699 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 00:03:23.760160 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 00:03:23.760486 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 00:03:23.760901 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 00:03:23.761087 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 00:03:23.761865 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 00:03:23.762441 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 00:03:23.762823 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 00:03:23.763410 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 00:03:23.763809 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 00:03:23.764362 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 4 00:03:23.764743 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 00:03:23.765330 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 00:03:23.765721 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 00:03:23.766304 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 00:03:23.766664 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 00:03:23.767214 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 00:03:23.767373 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 00:03:23.795156 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 00:03:23.796506 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 00:03:23.797698 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 00:03:23.800127 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 00:03:23.803941 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 00:03:23.804131 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 00:03:23.807487 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 00:03:23.807649 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 00:03:23.811281 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 00:03:23.811433 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 00:03:23.816953 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 00:03:23.819995 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 00:03:23.820193 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 00:03:23.823983 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 00:03:23.824124 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 00:03:23.825180 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 00:03:23.825301 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 00:03:23.827170 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 00:03:23.827288 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 00:03:23.829109 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 00:03:23.829225 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 00:03:23.835031 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 00:03:23.836019 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 00:03:23.836138 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 00:03:23.837281 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 00:03:23.841105 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 00:03:23.841274 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 00:03:23.843623 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 00:03:23.843776 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 00:03:23.852806 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 00:03:23.853012 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 00:03:23.874377 ignition[1087]: INFO : Ignition 2.21.0
Sep 4 00:03:23.874377 ignition[1087]: INFO : Stage: umount
Sep 4 00:03:23.876601 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 00:03:23.876601 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 00:03:23.876601 ignition[1087]: INFO : umount: umount passed
Sep 4 00:03:23.876601 ignition[1087]: INFO : Ignition finished successfully
Sep 4 00:03:23.880958 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 00:03:23.881937 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 00:03:23.882111 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 00:03:23.884439 systemd[1]: Stopped target network.target - Network.
Sep 4 00:03:23.888442 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 00:03:23.889503 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 00:03:23.891776 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 00:03:23.891991 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 00:03:23.895070 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 00:03:23.895194 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 00:03:23.895343 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 00:03:23.895400 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 00:03:23.898437 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 00:03:23.899343 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 00:03:23.901640 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 00:03:23.901826 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 00:03:23.904397 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 00:03:23.904549 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 00:03:23.912998 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 00:03:23.913149 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 00:03:23.919178 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 4 00:03:23.919461 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 00:03:23.919587 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 00:03:23.922979 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 4 00:03:23.923997 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 4 00:03:23.926202 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 00:03:23.926257 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 00:03:23.929488 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 00:03:23.931292 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 00:03:23.931347 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 00:03:23.932495 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 00:03:23.932548 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 00:03:23.937347 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 00:03:23.937413 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 00:03:23.938808 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 00:03:23.938899 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 00:03:23.944507 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 00:03:23.946599 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 00:03:23.946666 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 4 00:03:23.968877 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 00:03:23.969069 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 00:03:23.971707 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 00:03:23.971891 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 00:03:23.974469 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 00:03:23.974571 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 00:03:23.976305 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 00:03:23.976346 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 00:03:23.978255 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 00:03:23.978310 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 00:03:23.981427 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 00:03:23.981492 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 00:03:23.985549 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 00:03:23.985608 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 00:03:23.989264 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 00:03:23.990048 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 4 00:03:23.990114 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 00:03:23.994645 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 00:03:23.994709 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 00:03:23.996995 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 00:03:23.997062 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 00:03:24.002095 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 4 00:03:24.002174 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 4 00:03:24.002243 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 00:03:24.016751 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 00:03:24.016985 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 00:03:24.019574 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 00:03:24.021556 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 00:03:24.035213 systemd[1]: Switching root.
Sep 4 00:03:24.083065 systemd-journald[220]: Journal stopped
Sep 4 00:03:26.167291 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 4 00:03:26.167375 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 00:03:26.167398 kernel: SELinux: policy capability open_perms=1
Sep 4 00:03:26.167414 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 00:03:26.167432 kernel: SELinux: policy capability always_check_network=0
Sep 4 00:03:26.167445 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 00:03:26.167460 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 00:03:26.167475 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 00:03:26.167489 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 00:03:26.167503 kernel: SELinux: policy capability userspace_initial_context=0
Sep 4 00:03:26.167518 kernel: audit: type=1403 audit(1756944204.858:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 00:03:26.167534 systemd[1]: Successfully loaded SELinux policy in 53.140ms.
Sep 4 00:03:26.167569 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.550ms.
Sep 4 00:03:26.167589 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 00:03:26.167605 systemd[1]: Detected virtualization kvm.
Sep 4 00:03:26.167621 systemd[1]: Detected architecture x86-64.
Sep 4 00:03:26.167637 systemd[1]: Detected first boot.
Sep 4 00:03:26.167652 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 00:03:26.167668 zram_generator::config[1132]: No configuration found.
Sep 4 00:03:26.167685 kernel: Guest personality initialized and is inactive
Sep 4 00:03:26.167700 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 4 00:03:26.167717 kernel: Initialized host personality
Sep 4 00:03:26.167742 kernel: NET: Registered PF_VSOCK protocol family
Sep 4 00:03:26.167760 systemd[1]: Populated /etc with preset unit settings.
Sep 4 00:03:26.167778 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 4 00:03:26.167795 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 00:03:26.167809 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 00:03:26.167824 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 00:03:26.167838 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 00:03:26.167936 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 00:03:26.167959 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 00:03:26.167977 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 00:03:26.167994 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 00:03:26.168011 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 00:03:26.168029 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 00:03:26.168046 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 00:03:26.168063 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 00:03:26.168080 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 00:03:26.168095 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 00:03:26.168113 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 00:03:26.168128 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 00:03:26.168145 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 00:03:26.168162 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 00:03:26.168177 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 00:03:26.168193 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 00:03:26.168209 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 00:03:26.168228 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 00:03:26.168244 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 00:03:26.168260 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 00:03:26.168275 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 00:03:26.168291 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 00:03:26.168308 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 00:03:26.168322 systemd[1]: Reached target swap.target - Swaps.
Sep 4 00:03:26.168338 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 00:03:26.168354 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 00:03:26.168373 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 4 00:03:26.168389 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 00:03:26.168405 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 00:03:26.168421 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 00:03:26.168436 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 00:03:26.168452 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 00:03:26.168468 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 00:03:26.168485 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 00:03:26.168499 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:03:26.168517 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 00:03:26.168532 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 00:03:26.168549 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 00:03:26.168565 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 00:03:26.168581 systemd[1]: Reached target machines.target - Containers.
Sep 4 00:03:26.168597 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 00:03:26.168613 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 00:03:26.168628 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 00:03:26.168643 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 00:03:26.168662 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 00:03:26.168677 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 00:03:26.168693 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 00:03:26.168710 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 00:03:26.168736 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 00:03:26.168754 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 00:03:26.168771 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 00:03:26.168787 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 00:03:26.168807 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 00:03:26.168822 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 00:03:26.168839 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 00:03:26.168871 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 00:03:26.168887 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 00:03:26.168904 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 00:03:26.168919 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 00:03:26.168934 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 4 00:03:26.168951 kernel: ACPI: bus type drm_connector registered
Sep 4 00:03:26.168976 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 00:03:26.168993 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 00:03:26.169011 systemd[1]: Stopped verity-setup.service.
Sep 4 00:03:26.169027 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:03:26.169047 kernel: fuse: init (API version 7.41)
Sep 4 00:03:26.169061 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 00:03:26.169077 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 00:03:26.169093 kernel: loop: module loaded
Sep 4 00:03:26.169111 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 00:03:26.169126 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 00:03:26.169145 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 00:03:26.169161 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 00:03:26.169176 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 00:03:26.169191 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 00:03:26.169238 systemd-journald[1203]: Collecting audit messages is disabled.
Sep 4 00:03:26.169268 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 00:03:26.169283 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 00:03:26.169299 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 00:03:26.169318 systemd-journald[1203]: Journal started
Sep 4 00:03:26.169346 systemd-journald[1203]: Runtime Journal (/run/log/journal/a83164fc9b064d6f81d65e67321f6382) is 6M, max 48.6M, 42.5M free.
Sep 4 00:03:25.874221 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 00:03:26.172783 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 00:03:26.172822 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 00:03:25.895268 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 4 00:03:25.895867 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 00:03:26.176254 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 00:03:26.177381 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 00:03:26.180541 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 00:03:26.183072 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 00:03:26.185215 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 00:03:26.185483 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 00:03:26.187344 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 00:03:26.187612 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 00:03:26.189459 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 00:03:26.191348 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 00:03:26.196421 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 00:03:26.198392 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 4 00:03:26.219142 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 00:03:26.227637 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 00:03:26.231770 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 00:03:26.233425 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 00:03:26.233471 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 00:03:26.236779 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 4 00:03:26.242628 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 00:03:26.253369 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 00:03:26.256595 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 00:03:26.272579 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 00:03:26.276348 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 00:03:26.280054 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 00:03:26.285935 systemd-journald[1203]: Time spent on flushing to /var/log/journal/a83164fc9b064d6f81d65e67321f6382 is 14.582ms for 983 entries.
Sep 4 00:03:26.285935 systemd-journald[1203]: System Journal (/var/log/journal/a83164fc9b064d6f81d65e67321f6382) is 8M, max 195.6M, 187.6M free.
Sep 4 00:03:26.686102 systemd-journald[1203]: Received client request to flush runtime journal.
Sep 4 00:03:26.686233 kernel: loop0: detected capacity change from 0 to 113872
Sep 4 00:03:26.686266 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 00:03:26.686293 kernel: loop1: detected capacity change from 0 to 224512
Sep 4 00:03:26.283004 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 00:03:26.290764 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 00:03:26.294500 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 00:03:26.328084 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 00:03:26.332222 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 00:03:26.338361 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 00:03:26.340008 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 00:03:26.440089 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 00:03:26.563765 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 00:03:26.566375 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 00:03:26.570988 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 4 00:03:26.572825 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 00:03:26.576961 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 00:03:26.689238 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 00:03:26.701081 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Sep 4 00:03:26.701554 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Sep 4 00:03:26.703893 kernel: loop2: detected capacity change from 0 to 146240
Sep 4 00:03:26.707309 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 4 00:03:26.709504 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 00:03:26.741024 kernel: loop3: detected capacity change from 0 to 113872
Sep 4 00:03:26.753894 kernel: loop4: detected capacity change from 0 to 224512
Sep 4 00:03:26.765167 kernel: loop5: detected capacity change from 0 to 146240
Sep 4 00:03:26.781748 (sd-merge)[1274]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 4 00:03:26.782423 (sd-merge)[1274]: Merged extensions into '/usr'.
Sep 4 00:03:26.787171 systemd[1]: Reload requested from client PID 1251 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 00:03:26.787279 systemd[1]: Reloading...
Sep 4 00:03:26.843888 zram_generator::config[1299]: No configuration found.
Sep 4 00:03:26.944747 ldconfig[1246]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 00:03:26.961319 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 00:03:27.115752 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 00:03:27.116218 systemd[1]: Reloading finished in 328 ms.
Sep 4 00:03:27.174487 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 00:03:27.177309 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 00:03:27.206223 systemd[1]: Starting ensure-sysext.service...
Sep 4 00:03:27.210866 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 00:03:27.246124 systemd[1]: Reload requested from client PID 1337 ('systemctl') (unit ensure-sysext.service)...
Sep 4 00:03:27.246149 systemd[1]: Reloading...
Sep 4 00:03:27.261176 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 4 00:03:27.261234 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 4 00:03:27.261638 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 00:03:27.262017 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 00:03:27.263442 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 00:03:27.263948 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
Sep 4 00:03:27.264139 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
Sep 4 00:03:27.270421 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 00:03:27.270598 systemd-tmpfiles[1338]: Skipping /boot
Sep 4 00:03:27.288821 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 00:03:27.289033 systemd-tmpfiles[1338]: Skipping /boot
Sep 4 00:03:27.361054 zram_generator::config[1365]: No configuration found.
Sep 4 00:03:27.526279 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 00:03:27.664447 systemd[1]: Reloading finished in 417 ms.
Sep 4 00:03:27.693818 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 00:03:27.728363 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 00:03:27.742444 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 00:03:27.746676 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 00:03:27.751532 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 00:03:27.762582 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 00:03:27.767503 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 00:03:27.771526 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 00:03:27.776898 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:03:27.777145 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 00:03:27.785138 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 00:03:27.792415 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 00:03:27.797252 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 00:03:27.798885 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 00:03:27.799031 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 00:03:27.808448 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 00:03:27.809914 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:03:27.813166 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 00:03:27.816169 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 00:03:27.816664 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 00:03:27.819365 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 00:03:27.819794 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 00:03:27.822209 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 00:03:27.826276 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 00:03:27.841206 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:03:27.842582 systemd-udevd[1409]: Using default interface naming scheme 'v255'.
Sep 4 00:03:27.843041 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 00:03:27.847072 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 00:03:27.852315 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 00:03:27.864493 augenrules[1439]: No rules
Sep 4 00:03:27.866506 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 00:03:27.867894 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 00:03:27.868025 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 00:03:27.870280 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 00:03:27.871497 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:03:27.874334 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 00:03:27.876814 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 00:03:27.877119 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 00:03:27.879585 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 00:03:27.881524 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 00:03:27.884173 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 00:03:27.884469 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 00:03:27.887640 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 00:03:27.887976 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 00:03:27.890367 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 00:03:27.892517 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 00:03:27.892781 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 00:03:27.910561 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 00:03:27.927763 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:03:27.937033 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 00:03:27.938240 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 00:03:27.941156 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 00:03:27.946677 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 00:03:27.956914 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 00:03:27.960105 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 00:03:27.961476 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 00:03:27.961523 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 00:03:27.964987 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 00:03:27.966310 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 00:03:27.966352 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:03:27.975117 systemd[1]: Finished ensure-sysext.service.
Sep 4 00:03:27.976453 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 00:03:27.976712 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 00:03:27.978370 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 00:03:27.978602 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 00:03:27.980410 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 00:03:27.980641 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 00:03:27.983967 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 00:03:27.984203 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 00:03:27.987377 augenrules[1485]: /sbin/augenrules: No change
Sep 4 00:03:27.996216 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 00:03:27.996306 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 00:03:28.003151 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 4 00:03:28.016899 augenrules[1514]: No rules
Sep 4 00:03:28.017759 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 00:03:28.021192 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 00:03:28.046055 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 00:03:28.053583 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 00:03:28.059399 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 00:03:28.095190 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 00:03:28.101875 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 00:03:28.118899 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 4 00:03:28.123883 kernel: ACPI: button: Power Button [PWRF]
Sep 4 00:03:28.173116 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 4 00:03:28.173433 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 4 00:03:28.173753 systemd-networkd[1492]: lo: Link UP
Sep 4 00:03:28.174131 systemd-networkd[1492]: lo: Gained carrier
Sep 4 00:03:28.178072 systemd-networkd[1492]: Enumeration completed
Sep 4 00:03:28.178271 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 00:03:28.178812 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 00:03:28.179418 systemd-networkd[1492]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 00:03:28.180177 systemd-networkd[1492]: eth0: Link UP
Sep 4 00:03:28.180389 systemd-networkd[1492]: eth0: Gained carrier
Sep 4 00:03:28.180466 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 00:03:28.181508 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 4 00:03:28.186486 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 00:03:28.190919 systemd-networkd[1492]: eth0: DHCPv4 address 10.0.0.105/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 00:03:28.195488 systemd-resolved[1407]: Positive Trust Anchors:
Sep 4 00:03:28.196210 systemd-resolved[1407]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 00:03:28.196301 systemd-resolved[1407]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 00:03:28.200260 systemd-resolved[1407]: Defaulting to hostname 'linux'.
Sep 4 00:03:28.202614 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 00:03:28.203894 systemd[1]: Reached target network.target - Network.
Sep 4 00:03:28.204875 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 00:03:28.238705 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 4 00:03:28.245551 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 4 00:03:28.247049 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 00:03:28.248449 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 00:03:28.800181 systemd-timesyncd[1513]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 4 00:03:28.800230 systemd-timesyncd[1513]: Initial clock synchronization to Thu 2025-09-04 00:03:28.800076 UTC.
Sep 4 00:03:28.800550 systemd-resolved[1407]: Clock change detected. Flushing caches.
Sep 4 00:03:28.800877 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 00:03:28.802158 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 4 00:03:28.803454 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 00:03:28.804777 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 00:03:28.804815 systemd[1]: Reached target paths.target - Path Units.
Sep 4 00:03:28.805828 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 00:03:28.807950 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 00:03:28.809162 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 00:03:28.810442 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 00:03:28.812469 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 00:03:28.815328 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 00:03:28.822718 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 4 00:03:28.824223 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 4 00:03:28.825551 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 4 00:03:28.864369 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 00:03:28.867498 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 4 00:03:28.870457 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 00:03:28.878130 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 00:03:28.879467 systemd[1]: Reached target basic.target - Basic System.
Sep 4 00:03:28.880803 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 00:03:28.880939 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 00:03:28.883932 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 00:03:28.892904 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 00:03:28.913065 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 00:03:28.917346 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 00:03:28.924042 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 00:03:28.925447 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 00:03:28.929110 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 4 00:03:28.935098 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 00:03:28.937932 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 00:03:28.950684 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 00:03:28.957086 jq[1559]: false
Sep 4 00:03:28.959095 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 00:03:28.968393 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Refreshing passwd entry cache
Sep 4 00:03:28.967777 oslogin_cache_refresh[1561]: Refreshing passwd entry cache
Sep 4 00:03:28.973190 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 00:03:28.976759 extend-filesystems[1560]: Found /dev/vda6
Sep 4 00:03:28.979829 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 00:03:28.982552 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Failure getting users, quitting
Sep 4 00:03:28.982552 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 4 00:03:28.982552 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Refreshing group entry cache
Sep 4 00:03:28.981709 oslogin_cache_refresh[1561]: Failure getting users, quitting
Sep 4 00:03:28.981735 oslogin_cache_refresh[1561]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 4 00:03:28.981805 oslogin_cache_refresh[1561]: Refreshing group entry cache
Sep 4 00:03:28.983254 extend-filesystems[1560]: Found /dev/vda9
Sep 4 00:03:28.985245 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 00:03:28.986802 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 00:03:28.991483 extend-filesystems[1560]: Checking size of /dev/vda9
Sep 4 00:03:29.003837 kernel: kvm_amd: TSC scaling supported
Sep 4 00:03:29.003886 kernel: kvm_amd: Nested Virtualization enabled
Sep 4 00:03:29.003915 kernel: kvm_amd: Nested Paging enabled
Sep 4 00:03:29.003979 kernel: kvm_amd: LBR virtualization supported
Sep 4 00:03:29.004014 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 4 00:03:29.004063 kernel: kvm_amd: Virtual GIF supported
Sep 4 00:03:28.998011 oslogin_cache_refresh[1561]: Failure getting groups, quitting
Sep 4 00:03:29.004203 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Failure getting groups, quitting
Sep 4 00:03:29.004203 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 4 00:03:28.998033 oslogin_cache_refresh[1561]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 4 00:03:29.015402 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 00:03:29.035896 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 00:03:29.038903 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 00:03:29.111358 update_engine[1573]: I20250904 00:03:29.100562 1573 main.cc:92] Flatcar Update Engine starting
Sep 4 00:03:29.041615 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 00:03:29.042496 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 4 00:03:29.042975 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 4 00:03:29.057709 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 00:03:29.058219 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 00:03:29.128330 jq[1578]: true
Sep 4 00:03:29.137334 (ntainerd)[1585]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 00:03:29.178543 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 00:03:29.186047 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 00:03:29.188858 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 00:03:29.192884 extend-filesystems[1560]: Resized partition /dev/vda9
Sep 4 00:03:29.214242 extend-filesystems[1601]: resize2fs 1.47.2 (1-Jan-2025)
Sep 4 00:03:29.228741 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 4 00:03:29.240727 jq[1598]: true
Sep 4 00:03:29.244752 tar[1584]: linux-amd64/LICENSE
Sep 4 00:03:29.245143 tar[1584]: linux-amd64/helm
Sep 4 00:03:29.318725 dbus-daemon[1557]: [system] SELinux support is enabled
Sep 4 00:03:29.319392 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 00:03:29.381008 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 4 00:03:29.440152 update_engine[1573]: I20250904 00:03:29.377568 1573 update_check_scheduler.cc:74] Next update check in 5m23s
Sep 4 00:03:29.353975 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 00:03:29.354020 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 00:03:29.354186 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 00:03:29.354207 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 00:03:29.375286 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 00:03:29.381531 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 00:03:29.439893 systemd-logind[1569]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 4 00:03:29.439924 systemd-logind[1569]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 4 00:03:29.446188 extend-filesystems[1601]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 4 00:03:29.446188 extend-filesystems[1601]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 4 00:03:29.446188 extend-filesystems[1601]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 4 00:03:29.451474 extend-filesystems[1560]: Resized filesystem in /dev/vda9
Sep 4 00:03:29.448679 systemd-logind[1569]: New seat seat0.
Sep 4 00:03:29.467564 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 00:03:29.469644 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 00:03:29.470137 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 00:03:29.806812 bash[1623]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 00:03:29.820738 sshd_keygen[1603]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 00:03:29.926527 locksmithd[1619]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 00:03:30.093685 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 00:03:30.329723 systemd-networkd[1492]: eth0: Gained IPv6LL
Sep 4 00:03:30.414107 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 00:03:30.418055 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 00:03:30.422797 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 00:03:30.911221 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 4 00:03:30.913350 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 00:03:30.915721 kernel: EDAC MC: Ver: 3.0.0
Sep 4 00:03:30.923980 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 4 00:03:30.936819 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 00:03:30.944972 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 00:03:30.964863 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 00:03:30.974734 systemd[1]: Started sshd@0-10.0.0.105:22-10.0.0.1:44822.service - OpenSSH per-connection server daemon (10.0.0.1:44822).
Sep 4 00:03:30.977140 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 4 00:03:31.014099 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 00:03:31.014612 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 00:03:31.028948 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 00:03:31.038405 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 4 00:03:31.040008 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 4 00:03:31.045021 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 00:03:31.085896 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 00:03:31.102897 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 00:03:31.113128 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 4 00:03:31.116571 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 00:03:31.134525 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 00:03:31.152450 containerd[1585]: time="2025-09-04T00:03:31Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 4 00:03:31.153589 containerd[1585]: time="2025-09-04T00:03:31.153531672Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 4 00:03:31.174583 containerd[1585]: time="2025-09-04T00:03:31.173263078Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.752µs"
Sep 4 00:03:31.174583 containerd[1585]: time="2025-09-04T00:03:31.173314464Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 4 00:03:31.174583 containerd[1585]: time="2025-09-04T00:03:31.173335724Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 4 00:03:31.174583 containerd[1585]: time="2025-09-04T00:03:31.173720005Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 4 00:03:31.174583 containerd[1585]: time="2025-09-04T00:03:31.173748378Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 4 00:03:31.174583 containerd[1585]: time="2025-09-04T00:03:31.173800265Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 4 00:03:31.174583 containerd[1585]: time="2025-09-04T00:03:31.173927925Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 4 00:03:31.174583 containerd[1585]: time="2025-09-04T00:03:31.173952330Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 4 00:03:31.174583 containerd[1585]: time="2025-09-04T00:03:31.174441227Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 4 00:03:31.174583 containerd[1585]: time="2025-09-04T00:03:31.174480851Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 4 00:03:31.174583 containerd[1585]: time="2025-09-04T00:03:31.174507441Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 4 00:03:31.174583 containerd[1585]: time="2025-09-04T00:03:31.174527469Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 4 00:03:31.175049 containerd[1585]: time="2025-09-04T00:03:31.174751559Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 4 00:03:31.175431 containerd[1585]: time="2025-09-04T00:03:31.175320305Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 4 00:03:31.175431 containerd[1585]: time="2025-09-04T00:03:31.175394214Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 4 00:03:31.175431 containerd[1585]: time="2025-09-04T00:03:31.175417638Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 4 00:03:31.177670 containerd[1585]: time="2025-09-04T00:03:31.177589069Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 4 00:03:31.178297 containerd[1585]: time="2025-09-04T00:03:31.178211937Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 4 00:03:31.178347 containerd[1585]: time="2025-09-04T00:03:31.178321202Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 00:03:31.196225 containerd[1585]: time="2025-09-04T00:03:31.196126878Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 4 00:03:31.196391 containerd[1585]: time="2025-09-04T00:03:31.196253545Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 4 00:03:31.196391 containerd[1585]: time="2025-09-04T00:03:31.196281187Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 4 00:03:31.196391 containerd[1585]: time="2025-09-04T00:03:31.196296696Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 4 00:03:31.196391 containerd[1585]: time="2025-09-04T00:03:31.196311173Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 4 00:03:31.196391 containerd[1585]: time="2025-09-04T00:03:31.196324117Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 4 00:03:31.196391 containerd[1585]: time="2025-09-04T00:03:31.196350176Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 4 00:03:31.196391 containerd[1585]: time="2025-09-04T00:03:31.196365485Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 4 00:03:31.196391 containerd[1585]: time="2025-09-04T00:03:31.196386835Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 4 00:03:31.196391 containerd[1585]: time="2025-09-04T00:03:31.196400791Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 4 00:03:31.196673 containerd[1585]: time="2025-09-04T00:03:31.196413034Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 4 00:03:31.196673 containerd[1585]: time="2025-09-04T00:03:31.196428894Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 4 00:03:31.196673 containerd[1585]: time="2025-09-04T00:03:31.196653675Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 4 00:03:31.196673 containerd[1585]: time="2025-09-04T00:03:31.196676468Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 4 00:03:31.199892 containerd[1585]: time="2025-09-04T00:03:31.199545858Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 4 00:03:31.199892 containerd[1585]: time="2025-09-04T00:03:31.199709645Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 4 00:03:31.199892 containerd[1585]: time="2025-09-04T00:03:31.199727148Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 4 00:03:31.199892 containerd[1585]: time="2025-09-04T00:03:31.199757615Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 4 00:03:31.199892 containerd[1585]: time="2025-09-04T00:03:31.199773154Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 4 00:03:31.199892 containerd[1585]: time="2025-09-04T00:03:31.199811536Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 4 00:03:31.199892 containerd[1585]: time="2025-09-04T00:03:31.199826304Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 4 00:03:31.199892 containerd[1585]: time="2025-09-04T00:03:31.199837876Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 4 00:03:31.199892 containerd[1585]: time="2025-09-04T00:03:31.199851902Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 4 00:03:31.199892 containerd[1585]: time="2025-09-04T00:03:31.199939506Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 4 00:03:31.199892 containerd[1585]: time="2025-09-04T00:03:31.199955526Z" level=info msg="Start snapshots syncer"
Sep 4 00:03:31.199892 containerd[1585]: time="2025-09-04T00:03:31.199984260Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 4 00:03:31.200521 containerd[1585]: time="2025-09-04T00:03:31.200281347Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 4 00:03:31.200521 containerd[1585]: time="2025-09-04T00:03:31.200352631Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 4 00:03:31.200801 containerd[1585]: time="2025-09-04T00:03:31.200475662Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 4 00:03:31.202321 containerd[1585]: time="2025-09-04T00:03:31.202161773Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 4 00:03:31.202489 containerd[1585]: time="2025-09-04T00:03:31.202443842Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 4 00:03:31.202489 containerd[1585]: time="2025-09-04T00:03:31.202481162Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 4 00:03:31.202489 containerd[1585]: time="2025-09-04T00:03:31.202498915Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 4 00:03:31.202489 containerd[1585]: time="2025-09-04T00:03:31.202520606Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 4 00:03:31.202489 containerd[1585]: time="2025-09-04T00:03:31.202588283Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 4 00:03:31.202489 containerd[1585]: time="2025-09-04T00:03:31.202610084Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 4 00:03:31.203001 containerd[1585]: time="2025-09-04T00:03:31.202767599Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 4 00:03:31.203001 containerd[1585]: time="2025-09-04T00:03:31.202869610Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 4 00:03:31.203001 containerd[1585]: time="2025-09-04T00:03:31.202956693Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 4 00:03:31.204274 containerd[1585]: time="2025-09-04T00:03:31.203119128Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 4 00:03:31.204274 containerd[1585]: time="2025-09-04T00:03:31.203279278Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 4 00:03:31.204274 containerd[1585]: time="2025-09-04T00:03:31.203394094Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 4 00:03:31.204274 containerd[1585]: time="2025-09-04T00:03:31.203419391Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 4 00:03:31.204274 containerd[1585]: time="2025-09-04T00:03:31.203477280Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 4 00:03:31.204274 containerd[1585]: time="2025-09-04T00:03:31.203512095Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 4 00:03:31.204274 containerd[1585]: time="2025-09-04T00:03:31.203634254Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 4 00:03:31.204274 containerd[1585]: time="2025-09-04T00:03:31.203684147Z" level=info msg="runtime interface created" Sep 4 00:03:31.204274 containerd[1585]: time="2025-09-04T00:03:31.203719664Z" level=info msg="created NRI interface" Sep 4 00:03:31.204274 containerd[1585]: time="2025-09-04T00:03:31.203771211Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 4 00:03:31.204274 containerd[1585]: time="2025-09-04T00:03:31.203819571Z" level=info msg="Connect containerd service" Sep 4 00:03:31.204274 containerd[1585]: time="2025-09-04T00:03:31.203897738Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 00:03:31.212264 containerd[1585]: 
time="2025-09-04T00:03:31.212045555Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 00:03:31.218990 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 44822 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8 Sep 4 00:03:31.222804 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:03:31.245215 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 00:03:31.255308 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 00:03:31.286177 systemd-logind[1569]: New session 1 of user core. Sep 4 00:03:31.323038 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 00:03:31.337170 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 00:03:31.388829 (systemd)[1692]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 00:03:31.393667 systemd-logind[1569]: New session c1 of user core. Sep 4 00:03:31.470754 containerd[1585]: time="2025-09-04T00:03:31.470657663Z" level=info msg="Start subscribing containerd event" Sep 4 00:03:31.471000 containerd[1585]: time="2025-09-04T00:03:31.470962905Z" level=info msg="Start recovering state" Sep 4 00:03:31.471169 containerd[1585]: time="2025-09-04T00:03:31.470840445Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 4 00:03:31.471728 containerd[1585]: time="2025-09-04T00:03:31.471161898Z" level=info msg="Start event monitor" Sep 4 00:03:31.471728 containerd[1585]: time="2025-09-04T00:03:31.471436583Z" level=info msg="Start cni network conf syncer for default" Sep 4 00:03:31.471728 containerd[1585]: time="2025-09-04T00:03:31.471462442Z" level=info msg="Start streaming server" Sep 4 00:03:31.471728 containerd[1585]: time="2025-09-04T00:03:31.471475506Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 4 00:03:31.471728 containerd[1585]: time="2025-09-04T00:03:31.471485084Z" level=info msg="runtime interface starting up..." Sep 4 00:03:31.471728 containerd[1585]: time="2025-09-04T00:03:31.471493229Z" level=info msg="starting plugins..." Sep 4 00:03:31.471728 containerd[1585]: time="2025-09-04T00:03:31.471523717Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 4 00:03:31.472157 containerd[1585]: time="2025-09-04T00:03:31.472019436Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 00:03:31.472296 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 00:03:31.474677 containerd[1585]: time="2025-09-04T00:03:31.474645219Z" level=info msg="containerd successfully booted in 0.323054s" Sep 4 00:03:31.507755 tar[1584]: linux-amd64/README.md Sep 4 00:03:31.536173 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 00:03:31.613722 systemd[1692]: Queued start job for default target default.target. Sep 4 00:03:31.627676 systemd[1692]: Created slice app.slice - User Application Slice. Sep 4 00:03:31.627736 systemd[1692]: Reached target paths.target - Paths. Sep 4 00:03:31.627793 systemd[1692]: Reached target timers.target - Timers. Sep 4 00:03:31.629931 systemd[1692]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 00:03:31.644439 systemd[1692]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Sep 4 00:03:31.644621 systemd[1692]: Reached target sockets.target - Sockets. Sep 4 00:03:31.644672 systemd[1692]: Reached target basic.target - Basic System. Sep 4 00:03:31.644741 systemd[1692]: Reached target default.target - Main User Target. Sep 4 00:03:31.644792 systemd[1692]: Startup finished in 231ms. Sep 4 00:03:31.645502 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 00:03:31.655950 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 00:03:31.724167 systemd[1]: Started sshd@1-10.0.0.105:22-10.0.0.1:53956.service - OpenSSH per-connection server daemon (10.0.0.1:53956). Sep 4 00:03:31.781202 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 53956 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8 Sep 4 00:03:31.783095 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:03:31.788393 systemd-logind[1569]: New session 2 of user core. Sep 4 00:03:31.803855 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 00:03:31.862490 sshd[1715]: Connection closed by 10.0.0.1 port 53956 Sep 4 00:03:31.862844 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Sep 4 00:03:31.875426 systemd[1]: sshd@1-10.0.0.105:22-10.0.0.1:53956.service: Deactivated successfully. Sep 4 00:03:31.877256 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 00:03:31.878157 systemd-logind[1569]: Session 2 logged out. Waiting for processes to exit. Sep 4 00:03:31.881316 systemd[1]: Started sshd@2-10.0.0.105:22-10.0.0.1:53966.service - OpenSSH per-connection server daemon (10.0.0.1:53966). Sep 4 00:03:31.883763 systemd-logind[1569]: Removed session 2. 
Sep 4 00:03:31.932476 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 53966 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8 Sep 4 00:03:31.934100 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:03:31.938803 systemd-logind[1569]: New session 3 of user core. Sep 4 00:03:31.952870 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 00:03:32.008977 sshd[1723]: Connection closed by 10.0.0.1 port 53966 Sep 4 00:03:32.010811 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Sep 4 00:03:32.014872 systemd[1]: sshd@2-10.0.0.105:22-10.0.0.1:53966.service: Deactivated successfully. Sep 4 00:03:32.016805 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 00:03:32.017654 systemd-logind[1569]: Session 3 logged out. Waiting for processes to exit. Sep 4 00:03:32.019504 systemd-logind[1569]: Removed session 3. Sep 4 00:03:32.137274 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:03:32.160766 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 00:03:32.161210 (kubelet)[1733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 00:03:32.162295 systemd[1]: Startup finished in 4.328s (kernel) + 10.224s (initrd) + 6.805s (userspace) = 21.358s. Sep 4 00:03:33.166668 kubelet[1733]: E0904 00:03:33.166590 1733 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 00:03:33.170874 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 00:03:33.171102 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 4 00:03:33.171566 systemd[1]: kubelet.service: Consumed 1.398s CPU time, 264.9M memory peak. Sep 4 00:03:42.034644 systemd[1]: Started sshd@3-10.0.0.105:22-10.0.0.1:47744.service - OpenSSH per-connection server daemon (10.0.0.1:47744). Sep 4 00:03:42.091207 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 47744 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8 Sep 4 00:03:42.092941 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:03:42.098621 systemd-logind[1569]: New session 4 of user core. Sep 4 00:03:42.107948 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 00:03:42.163593 sshd[1748]: Connection closed by 10.0.0.1 port 47744 Sep 4 00:03:42.164084 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Sep 4 00:03:42.180391 systemd[1]: sshd@3-10.0.0.105:22-10.0.0.1:47744.service: Deactivated successfully. Sep 4 00:03:42.182464 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 00:03:42.183318 systemd-logind[1569]: Session 4 logged out. Waiting for processes to exit. Sep 4 00:03:42.186832 systemd[1]: Started sshd@4-10.0.0.105:22-10.0.0.1:47750.service - OpenSSH per-connection server daemon (10.0.0.1:47750). Sep 4 00:03:42.187388 systemd-logind[1569]: Removed session 4. Sep 4 00:03:42.236930 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 47750 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8 Sep 4 00:03:42.238601 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:03:42.243601 systemd-logind[1569]: New session 5 of user core. Sep 4 00:03:42.258835 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 4 00:03:42.309541 sshd[1756]: Connection closed by 10.0.0.1 port 47750 Sep 4 00:03:42.309915 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Sep 4 00:03:42.324013 systemd[1]: sshd@4-10.0.0.105:22-10.0.0.1:47750.service: Deactivated successfully. Sep 4 00:03:42.325989 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 00:03:42.326820 systemd-logind[1569]: Session 5 logged out. Waiting for processes to exit. Sep 4 00:03:42.329967 systemd[1]: Started sshd@5-10.0.0.105:22-10.0.0.1:47760.service - OpenSSH per-connection server daemon (10.0.0.1:47760). Sep 4 00:03:42.330839 systemd-logind[1569]: Removed session 5. Sep 4 00:03:42.383122 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 47760 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8 Sep 4 00:03:42.384794 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:03:42.390878 systemd-logind[1569]: New session 6 of user core. Sep 4 00:03:42.400906 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 00:03:42.457561 sshd[1765]: Connection closed by 10.0.0.1 port 47760 Sep 4 00:03:42.457863 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Sep 4 00:03:42.477320 systemd[1]: sshd@5-10.0.0.105:22-10.0.0.1:47760.service: Deactivated successfully. Sep 4 00:03:42.479375 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 00:03:42.480222 systemd-logind[1569]: Session 6 logged out. Waiting for processes to exit. Sep 4 00:03:42.483608 systemd[1]: Started sshd@6-10.0.0.105:22-10.0.0.1:47766.service - OpenSSH per-connection server daemon (10.0.0.1:47766). Sep 4 00:03:42.484423 systemd-logind[1569]: Removed session 6. 
Sep 4 00:03:42.541994 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 47766 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8 Sep 4 00:03:42.543782 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:03:42.549668 systemd-logind[1569]: New session 7 of user core. Sep 4 00:03:42.563895 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 00:03:42.628646 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 00:03:42.628992 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 00:03:42.655632 sudo[1774]: pam_unix(sudo:session): session closed for user root Sep 4 00:03:42.657530 sshd[1773]: Connection closed by 10.0.0.1 port 47766 Sep 4 00:03:42.657944 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Sep 4 00:03:42.672893 systemd[1]: sshd@6-10.0.0.105:22-10.0.0.1:47766.service: Deactivated successfully. Sep 4 00:03:42.674683 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 00:03:42.675588 systemd-logind[1569]: Session 7 logged out. Waiting for processes to exit. Sep 4 00:03:42.678832 systemd[1]: Started sshd@7-10.0.0.105:22-10.0.0.1:47778.service - OpenSSH per-connection server daemon (10.0.0.1:47778). Sep 4 00:03:42.679368 systemd-logind[1569]: Removed session 7. Sep 4 00:03:42.754893 sshd[1780]: Accepted publickey for core from 10.0.0.1 port 47778 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8 Sep 4 00:03:42.756729 sshd-session[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:03:42.762964 systemd-logind[1569]: New session 8 of user core. Sep 4 00:03:42.776886 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 4 00:03:42.833531 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 00:03:42.833868 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 00:03:42.840204 sudo[1784]: pam_unix(sudo:session): session closed for user root Sep 4 00:03:42.848170 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 4 00:03:42.848577 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 00:03:42.859307 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 00:03:42.910572 augenrules[1806]: No rules Sep 4 00:03:42.912522 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 00:03:42.912838 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 00:03:42.914070 sudo[1783]: pam_unix(sudo:session): session closed for user root Sep 4 00:03:42.915808 sshd[1782]: Connection closed by 10.0.0.1 port 47778 Sep 4 00:03:42.916066 sshd-session[1780]: pam_unix(sshd:session): session closed for user core Sep 4 00:03:42.928572 systemd[1]: sshd@7-10.0.0.105:22-10.0.0.1:47778.service: Deactivated successfully. Sep 4 00:03:42.930417 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 00:03:42.931239 systemd-logind[1569]: Session 8 logged out. Waiting for processes to exit. Sep 4 00:03:42.934256 systemd[1]: Started sshd@8-10.0.0.105:22-10.0.0.1:47788.service - OpenSSH per-connection server daemon (10.0.0.1:47788). Sep 4 00:03:42.934867 systemd-logind[1569]: Removed session 8. Sep 4 00:03:42.994132 sshd[1815]: Accepted publickey for core from 10.0.0.1 port 47788 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8 Sep 4 00:03:42.995947 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:03:43.000529 systemd-logind[1569]: New session 9 of user core. 
Sep 4 00:03:43.017049 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 00:03:43.071359 sudo[1818]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 00:03:43.071717 sudo[1818]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 00:03:43.195238 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 00:03:43.197053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:03:43.467452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:03:43.488018 (kubelet)[1845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 00:03:43.547550 kubelet[1845]: E0904 00:03:43.547461 1845 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 00:03:43.554342 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 00:03:43.554596 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 00:03:43.555086 systemd[1]: kubelet.service: Consumed 266ms CPU time, 110.6M memory peak. Sep 4 00:03:43.637778 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 4 00:03:43.664226 (dockerd)[1855]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 00:03:44.304859 dockerd[1855]: time="2025-09-04T00:03:44.304762766Z" level=info msg="Starting up" Sep 4 00:03:44.306974 dockerd[1855]: time="2025-09-04T00:03:44.306938816Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 4 00:03:44.697352 dockerd[1855]: time="2025-09-04T00:03:44.697285526Z" level=info msg="Loading containers: start." Sep 4 00:03:44.708725 kernel: Initializing XFRM netlink socket Sep 4 00:03:44.960319 systemd-networkd[1492]: docker0: Link UP Sep 4 00:03:44.966382 dockerd[1855]: time="2025-09-04T00:03:44.966325256Z" level=info msg="Loading containers: done." Sep 4 00:03:44.984765 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2076007993-merged.mount: Deactivated successfully. Sep 4 00:03:44.986140 dockerd[1855]: time="2025-09-04T00:03:44.986074426Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 00:03:44.986234 dockerd[1855]: time="2025-09-04T00:03:44.986215550Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 4 00:03:44.986423 dockerd[1855]: time="2025-09-04T00:03:44.986397852Z" level=info msg="Initializing buildkit" Sep 4 00:03:45.022449 dockerd[1855]: time="2025-09-04T00:03:45.022371422Z" level=info msg="Completed buildkit initialization" Sep 4 00:03:45.028197 dockerd[1855]: time="2025-09-04T00:03:45.028149657Z" level=info msg="Daemon has completed initialization" Sep 4 00:03:45.028295 dockerd[1855]: time="2025-09-04T00:03:45.028228985Z" level=info msg="API listen on /run/docker.sock" Sep 4 00:03:45.028435 systemd[1]: Started docker.service - Docker 
Application Container Engine. Sep 4 00:03:46.128866 containerd[1585]: time="2025-09-04T00:03:46.128810175Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 4 00:03:47.158588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount47026859.mount: Deactivated successfully. Sep 4 00:03:48.670415 containerd[1585]: time="2025-09-04T00:03:48.670101826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:03:48.675043 containerd[1585]: time="2025-09-04T00:03:48.674926602Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687" Sep 4 00:03:48.676936 containerd[1585]: time="2025-09-04T00:03:48.676855760Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:03:48.684589 containerd[1585]: time="2025-09-04T00:03:48.684485926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:03:48.686052 containerd[1585]: time="2025-09-04T00:03:48.685614813Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 2.556756879s" Sep 4 00:03:48.686052 containerd[1585]: time="2025-09-04T00:03:48.685666229Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 4 00:03:48.686805 containerd[1585]: 
time="2025-09-04T00:03:48.686751394Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 4 00:03:52.768411 containerd[1585]: time="2025-09-04T00:03:52.767901419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:03:52.773363 containerd[1585]: time="2025-09-04T00:03:52.770818779Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128" Sep 4 00:03:52.773571 containerd[1585]: time="2025-09-04T00:03:52.773046346Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:03:52.792317 containerd[1585]: time="2025-09-04T00:03:52.788412688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:03:52.792317 containerd[1585]: time="2025-09-04T00:03:52.791365795Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 4.104577431s" Sep 4 00:03:52.792317 containerd[1585]: time="2025-09-04T00:03:52.791797925Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 4 00:03:52.800281 containerd[1585]: time="2025-09-04T00:03:52.799939671Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 4 00:03:53.696208 
systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 00:03:53.699424 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:03:53.955421 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:03:53.978275 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 00:03:54.042328 kubelet[2133]: E0904 00:03:54.042255 2133 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 00:03:54.047383 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 00:03:54.047614 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 00:03:54.048113 systemd[1]: kubelet.service: Consumed 262ms CPU time, 111.6M memory peak. 
Sep 4 00:03:55.163566 containerd[1585]: time="2025-09-04T00:03:55.163477684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:03:55.164359 containerd[1585]: time="2025-09-04T00:03:55.164292772Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036" Sep 4 00:03:55.165597 containerd[1585]: time="2025-09-04T00:03:55.165544770Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:03:55.182433 containerd[1585]: time="2025-09-04T00:03:55.182376830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:03:55.183454 containerd[1585]: time="2025-09-04T00:03:55.183406681Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 2.383399654s" Sep 4 00:03:55.183454 containerd[1585]: time="2025-09-04T00:03:55.183445494Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 4 00:03:55.185772 containerd[1585]: time="2025-09-04T00:03:55.185743783Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 4 00:03:56.383125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1629425486.mount: Deactivated successfully. 
Sep 4 00:03:56.813892 containerd[1585]: time="2025-09-04T00:03:56.813837729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:56.814753 containerd[1585]: time="2025-09-04T00:03:56.814705436Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170"
Sep 4 00:03:56.815891 containerd[1585]: time="2025-09-04T00:03:56.815858638Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:56.817677 containerd[1585]: time="2025-09-04T00:03:56.817643885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:56.818244 containerd[1585]: time="2025-09-04T00:03:56.818189158Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 1.632410509s"
Sep 4 00:03:56.818275 containerd[1585]: time="2025-09-04T00:03:56.818242818Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\""
Sep 4 00:03:56.818781 containerd[1585]: time="2025-09-04T00:03:56.818754087Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 4 00:03:57.320013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3955448077.mount: Deactivated successfully.
Sep 4 00:03:59.381864 containerd[1585]: time="2025-09-04T00:03:59.381648593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:59.384264 containerd[1585]: time="2025-09-04T00:03:59.384120769Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 4 00:03:59.391798 containerd[1585]: time="2025-09-04T00:03:59.389513620Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:59.405806 containerd[1585]: time="2025-09-04T00:03:59.401938156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:59.405806 containerd[1585]: time="2025-09-04T00:03:59.403707784Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.584897983s"
Sep 4 00:03:59.405806 containerd[1585]: time="2025-09-04T00:03:59.405799967Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 4 00:03:59.406423 containerd[1585]: time="2025-09-04T00:03:59.406365027Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 4 00:04:00.454588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4040199751.mount: Deactivated successfully.
Sep 4 00:04:00.473242 containerd[1585]: time="2025-09-04T00:04:00.473146341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 00:04:00.474187 containerd[1585]: time="2025-09-04T00:04:00.474151306Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 4 00:04:00.476889 containerd[1585]: time="2025-09-04T00:04:00.476268646Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 00:04:00.479594 containerd[1585]: time="2025-09-04T00:04:00.479537064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 00:04:00.480331 containerd[1585]: time="2025-09-04T00:04:00.480284024Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.073883201s"
Sep 4 00:04:00.480331 containerd[1585]: time="2025-09-04T00:04:00.480324571Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 4 00:04:00.481054 containerd[1585]: time="2025-09-04T00:04:00.481009885Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 4 00:04:01.303005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1273092743.mount: Deactivated successfully.
Sep 4 00:04:04.195457 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 4 00:04:04.199304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 00:04:04.547499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:04:04.551515 (kubelet)[2270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 00:04:04.960162 kubelet[2270]: E0904 00:04:04.960075 2270 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 00:04:04.964535 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 00:04:04.964790 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 00:04:04.965256 systemd[1]: kubelet.service: Consumed 295ms CPU time, 113.2M memory peak.
Sep 4 00:04:05.220944 containerd[1585]: time="2025-09-04T00:04:05.220785737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:04:05.221621 containerd[1585]: time="2025-09-04T00:04:05.221571072Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Sep 4 00:04:05.222765 containerd[1585]: time="2025-09-04T00:04:05.222740078Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:04:05.225373 containerd[1585]: time="2025-09-04T00:04:05.225338964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:04:05.226301 containerd[1585]: time="2025-09-04T00:04:05.226275160Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.745234526s"
Sep 4 00:04:05.226340 containerd[1585]: time="2025-09-04T00:04:05.226305318Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Sep 4 00:04:07.787386 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:04:07.787628 systemd[1]: kubelet.service: Consumed 295ms CPU time, 113.2M memory peak.
Sep 4 00:04:07.790152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 00:04:07.816366 systemd[1]: Reload requested from client PID 2310 ('systemctl') (unit session-9.scope)...
Sep 4 00:04:07.816384 systemd[1]: Reloading...
Sep 4 00:04:07.911749 zram_generator::config[2356]: No configuration found.
Sep 4 00:04:08.093228 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 00:04:08.215066 systemd[1]: Reloading finished in 398 ms.
Sep 4 00:04:08.305657 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 4 00:04:08.305810 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 4 00:04:08.306183 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:04:08.306243 systemd[1]: kubelet.service: Consumed 160ms CPU time, 98.2M memory peak.
Sep 4 00:04:08.308237 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 00:04:08.522150 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:04:08.534101 (kubelet)[2401]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 00:04:08.574812 kubelet[2401]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 00:04:08.574812 kubelet[2401]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 4 00:04:08.574812 kubelet[2401]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 00:04:08.576723 kubelet[2401]: I0904 00:04:08.574867 2401 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 00:04:08.814485 kubelet[2401]: I0904 00:04:08.814354 2401 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 4 00:04:08.814485 kubelet[2401]: I0904 00:04:08.814386 2401 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 00:04:08.814647 kubelet[2401]: I0904 00:04:08.814630 2401 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 4 00:04:08.834711 kubelet[2401]: E0904 00:04:08.834642 2401 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError"
Sep 4 00:04:08.835901 kubelet[2401]: I0904 00:04:08.835881 2401 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 00:04:08.843135 kubelet[2401]: I0904 00:04:08.843107 2401 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 4 00:04:08.849450 kubelet[2401]: I0904 00:04:08.849405 2401 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 00:04:08.850811 kubelet[2401]: I0904 00:04:08.850756 2401 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 00:04:08.850968 kubelet[2401]: I0904 00:04:08.850797 2401 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 4 00:04:08.851098 kubelet[2401]: I0904 00:04:08.850976 2401 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 00:04:08.851098 kubelet[2401]: I0904 00:04:08.850986 2401 container_manager_linux.go:304] "Creating device plugin manager"
Sep 4 00:04:08.851144 kubelet[2401]: I0904 00:04:08.851122 2401 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 00:04:08.854666 kubelet[2401]: I0904 00:04:08.854631 2401 kubelet.go:446] "Attempting to sync node with API server"
Sep 4 00:04:08.854666 kubelet[2401]: I0904 00:04:08.854661 2401 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 00:04:08.854742 kubelet[2401]: I0904 00:04:08.854683 2401 kubelet.go:352] "Adding apiserver pod source"
Sep 4 00:04:08.854742 kubelet[2401]: I0904 00:04:08.854708 2401 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 00:04:08.857563 kubelet[2401]: I0904 00:04:08.857517 2401 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 4 00:04:08.857866 kubelet[2401]: I0904 00:04:08.857844 2401 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 00:04:08.858796 kubelet[2401]: W0904 00:04:08.858506 2401 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 00:04:08.859465 kubelet[2401]: W0904 00:04:08.859407 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused
Sep 4 00:04:08.859465 kubelet[2401]: E0904 00:04:08.859454 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError"
Sep 4 00:04:08.859560 kubelet[2401]: W0904 00:04:08.859487 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused
Sep 4 00:04:08.859560 kubelet[2401]: E0904 00:04:08.859511 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError"
Sep 4 00:04:08.860706 kubelet[2401]: I0904 00:04:08.860668 2401 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 4 00:04:08.860749 kubelet[2401]: I0904 00:04:08.860716 2401 server.go:1287] "Started kubelet"
Sep 4 00:04:08.860914 kubelet[2401]: I0904 00:04:08.860894 2401 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 00:04:08.861731 kubelet[2401]: I0904 00:04:08.861669 2401 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 00:04:08.861993 kubelet[2401]: I0904 00:04:08.861971 2401 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 00:04:08.862624 kubelet[2401]: I0904 00:04:08.862582 2401 server.go:479] "Adding debug handlers to kubelet server"
Sep 4 00:04:08.864732 kubelet[2401]: I0904 00:04:08.864051 2401 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 00:04:08.864732 kubelet[2401]: I0904 00:04:08.864158 2401 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 4 00:04:08.864732 kubelet[2401]: I0904 00:04:08.864211 2401 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 4 00:04:08.865867 kubelet[2401]: I0904 00:04:08.865842 2401 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 4 00:04:08.865906 kubelet[2401]: I0904 00:04:08.865887 2401 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 00:04:08.866412 kubelet[2401]: W0904 00:04:08.866379 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused
Sep 4 00:04:08.867719 kubelet[2401]: E0904 00:04:08.867417 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError"
Sep 4 00:04:08.867719 kubelet[2401]: E0904 00:04:08.867611 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 00:04:08.867890 kubelet[2401]: E0904 00:04:08.867867 2401 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 00:04:08.868161 kubelet[2401]: E0904 00:04:08.868127 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="200ms"
Sep 4 00:04:08.868504 kubelet[2401]: I0904 00:04:08.868481 2401 factory.go:221] Registration of the systemd container factory successfully
Sep 4 00:04:08.868582 kubelet[2401]: I0904 00:04:08.868563 2401 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 00:04:08.868780 kubelet[2401]: E0904 00:04:08.867748 2401 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.105:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.105:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1861eb7c5381aa4a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-04 00:04:08.860682826 +0000 UTC m=+0.322235548,LastTimestamp:2025-09-04 00:04:08.860682826 +0000 UTC m=+0.322235548,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 4 00:04:08.870702 kubelet[2401]: I0904 00:04:08.869882 2401 factory.go:221] Registration of the containerd container factory successfully
Sep 4 00:04:08.883194 kubelet[2401]: I0904 00:04:08.883164 2401 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 4 00:04:08.883194 kubelet[2401]: I0904 00:04:08.883183 2401 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 4 00:04:08.883194 kubelet[2401]: I0904 00:04:08.883200 2401 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 00:04:08.885272 kubelet[2401]: I0904 00:04:08.885243 2401 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 00:04:08.887153 kubelet[2401]: I0904 00:04:08.887130 2401 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 00:04:08.887153 kubelet[2401]: I0904 00:04:08.887151 2401 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 4 00:04:08.887235 kubelet[2401]: I0904 00:04:08.887169 2401 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 4 00:04:08.887235 kubelet[2401]: I0904 00:04:08.887177 2401 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 4 00:04:08.887235 kubelet[2401]: E0904 00:04:08.887219 2401 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 00:04:08.887901 kubelet[2401]: W0904 00:04:08.887827 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused
Sep 4 00:04:08.887901 kubelet[2401]: E0904 00:04:08.887857 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError"
Sep 4 00:04:08.967975 kubelet[2401]: E0904 00:04:08.967884 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 00:04:08.988234 kubelet[2401]: E0904 00:04:08.988176 2401 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 4 00:04:09.068791 kubelet[2401]: E0904 00:04:09.068622 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 00:04:09.069073 kubelet[2401]: E0904 00:04:09.069043 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="400ms"
Sep 4 00:04:09.169715 kubelet[2401]: E0904 00:04:09.169656 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 00:04:09.188939 kubelet[2401]: E0904 00:04:09.188879 2401 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 4 00:04:09.240413 kubelet[2401]: I0904 00:04:09.240360 2401 policy_none.go:49] "None policy: Start"
Sep 4 00:04:09.240413 kubelet[2401]: I0904 00:04:09.240404 2401 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 4 00:04:09.240413 kubelet[2401]: I0904 00:04:09.240418 2401 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 00:04:09.247362 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 4 00:04:09.266463 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 4 00:04:09.270649 kubelet[2401]: E0904 00:04:09.270604 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 00:04:09.279727 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 4 00:04:09.281163 kubelet[2401]: I0904 00:04:09.281141 2401 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 00:04:09.281385 kubelet[2401]: I0904 00:04:09.281368 2401 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 4 00:04:09.281456 kubelet[2401]: I0904 00:04:09.281382 2401 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 4 00:04:09.281636 kubelet[2401]: I0904 00:04:09.281617 2401 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 00:04:09.282682 kubelet[2401]: E0904 00:04:09.282621 2401 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 4 00:04:09.282682 kubelet[2401]: E0904 00:04:09.282664 2401 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 4 00:04:09.383436 kubelet[2401]: I0904 00:04:09.383298 2401 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 4 00:04:09.383753 kubelet[2401]: E0904 00:04:09.383722 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost"
Sep 4 00:04:09.469712 kubelet[2401]: E0904 00:04:09.469633 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="800ms"
Sep 4 00:04:09.586014 kubelet[2401]: I0904 00:04:09.585942 2401 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 4 00:04:09.586525 kubelet[2401]: E0904 00:04:09.586481 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost"
Sep 4 00:04:09.598884 systemd[1]: Created slice kubepods-burstable-pod00b375b76b42c0f080367060fa452111.slice - libcontainer container kubepods-burstable-pod00b375b76b42c0f080367060fa452111.slice.
Sep 4 00:04:09.620123 kubelet[2401]: E0904 00:04:09.620072 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 00:04:09.623164 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice.
Sep 4 00:04:09.642615 kubelet[2401]: E0904 00:04:09.642494 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 00:04:09.645537 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice.
Sep 4 00:04:09.647950 kubelet[2401]: E0904 00:04:09.647920 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 00:04:09.671532 kubelet[2401]: I0904 00:04:09.671479 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/00b375b76b42c0f080367060fa452111-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"00b375b76b42c0f080367060fa452111\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 00:04:09.671532 kubelet[2401]: I0904 00:04:09.671530 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/00b375b76b42c0f080367060fa452111-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"00b375b76b42c0f080367060fa452111\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 00:04:09.671737 kubelet[2401]: I0904 00:04:09.671564 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 00:04:09.671737 kubelet[2401]: I0904 00:04:09.671581 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/00b375b76b42c0f080367060fa452111-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"00b375b76b42c0f080367060fa452111\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 00:04:09.671737 kubelet[2401]: I0904 00:04:09.671633 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 00:04:09.671737 kubelet[2401]: I0904 00:04:09.671670 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 00:04:09.671737 kubelet[2401]: I0904 00:04:09.671721 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 00:04:09.671922 kubelet[2401]: I0904 00:04:09.671740 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 00:04:09.671922 kubelet[2401]: I0904 00:04:09.671770 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost"
Sep 4 00:04:09.862933 kubelet[2401]: W0904 00:04:09.862844 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused
Sep 4 00:04:09.862933 kubelet[2401]: E0904 00:04:09.862924 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError"
Sep 4 00:04:09.921797 kubelet[2401]: E0904 00:04:09.921510 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:04:09.922487 containerd[1585]: time="2025-09-04T00:04:09.922423007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:00b375b76b42c0f080367060fa452111,Namespace:kube-system,Attempt:0,}"
Sep 4 00:04:09.943680 kubelet[2401]: E0904 00:04:09.943629 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:04:09.944332 containerd[1585]: time="2025-09-04T00:04:09.944274976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}"
Sep 4 00:04:09.948492 kubelet[2401]: E0904 00:04:09.948464 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:04:09.948882 containerd[1585]: time="2025-09-04T00:04:09.948833917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}"
Sep 4 00:04:09.960503 kubelet[2401]: W0904 00:04:09.960476 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused
Sep 4 00:04:09.960576 kubelet[2401]: E0904 00:04:09.960510 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError"
Sep 4 00:04:09.988505 kubelet[2401]: I0904 00:04:09.988466 2401 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 4 00:04:09.989022 kubelet[2401]: E0904 00:04:09.988938 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost"
Sep 4 00:04:10.160556 kubelet[2401]: W0904 00:04:10.160464 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused
Sep 4 00:04:10.160556 kubelet[2401]: E0904 00:04:10.160542 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError"
Sep 4 00:04:10.199415 containerd[1585]: time="2025-09-04T00:04:10.199370473Z" level=info msg="connecting to shim 947686745b3af46372466537915b6d2ce699c3f181d0faa1824b0412d52f48d7"
address="unix:///run/containerd/s/8ca24392a322ef1e3753a01ee4304b1695b38dbfc652ecfdf3e798312efe7d22" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:10.203026 containerd[1585]: time="2025-09-04T00:04:10.202925590Z" level=info msg="connecting to shim 5c8a8931de6a8fc7f4fd6e8ada4d2324fe02578da882749d2bcb211ec6ee9b84" address="unix:///run/containerd/s/0d9fcd124225a307623c06ede90c7c34e690fd1be8bdcc2c00b894c31de5e606" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:10.209852 containerd[1585]: time="2025-09-04T00:04:10.209790354Z" level=info msg="connecting to shim 6c22149750de2aa84fedf05203daae4ff38a4badbf02aca5bd32740b33c73fac" address="unix:///run/containerd/s/1ec6ca188adfea8392edc41d03b9aada01ea28d8dfa644ed1e766091bb838e8a" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:10.234877 systemd[1]: Started cri-containerd-947686745b3af46372466537915b6d2ce699c3f181d0faa1824b0412d52f48d7.scope - libcontainer container 947686745b3af46372466537915b6d2ce699c3f181d0faa1824b0412d52f48d7. Sep 4 00:04:10.239251 systemd[1]: Started cri-containerd-5c8a8931de6a8fc7f4fd6e8ada4d2324fe02578da882749d2bcb211ec6ee9b84.scope - libcontainer container 5c8a8931de6a8fc7f4fd6e8ada4d2324fe02578da882749d2bcb211ec6ee9b84. Sep 4 00:04:10.240832 systemd[1]: Started cri-containerd-6c22149750de2aa84fedf05203daae4ff38a4badbf02aca5bd32740b33c73fac.scope - libcontainer container 6c22149750de2aa84fedf05203daae4ff38a4badbf02aca5bd32740b33c73fac. 
Sep 4 00:04:10.270665 kubelet[2401]: E0904 00:04:10.270594 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="1.6s"
Sep 4 00:04:10.274528 kubelet[2401]: W0904 00:04:10.274465 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused
Sep 4 00:04:10.274598 kubelet[2401]: E0904 00:04:10.274533 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError"
Sep 4 00:04:10.289137 containerd[1585]: time="2025-09-04T00:04:10.289015490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:00b375b76b42c0f080367060fa452111,Namespace:kube-system,Attempt:0,} returns sandbox id \"947686745b3af46372466537915b6d2ce699c3f181d0faa1824b0412d52f48d7\""
Sep 4 00:04:10.290503 kubelet[2401]: E0904 00:04:10.290471 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:04:10.292413 containerd[1585]: time="2025-09-04T00:04:10.292383839Z" level=info msg="CreateContainer within sandbox \"947686745b3af46372466537915b6d2ce699c3f181d0faa1824b0412d52f48d7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 4 00:04:10.302925 containerd[1585]: time="2025-09-04T00:04:10.302875066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c8a8931de6a8fc7f4fd6e8ada4d2324fe02578da882749d2bcb211ec6ee9b84\""
Sep 4 00:04:10.303448 kubelet[2401]: E0904 00:04:10.303426 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:04:10.305228 containerd[1585]: time="2025-09-04T00:04:10.305201902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c22149750de2aa84fedf05203daae4ff38a4badbf02aca5bd32740b33c73fac\""
Sep 4 00:04:10.305552 containerd[1585]: time="2025-09-04T00:04:10.305514821Z" level=info msg="CreateContainer within sandbox \"5c8a8931de6a8fc7f4fd6e8ada4d2324fe02578da882749d2bcb211ec6ee9b84\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 4 00:04:10.306103 kubelet[2401]: E0904 00:04:10.306076 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:04:10.307890 containerd[1585]: time="2025-09-04T00:04:10.307828541Z" level=info msg="CreateContainer within sandbox \"6c22149750de2aa84fedf05203daae4ff38a4badbf02aca5bd32740b33c73fac\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 4 00:04:10.308145 containerd[1585]: time="2025-09-04T00:04:10.308117384Z" level=info msg="Container 83df9f82380e6b7bc65c69ff39a808d5c9d257622e6d1a983caa70644c83316d: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:04:10.316727 containerd[1585]: time="2025-09-04T00:04:10.316523569Z" level=info msg="Container c202b2d5dace567ea4651d3e4d36290f2d294a8fcaefd101a408f7e83c4842e0: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:04:10.317535 containerd[1585]: time="2025-09-04T00:04:10.317501622Z" level=info msg="CreateContainer within sandbox \"947686745b3af46372466537915b6d2ce699c3f181d0faa1824b0412d52f48d7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"83df9f82380e6b7bc65c69ff39a808d5c9d257622e6d1a983caa70644c83316d\""
Sep 4 00:04:10.318228 containerd[1585]: time="2025-09-04T00:04:10.318188528Z" level=info msg="StartContainer for \"83df9f82380e6b7bc65c69ff39a808d5c9d257622e6d1a983caa70644c83316d\""
Sep 4 00:04:10.319667 containerd[1585]: time="2025-09-04T00:04:10.319626912Z" level=info msg="connecting to shim 83df9f82380e6b7bc65c69ff39a808d5c9d257622e6d1a983caa70644c83316d" address="unix:///run/containerd/s/8ca24392a322ef1e3753a01ee4304b1695b38dbfc652ecfdf3e798312efe7d22" protocol=ttrpc version=3
Sep 4 00:04:10.323719 containerd[1585]: time="2025-09-04T00:04:10.323144797Z" level=info msg="Container 0dad8d8343acc983253147d43e80060003adf9711b90fabf5ac99d8117fccc4b: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:04:10.334074 containerd[1585]: time="2025-09-04T00:04:10.334017095Z" level=info msg="CreateContainer within sandbox \"5c8a8931de6a8fc7f4fd6e8ada4d2324fe02578da882749d2bcb211ec6ee9b84\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c202b2d5dace567ea4651d3e4d36290f2d294a8fcaefd101a408f7e83c4842e0\""
Sep 4 00:04:10.334517 containerd[1585]: time="2025-09-04T00:04:10.334488458Z" level=info msg="StartContainer for \"c202b2d5dace567ea4651d3e4d36290f2d294a8fcaefd101a408f7e83c4842e0\""
Sep 4 00:04:10.336046 containerd[1585]: time="2025-09-04T00:04:10.335907254Z" level=info msg="connecting to shim c202b2d5dace567ea4651d3e4d36290f2d294a8fcaefd101a408f7e83c4842e0" address="unix:///run/containerd/s/0d9fcd124225a307623c06ede90c7c34e690fd1be8bdcc2c00b894c31de5e606" protocol=ttrpc version=3
Sep 4 00:04:10.336651 containerd[1585]: time="2025-09-04T00:04:10.336626582Z" level=info msg="CreateContainer within sandbox \"6c22149750de2aa84fedf05203daae4ff38a4badbf02aca5bd32740b33c73fac\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0dad8d8343acc983253147d43e80060003adf9711b90fabf5ac99d8117fccc4b\""
Sep 4 00:04:10.337315 containerd[1585]: time="2025-09-04T00:04:10.337290523Z" level=info msg="StartContainer for \"0dad8d8343acc983253147d43e80060003adf9711b90fabf5ac99d8117fccc4b\""
Sep 4 00:04:10.338319 containerd[1585]: time="2025-09-04T00:04:10.338298302Z" level=info msg="connecting to shim 0dad8d8343acc983253147d43e80060003adf9711b90fabf5ac99d8117fccc4b" address="unix:///run/containerd/s/1ec6ca188adfea8392edc41d03b9aada01ea28d8dfa644ed1e766091bb838e8a" protocol=ttrpc version=3
Sep 4 00:04:10.342860 systemd[1]: Started cri-containerd-83df9f82380e6b7bc65c69ff39a808d5c9d257622e6d1a983caa70644c83316d.scope - libcontainer container 83df9f82380e6b7bc65c69ff39a808d5c9d257622e6d1a983caa70644c83316d.
Sep 4 00:04:10.359842 systemd[1]: Started cri-containerd-c202b2d5dace567ea4651d3e4d36290f2d294a8fcaefd101a408f7e83c4842e0.scope - libcontainer container c202b2d5dace567ea4651d3e4d36290f2d294a8fcaefd101a408f7e83c4842e0.
Sep 4 00:04:10.374845 systemd[1]: Started cri-containerd-0dad8d8343acc983253147d43e80060003adf9711b90fabf5ac99d8117fccc4b.scope - libcontainer container 0dad8d8343acc983253147d43e80060003adf9711b90fabf5ac99d8117fccc4b.
Sep 4 00:04:10.421271 containerd[1585]: time="2025-09-04T00:04:10.421137998Z" level=info msg="StartContainer for \"83df9f82380e6b7bc65c69ff39a808d5c9d257622e6d1a983caa70644c83316d\" returns successfully"
Sep 4 00:04:10.426102 containerd[1585]: time="2025-09-04T00:04:10.426046166Z" level=info msg="StartContainer for \"c202b2d5dace567ea4651d3e4d36290f2d294a8fcaefd101a408f7e83c4842e0\" returns successfully"
Sep 4 00:04:10.454392 containerd[1585]: time="2025-09-04T00:04:10.453631955Z" level=info msg="StartContainer for \"0dad8d8343acc983253147d43e80060003adf9711b90fabf5ac99d8117fccc4b\" returns successfully"
Sep 4 00:04:10.791604 kubelet[2401]: I0904 00:04:10.791210 2401 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 4 00:04:10.897714 kubelet[2401]: E0904 00:04:10.897615 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 00:04:10.898862 kubelet[2401]: E0904 00:04:10.898797 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:04:10.901847 kubelet[2401]: E0904 00:04:10.901761 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 00:04:10.902303 kubelet[2401]: E0904 00:04:10.902285 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:04:10.903184 kubelet[2401]: E0904 00:04:10.903154 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 00:04:10.903260 kubelet[2401]: E0904 00:04:10.903239 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:04:11.906212 kubelet[2401]: E0904 00:04:11.906138 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 00:04:11.906836 kubelet[2401]: E0904 00:04:11.906472 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:04:11.906836 kubelet[2401]: E0904 00:04:11.906757 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 00:04:11.906947 kubelet[2401]: E0904 00:04:11.906848 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:04:12.136485 kubelet[2401]: E0904 00:04:12.136414 2401 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 4 00:04:12.233221 kubelet[2401]: I0904 00:04:12.233172 2401 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 4 00:04:12.268296 kubelet[2401]: I0904 00:04:12.268251 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 4 00:04:12.274785 kubelet[2401]: E0904 00:04:12.274753 2401 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 4 00:04:12.274785 kubelet[2401]: I0904 00:04:12.274774 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 4 00:04:12.276385 kubelet[2401]: E0904 00:04:12.276352 2401 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 4 00:04:12.276385 kubelet[2401]: I0904 00:04:12.276367 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 4 00:04:12.277475 kubelet[2401]: E0904 00:04:12.277455 2401 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 4 00:04:12.858876 kubelet[2401]: I0904 00:04:12.858796 2401 apiserver.go:52] "Watching apiserver"
Sep 4 00:04:12.866413 kubelet[2401]: I0904 00:04:12.866354 2401 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 4 00:04:12.906893 kubelet[2401]: I0904 00:04:12.906855 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 4 00:04:12.909915 kubelet[2401]: E0904 00:04:12.909539 2401 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 4 00:04:12.909915 kubelet[2401]: E0904 00:04:12.909732 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:04:14.808733 update_engine[1573]: I20250904 00:04:14.808192 1573 update_attempter.cc:509] Updating boot flags...
Sep 4 00:04:16.018249 systemd[1]: Reload requested from client PID 2695 ('systemctl') (unit session-9.scope)...
Sep 4 00:04:16.018273 systemd[1]: Reloading...
Sep 4 00:04:16.155922 zram_generator::config[2738]: No configuration found.
Sep 4 00:04:16.325884 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 00:04:16.510628 systemd[1]: Reloading finished in 491 ms.
Sep 4 00:04:16.545605 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 00:04:16.559786 systemd[1]: kubelet.service: Deactivated successfully.
Sep 4 00:04:16.560296 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:04:16.560379 systemd[1]: kubelet.service: Consumed 1.012s CPU time, 133.8M memory peak.
Sep 4 00:04:16.563477 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 00:04:16.778009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:04:16.795249 (kubelet)[2783]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 00:04:16.840584 kubelet[2783]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 00:04:16.840584 kubelet[2783]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 4 00:04:16.840584 kubelet[2783]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 00:04:16.841020 kubelet[2783]: I0904 00:04:16.840621 2783 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 00:04:16.847705 kubelet[2783]: I0904 00:04:16.847430 2783 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 4 00:04:16.847705 kubelet[2783]: I0904 00:04:16.847457 2783 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 00:04:16.847872 kubelet[2783]: I0904 00:04:16.847855 2783 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 4 00:04:16.849765 kubelet[2783]: I0904 00:04:16.849637 2783 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 4 00:04:16.851964 kubelet[2783]: I0904 00:04:16.851918 2783 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 00:04:16.855472 kubelet[2783]: I0904 00:04:16.855452 2783 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 4 00:04:16.860355 kubelet[2783]: I0904 00:04:16.860313 2783 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 00:04:16.860566 kubelet[2783]: I0904 00:04:16.860525 2783 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 00:04:16.860737 kubelet[2783]: I0904 00:04:16.860556 2783 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 4 00:04:16.860737 kubelet[2783]: I0904 00:04:16.860736 2783 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 00:04:16.860873 kubelet[2783]: I0904 00:04:16.860745 2783 container_manager_linux.go:304] "Creating device plugin manager"
Sep 4 00:04:16.860873 kubelet[2783]: I0904 00:04:16.860797 2783 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 00:04:16.860955 kubelet[2783]: I0904 00:04:16.860940 2783 kubelet.go:446] "Attempting to sync node with API server"
Sep 4 00:04:16.860982 kubelet[2783]: I0904 00:04:16.860965 2783 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 00:04:16.861016 kubelet[2783]: I0904 00:04:16.860985 2783 kubelet.go:352] "Adding apiserver pod source"
Sep 4 00:04:16.861016 kubelet[2783]: I0904 00:04:16.860995 2783 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 00:04:16.861705 kubelet[2783]: I0904 00:04:16.861425 2783 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 4 00:04:16.861789 kubelet[2783]: I0904 00:04:16.861759 2783 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 00:04:16.863516 kubelet[2783]: I0904 00:04:16.863498 2783 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 4 00:04:16.863616 kubelet[2783]: I0904 00:04:16.863604 2783 server.go:1287] "Started kubelet"
Sep 4 00:04:16.867428 kubelet[2783]: I0904 00:04:16.867360 2783 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 00:04:16.867747 kubelet[2783]: I0904 00:04:16.867683 2783 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 00:04:16.868779 kubelet[2783]: I0904 00:04:16.868405 2783 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 00:04:16.869593 kubelet[2783]: I0904 00:04:16.869568 2783 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 00:04:16.871896 kubelet[2783]: E0904 00:04:16.871854 2783 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 00:04:16.873139 kubelet[2783]: I0904 00:04:16.873109 2783 server.go:479] "Adding debug handlers to kubelet server"
Sep 4 00:04:16.873985 kubelet[2783]: I0904 00:04:16.873961 2783 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 4 00:04:16.876848 kubelet[2783]: I0904 00:04:16.876829 2783 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 4 00:04:16.877083 kubelet[2783]: E0904 00:04:16.877065 2783 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 00:04:16.877422 kubelet[2783]: I0904 00:04:16.877408 2783 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 4 00:04:16.877635 kubelet[2783]: I0904 00:04:16.877622 2783 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 00:04:16.877706 kubelet[2783]: I0904 00:04:16.877667 2783 factory.go:221] Registration of the systemd container factory successfully
Sep 4 00:04:16.877860 kubelet[2783]: I0904 00:04:16.877840 2783 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 00:04:16.879554 kubelet[2783]: I0904 00:04:16.879525 2783 factory.go:221] Registration of the containerd container factory successfully
Sep 4 00:04:16.881070 kubelet[2783]: I0904 00:04:16.881020 2783 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 00:04:16.882789 kubelet[2783]: I0904 00:04:16.882762 2783 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 00:04:16.882836 kubelet[2783]: I0904 00:04:16.882793 2783 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 4 00:04:16.882836 kubelet[2783]: I0904 00:04:16.882812 2783 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 4 00:04:16.882836 kubelet[2783]: I0904 00:04:16.882820 2783 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 4 00:04:16.882907 kubelet[2783]: E0904 00:04:16.882863 2783 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 00:04:16.924341 kubelet[2783]: I0904 00:04:16.924288 2783 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 4 00:04:16.924341 kubelet[2783]: I0904 00:04:16.924309 2783 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 4 00:04:16.924341 kubelet[2783]: I0904 00:04:16.924330 2783 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 00:04:16.924556 kubelet[2783]: I0904 00:04:16.924483 2783 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 4 00:04:16.924556 kubelet[2783]: I0904 00:04:16.924494 2783 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 4 00:04:16.924556 kubelet[2783]: I0904 00:04:16.924512 2783 policy_none.go:49] "None policy: Start"
Sep 4 00:04:16.924556 kubelet[2783]: I0904 00:04:16.924522 2783 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 4 00:04:16.924556 kubelet[2783]: I0904 00:04:16.924533 2783 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 00:04:16.924671 kubelet[2783]: I0904 00:04:16.924620 2783 state_mem.go:75] "Updated machine memory state"
Sep 4 00:04:16.929249 kubelet[2783]: I0904 00:04:16.929202 2783 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 00:04:16.929411 kubelet[2783]: I0904 00:04:16.929373 2783 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 4 00:04:16.929411 kubelet[2783]: I0904 00:04:16.929384 2783 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 4 00:04:16.929588 kubelet[2783]: I0904 00:04:16.929573 2783 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 00:04:16.931747 kubelet[2783]: E0904 00:04:16.931089 2783 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 4 00:04:16.984188 sudo[2816]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 4 00:04:16.984573 kubelet[2783]: I0904 00:04:16.984200 2783 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 4 00:04:16.984573 kubelet[2783]: I0904 00:04:16.984276 2783 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 4 00:04:16.984573 kubelet[2783]: I0904 00:04:16.984317 2783 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 4 00:04:16.985122 sudo[2816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Sep 4 00:04:17.035993 kubelet[2783]: I0904 00:04:17.035888 2783 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 4 00:04:17.078610 kubelet[2783]: I0904 00:04:17.078544 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/00b375b76b42c0f080367060fa452111-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"00b375b76b42c0f080367060fa452111\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 00:04:17.078610 kubelet[2783]: I0904 00:04:17.078587 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 00:04:17.078610 kubelet[2783]: I0904 00:04:17.078604 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 00:04:17.078610 kubelet[2783]: I0904 00:04:17.078621 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 00:04:17.078899 kubelet[2783]: I0904 00:04:17.078636 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost"
Sep 4 00:04:17.078899 kubelet[2783]: I0904 00:04:17.078650 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/00b375b76b42c0f080367060fa452111-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"00b375b76b42c0f080367060fa452111\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 00:04:17.078899 kubelet[2783]: I0904 00:04:17.078663 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 00:04:17.078899 kubelet[2783]: I0904 00:04:17.078678 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 00:04:17.078899 kubelet[2783]: I0904 00:04:17.078707 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/00b375b76b42c0f080367060fa452111-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"00b375b76b42c0f080367060fa452111\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 00:04:17.167885 kubelet[2783]: I0904 00:04:17.167835 2783 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 4 00:04:17.168075 kubelet[2783]: I0904 00:04:17.167939 2783 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 4 00:04:17.303239 kubelet[2783]: E0904 00:04:17.292924 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:04:17.303239 kubelet[2783]: E0904 00:04:17.293387 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:04:17.303239 kubelet[2783]: E0904 00:04:17.293504 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:17.718368 sudo[2816]: pam_unix(sudo:session): session closed for user root Sep 4 00:04:17.862344 kubelet[2783]: I0904 00:04:17.862260 2783 apiserver.go:52] "Watching apiserver" Sep 4 00:04:17.878080 kubelet[2783]: I0904 00:04:17.877992 2783 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 00:04:17.903770 kubelet[2783]: E0904 00:04:17.903714 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:17.903975 kubelet[2783]: E0904 00:04:17.903955 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:17.904209 kubelet[2783]: E0904 00:04:17.904175 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:17.937653 kubelet[2783]: I0904 00:04:17.937574 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.937503808 podStartE2EDuration="1.937503808s" podCreationTimestamp="2025-09-04 00:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:04:17.929565383 +0000 UTC m=+1.130028651" watchObservedRunningTime="2025-09-04 00:04:17.937503808 +0000 UTC m=+1.137967076" Sep 4 00:04:17.938337 kubelet[2783]: I0904 00:04:17.938279 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.938271507 podStartE2EDuration="1.938271507s" podCreationTimestamp="2025-09-04 00:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:04:17.937444907 +0000 UTC m=+1.137908185" watchObservedRunningTime="2025-09-04 00:04:17.938271507 +0000 UTC m=+1.138734775" Sep 4 00:04:18.905221 kubelet[2783]: E0904 00:04:18.905179 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:18.905721 kubelet[2783]: E0904 00:04:18.905280 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:19.085607 sudo[1818]: pam_unix(sudo:session): session closed for user root Sep 4 00:04:19.087167 sshd[1817]: Connection closed by 10.0.0.1 port 47788 Sep 4 00:04:19.087636 sshd-session[1815]: pam_unix(sshd:session): session closed for user core Sep 4 00:04:19.092282 systemd[1]: sshd@8-10.0.0.105:22-10.0.0.1:47788.service: Deactivated successfully. Sep 4 00:04:19.094922 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 00:04:19.095220 systemd[1]: session-9.scope: Consumed 5.334s CPU time, 260.1M memory peak. Sep 4 00:04:19.096623 systemd-logind[1569]: Session 9 logged out. Waiting for processes to exit. Sep 4 00:04:19.098386 systemd-logind[1569]: Removed session 9. Sep 4 00:04:19.851280 kubelet[2783]: I0904 00:04:19.851208 2783 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 00:04:19.851620 containerd[1585]: time="2025-09-04T00:04:19.851572065Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 4 00:04:19.852287 kubelet[2783]: I0904 00:04:19.851777 2783 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 00:04:19.907384 kubelet[2783]: E0904 00:04:19.907341 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:20.227475 kubelet[2783]: E0904 00:04:20.227408 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:20.240370 kubelet[2783]: I0904 00:04:20.240297 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.240278049 podStartE2EDuration="4.240278049s" podCreationTimestamp="2025-09-04 00:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:04:17.948406928 +0000 UTC m=+1.148870206" watchObservedRunningTime="2025-09-04 00:04:20.240278049 +0000 UTC m=+3.440741348" Sep 4 00:04:20.908632 kubelet[2783]: E0904 00:04:20.908527 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:21.023871 kubelet[2783]: I0904 00:04:21.023775 2783 status_manager.go:890] "Failed to get status for pod" podUID="a7ac77fa-11db-4e2b-8600-db01632cd90a" pod="kube-system/kube-proxy-f6fkb" err="pods \"kube-proxy-f6fkb\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Sep 4 00:04:21.032957 systemd[1]: Created slice kubepods-besteffort-poda7ac77fa_11db_4e2b_8600_db01632cd90a.slice - libcontainer container 
kubepods-besteffort-poda7ac77fa_11db_4e2b_8600_db01632cd90a.slice. Sep 4 00:04:21.050114 systemd[1]: Created slice kubepods-burstable-podab33ad0f_9561_4a15_a7bd_d964794c3b10.slice - libcontainer container kubepods-burstable-podab33ad0f_9561_4a15_a7bd_d964794c3b10.slice. Sep 4 00:04:21.059172 systemd[1]: Created slice kubepods-besteffort-pod5bb845bd_a327_4339_bbb4_cf32dba7a170.slice - libcontainer container kubepods-besteffort-pod5bb845bd_a327_4339_bbb4_cf32dba7a170.slice. Sep 4 00:04:21.114591 kubelet[2783]: I0904 00:04:21.114500 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7ac77fa-11db-4e2b-8600-db01632cd90a-xtables-lock\") pod \"kube-proxy-f6fkb\" (UID: \"a7ac77fa-11db-4e2b-8600-db01632cd90a\") " pod="kube-system/kube-proxy-f6fkb" Sep 4 00:04:21.114591 kubelet[2783]: I0904 00:04:21.114571 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-cilium-run\") pod \"cilium-k7bt6\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") " pod="kube-system/cilium-k7bt6" Sep 4 00:04:21.114591 kubelet[2783]: I0904 00:04:21.114598 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-cni-path\") pod \"cilium-k7bt6\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") " pod="kube-system/cilium-k7bt6" Sep 4 00:04:21.114885 kubelet[2783]: I0904 00:04:21.114621 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl7nt\" (UniqueName: \"kubernetes.io/projected/ab33ad0f-9561-4a15-a7bd-d964794c3b10-kube-api-access-xl7nt\") pod \"cilium-k7bt6\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") " pod="kube-system/cilium-k7bt6" Sep 4 00:04:21.114885 
kubelet[2783]: I0904 00:04:21.114646 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7ac77fa-11db-4e2b-8600-db01632cd90a-lib-modules\") pod \"kube-proxy-f6fkb\" (UID: \"a7ac77fa-11db-4e2b-8600-db01632cd90a\") " pod="kube-system/kube-proxy-f6fkb" Sep 4 00:04:21.114885 kubelet[2783]: I0904 00:04:21.114675 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab33ad0f-9561-4a15-a7bd-d964794c3b10-clustermesh-secrets\") pod \"cilium-k7bt6\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") " pod="kube-system/cilium-k7bt6" Sep 4 00:04:21.114885 kubelet[2783]: I0904 00:04:21.114725 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab33ad0f-9561-4a15-a7bd-d964794c3b10-hubble-tls\") pod \"cilium-k7bt6\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") " pod="kube-system/cilium-k7bt6" Sep 4 00:04:21.114885 kubelet[2783]: I0904 00:04:21.114748 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5bb845bd-a327-4339-bbb4-cf32dba7a170-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-rznks\" (UID: \"5bb845bd-a327-4339-bbb4-cf32dba7a170\") " pod="kube-system/cilium-operator-6c4d7847fc-rznks" Sep 4 00:04:21.115094 kubelet[2783]: I0904 00:04:21.114767 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-bpf-maps\") pod \"cilium-k7bt6\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") " pod="kube-system/cilium-k7bt6" Sep 4 00:04:21.115094 kubelet[2783]: I0904 00:04:21.114796 2783 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-etc-cni-netd\") pod \"cilium-k7bt6\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") " pod="kube-system/cilium-k7bt6" Sep 4 00:04:21.115094 kubelet[2783]: I0904 00:04:21.114817 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-lib-modules\") pod \"cilium-k7bt6\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") " pod="kube-system/cilium-k7bt6" Sep 4 00:04:21.115094 kubelet[2783]: I0904 00:04:21.114849 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab33ad0f-9561-4a15-a7bd-d964794c3b10-cilium-config-path\") pod \"cilium-k7bt6\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") " pod="kube-system/cilium-k7bt6" Sep 4 00:04:21.115094 kubelet[2783]: I0904 00:04:21.114917 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-cilium-cgroup\") pod \"cilium-k7bt6\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") " pod="kube-system/cilium-k7bt6" Sep 4 00:04:21.115094 kubelet[2783]: I0904 00:04:21.114977 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xv49\" (UniqueName: \"kubernetes.io/projected/a7ac77fa-11db-4e2b-8600-db01632cd90a-kube-api-access-7xv49\") pod \"kube-proxy-f6fkb\" (UID: \"a7ac77fa-11db-4e2b-8600-db01632cd90a\") " pod="kube-system/kube-proxy-f6fkb" Sep 4 00:04:21.115288 kubelet[2783]: I0904 00:04:21.115004 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-hostproc\") pod \"cilium-k7bt6\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") " pod="kube-system/cilium-k7bt6" Sep 4 00:04:21.115288 kubelet[2783]: I0904 00:04:21.115045 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-host-proc-sys-net\") pod \"cilium-k7bt6\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") " pod="kube-system/cilium-k7bt6" Sep 4 00:04:21.115288 kubelet[2783]: I0904 00:04:21.115076 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-host-proc-sys-kernel\") pod \"cilium-k7bt6\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") " pod="kube-system/cilium-k7bt6" Sep 4 00:04:21.115288 kubelet[2783]: I0904 00:04:21.115103 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-xtables-lock\") pod \"cilium-k7bt6\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") " pod="kube-system/cilium-k7bt6" Sep 4 00:04:21.115288 kubelet[2783]: I0904 00:04:21.115127 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a7ac77fa-11db-4e2b-8600-db01632cd90a-kube-proxy\") pod \"kube-proxy-f6fkb\" (UID: \"a7ac77fa-11db-4e2b-8600-db01632cd90a\") " pod="kube-system/kube-proxy-f6fkb" Sep 4 00:04:21.115429 kubelet[2783]: I0904 00:04:21.115169 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sct69\" (UniqueName: \"kubernetes.io/projected/5bb845bd-a327-4339-bbb4-cf32dba7a170-kube-api-access-sct69\") pod 
\"cilium-operator-6c4d7847fc-rznks\" (UID: \"5bb845bd-a327-4339-bbb4-cf32dba7a170\") " pod="kube-system/cilium-operator-6c4d7847fc-rznks" Sep 4 00:04:21.343726 kubelet[2783]: E0904 00:04:21.343385 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:21.348591 containerd[1585]: time="2025-09-04T00:04:21.348003184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f6fkb,Uid:a7ac77fa-11db-4e2b-8600-db01632cd90a,Namespace:kube-system,Attempt:0,}" Sep 4 00:04:21.358597 kubelet[2783]: E0904 00:04:21.358536 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:21.359878 containerd[1585]: time="2025-09-04T00:04:21.359286251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k7bt6,Uid:ab33ad0f-9561-4a15-a7bd-d964794c3b10,Namespace:kube-system,Attempt:0,}" Sep 4 00:04:21.367255 kubelet[2783]: E0904 00:04:21.366922 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:21.368734 containerd[1585]: time="2025-09-04T00:04:21.368365572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rznks,Uid:5bb845bd-a327-4339-bbb4-cf32dba7a170,Namespace:kube-system,Attempt:0,}" Sep 4 00:04:21.464153 containerd[1585]: time="2025-09-04T00:04:21.464064627Z" level=info msg="connecting to shim 797400924b759c96a01921cb49d7552a639961e97844b578f2a232611dd746e7" address="unix:///run/containerd/s/d543a9d5604228834d8ca543d3223a8c55a8680985c505861de9e8d2630fb328" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:21.530103 systemd[1]: Started cri-containerd-797400924b759c96a01921cb49d7552a639961e97844b578f2a232611dd746e7.scope - 
libcontainer container 797400924b759c96a01921cb49d7552a639961e97844b578f2a232611dd746e7. Sep 4 00:04:21.739978 containerd[1585]: time="2025-09-04T00:04:21.739916528Z" level=info msg="connecting to shim 9359e9ffae6c20788585f071e093b5393603bbf865f8178e8b695544aa1c8078" address="unix:///run/containerd/s/efa4abd54a8907756fc3f55dc826b2a3b887e14e30e72f08cc6f8aab5897d125" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:21.769873 systemd[1]: Started cri-containerd-9359e9ffae6c20788585f071e093b5393603bbf865f8178e8b695544aa1c8078.scope - libcontainer container 9359e9ffae6c20788585f071e093b5393603bbf865f8178e8b695544aa1c8078. Sep 4 00:04:21.793418 containerd[1585]: time="2025-09-04T00:04:21.793367292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f6fkb,Uid:a7ac77fa-11db-4e2b-8600-db01632cd90a,Namespace:kube-system,Attempt:0,} returns sandbox id \"797400924b759c96a01921cb49d7552a639961e97844b578f2a232611dd746e7\"" Sep 4 00:04:21.795430 kubelet[2783]: E0904 00:04:21.795390 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:21.798821 containerd[1585]: time="2025-09-04T00:04:21.798779236Z" level=info msg="CreateContainer within sandbox \"797400924b759c96a01921cb49d7552a639961e97844b578f2a232611dd746e7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 00:04:21.813278 containerd[1585]: time="2025-09-04T00:04:21.813201409Z" level=info msg="Container 84a7a699e1d480e90779c36fed4663f0030b215e16350b7d0873c13dbe118465: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:21.819897 containerd[1585]: time="2025-09-04T00:04:21.819835928Z" level=info msg="connecting to shim fb3c47dc082ccfa7131edd0315781e7fe371e9738a2683ce1a24c45396f12845" address="unix:///run/containerd/s/502937af64403583500964cd30472d8dcc7ad09780235d3864c2620943a30998" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:21.823951 
containerd[1585]: time="2025-09-04T00:04:21.823901932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rznks,Uid:5bb845bd-a327-4339-bbb4-cf32dba7a170,Namespace:kube-system,Attempt:0,} returns sandbox id \"9359e9ffae6c20788585f071e093b5393603bbf865f8178e8b695544aa1c8078\"" Sep 4 00:04:21.825614 containerd[1585]: time="2025-09-04T00:04:21.825572236Z" level=info msg="CreateContainer within sandbox \"797400924b759c96a01921cb49d7552a639961e97844b578f2a232611dd746e7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"84a7a699e1d480e90779c36fed4663f0030b215e16350b7d0873c13dbe118465\"" Sep 4 00:04:21.826401 kubelet[2783]: E0904 00:04:21.826361 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:21.827096 containerd[1585]: time="2025-09-04T00:04:21.827077558Z" level=info msg="StartContainer for \"84a7a699e1d480e90779c36fed4663f0030b215e16350b7d0873c13dbe118465\"" Sep 4 00:04:21.828542 containerd[1585]: time="2025-09-04T00:04:21.828517836Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 4 00:04:21.832761 containerd[1585]: time="2025-09-04T00:04:21.832725808Z" level=info msg="connecting to shim 84a7a699e1d480e90779c36fed4663f0030b215e16350b7d0873c13dbe118465" address="unix:///run/containerd/s/d543a9d5604228834d8ca543d3223a8c55a8680985c505861de9e8d2630fb328" protocol=ttrpc version=3 Sep 4 00:04:21.849883 systemd[1]: Started cri-containerd-fb3c47dc082ccfa7131edd0315781e7fe371e9738a2683ce1a24c45396f12845.scope - libcontainer container fb3c47dc082ccfa7131edd0315781e7fe371e9738a2683ce1a24c45396f12845. 
Sep 4 00:04:21.868835 systemd[1]: Started cri-containerd-84a7a699e1d480e90779c36fed4663f0030b215e16350b7d0873c13dbe118465.scope - libcontainer container 84a7a699e1d480e90779c36fed4663f0030b215e16350b7d0873c13dbe118465. Sep 4 00:04:21.916876 containerd[1585]: time="2025-09-04T00:04:21.916815399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k7bt6,Uid:ab33ad0f-9561-4a15-a7bd-d964794c3b10,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb3c47dc082ccfa7131edd0315781e7fe371e9738a2683ce1a24c45396f12845\"" Sep 4 00:04:21.917577 kubelet[2783]: E0904 00:04:21.917531 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:22.173503 containerd[1585]: time="2025-09-04T00:04:22.173340795Z" level=info msg="StartContainer for \"84a7a699e1d480e90779c36fed4663f0030b215e16350b7d0873c13dbe118465\" returns successfully" Sep 4 00:04:22.920570 kubelet[2783]: E0904 00:04:22.920523 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:23.339000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3764367157.mount: Deactivated successfully. 
Sep 4 00:04:23.923959 kubelet[2783]: E0904 00:04:23.923919 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:23.993658 containerd[1585]: time="2025-09-04T00:04:23.993590627Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:23.994404 containerd[1585]: time="2025-09-04T00:04:23.994369700Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 4 00:04:23.995721 containerd[1585]: time="2025-09-04T00:04:23.995585671Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:23.996957 containerd[1585]: time="2025-09-04T00:04:23.996912051Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.168273787s" Sep 4 00:04:23.996957 containerd[1585]: time="2025-09-04T00:04:23.996947539Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 4 00:04:23.997986 containerd[1585]: time="2025-09-04T00:04:23.997955415Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 00:04:23.999816 containerd[1585]: time="2025-09-04T00:04:23.999753899Z" level=info msg="CreateContainer within sandbox \"9359e9ffae6c20788585f071e093b5393603bbf865f8178e8b695544aa1c8078\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 4 00:04:24.009674 containerd[1585]: time="2025-09-04T00:04:24.009624216Z" level=info msg="Container 2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:24.016413 containerd[1585]: time="2025-09-04T00:04:24.016375864Z" level=info msg="CreateContainer within sandbox \"9359e9ffae6c20788585f071e093b5393603bbf865f8178e8b695544aa1c8078\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189\"" Sep 4 00:04:24.017075 containerd[1585]: time="2025-09-04T00:04:24.016904024Z" level=info msg="StartContainer for \"2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189\"" Sep 4 00:04:24.017826 containerd[1585]: time="2025-09-04T00:04:24.017801310Z" level=info msg="connecting to shim 2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189" address="unix:///run/containerd/s/efa4abd54a8907756fc3f55dc826b2a3b887e14e30e72f08cc6f8aab5897d125" protocol=ttrpc version=3 Sep 4 00:04:24.047968 systemd[1]: Started cri-containerd-2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189.scope - libcontainer container 2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189. 
Sep 4 00:04:24.080365 containerd[1585]: time="2025-09-04T00:04:24.080314148Z" level=info msg="StartContainer for \"2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189\" returns successfully" Sep 4 00:04:24.929512 kubelet[2783]: E0904 00:04:24.929462 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:24.986424 kubelet[2783]: I0904 00:04:24.986341 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f6fkb" podStartSLOduration=4.986320011 podStartE2EDuration="4.986320011s" podCreationTimestamp="2025-09-04 00:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:04:22.931026622 +0000 UTC m=+6.131489890" watchObservedRunningTime="2025-09-04 00:04:24.986320011 +0000 UTC m=+8.186783279" Sep 4 00:04:25.930862 kubelet[2783]: E0904 00:04:25.930811 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:25.950009 kubelet[2783]: E0904 00:04:25.949955 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:25.969221 kubelet[2783]: I0904 00:04:25.969149 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-rznks" podStartSLOduration=2.799373522 podStartE2EDuration="4.969128112s" podCreationTimestamp="2025-09-04 00:04:21 +0000 UTC" firstStartedPulling="2025-09-04 00:04:21.828059849 +0000 UTC m=+5.028523117" lastFinishedPulling="2025-09-04 00:04:23.997814439 +0000 UTC m=+7.198277707" observedRunningTime="2025-09-04 00:04:24.987168406 +0000 UTC m=+8.187631674" 
watchObservedRunningTime="2025-09-04 00:04:25.969128112 +0000 UTC m=+9.169591380" Sep 4 00:04:26.932381 kubelet[2783]: E0904 00:04:26.932292 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:27.946492 kubelet[2783]: E0904 00:04:27.946437 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:29.500100 kubelet[2783]: E0904 00:04:29.500004 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:29.950624 kubelet[2783]: E0904 00:04:29.950580 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:37.740039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2742734242.mount: Deactivated successfully. 
Sep 4 00:04:41.287510 containerd[1585]: time="2025-09-04T00:04:41.287443872Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:41.288234 containerd[1585]: time="2025-09-04T00:04:41.288195886Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 4 00:04:41.289274 containerd[1585]: time="2025-09-04T00:04:41.289229880Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:41.290531 containerd[1585]: time="2025-09-04T00:04:41.290496883Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 17.29250625s" Sep 4 00:04:41.290595 containerd[1585]: time="2025-09-04T00:04:41.290532319Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 4 00:04:41.292558 containerd[1585]: time="2025-09-04T00:04:41.292531108Z" level=info msg="CreateContainer within sandbox \"fb3c47dc082ccfa7131edd0315781e7fe371e9738a2683ce1a24c45396f12845\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 00:04:41.299398 containerd[1585]: time="2025-09-04T00:04:41.299351112Z" level=info msg="Container 365bcb6b53d69950ab0567f9010d25157d6d07224c485acdd12a5ef236815654: CDI devices from 
CRI Config.CDIDevices: []" Sep 4 00:04:41.303060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount940877603.mount: Deactivated successfully. Sep 4 00:04:41.306118 containerd[1585]: time="2025-09-04T00:04:41.306087299Z" level=info msg="CreateContainer within sandbox \"fb3c47dc082ccfa7131edd0315781e7fe371e9738a2683ce1a24c45396f12845\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"365bcb6b53d69950ab0567f9010d25157d6d07224c485acdd12a5ef236815654\"" Sep 4 00:04:41.307717 containerd[1585]: time="2025-09-04T00:04:41.306481660Z" level=info msg="StartContainer for \"365bcb6b53d69950ab0567f9010d25157d6d07224c485acdd12a5ef236815654\"" Sep 4 00:04:41.308666 containerd[1585]: time="2025-09-04T00:04:41.308633226Z" level=info msg="connecting to shim 365bcb6b53d69950ab0567f9010d25157d6d07224c485acdd12a5ef236815654" address="unix:///run/containerd/s/502937af64403583500964cd30472d8dcc7ad09780235d3864c2620943a30998" protocol=ttrpc version=3 Sep 4 00:04:41.338878 systemd[1]: Started cri-containerd-365bcb6b53d69950ab0567f9010d25157d6d07224c485acdd12a5ef236815654.scope - libcontainer container 365bcb6b53d69950ab0567f9010d25157d6d07224c485acdd12a5ef236815654. Sep 4 00:04:41.377505 containerd[1585]: time="2025-09-04T00:04:41.377435760Z" level=info msg="StartContainer for \"365bcb6b53d69950ab0567f9010d25157d6d07224c485acdd12a5ef236815654\" returns successfully" Sep 4 00:04:41.388024 systemd[1]: cri-containerd-365bcb6b53d69950ab0567f9010d25157d6d07224c485acdd12a5ef236815654.scope: Deactivated successfully. 
Sep 4 00:04:41.390300 containerd[1585]: time="2025-09-04T00:04:41.390247181Z" level=info msg="received exit event container_id:\"365bcb6b53d69950ab0567f9010d25157d6d07224c485acdd12a5ef236815654\" id:\"365bcb6b53d69950ab0567f9010d25157d6d07224c485acdd12a5ef236815654\" pid:3256 exited_at:{seconds:1756944281 nanos:389833413}" Sep 4 00:04:41.390565 containerd[1585]: time="2025-09-04T00:04:41.390327923Z" level=info msg="TaskExit event in podsandbox handler container_id:\"365bcb6b53d69950ab0567f9010d25157d6d07224c485acdd12a5ef236815654\" id:\"365bcb6b53d69950ab0567f9010d25157d6d07224c485acdd12a5ef236815654\" pid:3256 exited_at:{seconds:1756944281 nanos:389833413}" Sep 4 00:04:41.414724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-365bcb6b53d69950ab0567f9010d25157d6d07224c485acdd12a5ef236815654-rootfs.mount: Deactivated successfully. Sep 4 00:04:41.971623 kubelet[2783]: E0904 00:04:41.971563 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:42.974501 kubelet[2783]: E0904 00:04:42.974454 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:42.977093 containerd[1585]: time="2025-09-04T00:04:42.977041240Z" level=info msg="CreateContainer within sandbox \"fb3c47dc082ccfa7131edd0315781e7fe371e9738a2683ce1a24c45396f12845\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 00:04:43.530472 containerd[1585]: time="2025-09-04T00:04:43.530411390Z" level=info msg="Container 46159e8b879a073bb09a0832cf632300d2115b260c6c0bf909519b85445ce702: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:43.535091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1976066097.mount: Deactivated successfully. 
Sep 4 00:04:43.543853 containerd[1585]: time="2025-09-04T00:04:43.543788355Z" level=info msg="CreateContainer within sandbox \"fb3c47dc082ccfa7131edd0315781e7fe371e9738a2683ce1a24c45396f12845\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"46159e8b879a073bb09a0832cf632300d2115b260c6c0bf909519b85445ce702\"" Sep 4 00:04:43.544345 containerd[1585]: time="2025-09-04T00:04:43.544322809Z" level=info msg="StartContainer for \"46159e8b879a073bb09a0832cf632300d2115b260c6c0bf909519b85445ce702\"" Sep 4 00:04:43.545414 containerd[1585]: time="2025-09-04T00:04:43.545382622Z" level=info msg="connecting to shim 46159e8b879a073bb09a0832cf632300d2115b260c6c0bf909519b85445ce702" address="unix:///run/containerd/s/502937af64403583500964cd30472d8dcc7ad09780235d3864c2620943a30998" protocol=ttrpc version=3 Sep 4 00:04:43.573062 systemd[1]: Started cri-containerd-46159e8b879a073bb09a0832cf632300d2115b260c6c0bf909519b85445ce702.scope - libcontainer container 46159e8b879a073bb09a0832cf632300d2115b260c6c0bf909519b85445ce702. Sep 4 00:04:43.614014 containerd[1585]: time="2025-09-04T00:04:43.613959918Z" level=info msg="StartContainer for \"46159e8b879a073bb09a0832cf632300d2115b260c6c0bf909519b85445ce702\" returns successfully" Sep 4 00:04:43.632114 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 00:04:43.632487 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 00:04:43.632919 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 4 00:04:43.635169 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 4 00:04:43.637136 containerd[1585]: time="2025-09-04T00:04:43.637089154Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46159e8b879a073bb09a0832cf632300d2115b260c6c0bf909519b85445ce702\" id:\"46159e8b879a073bb09a0832cf632300d2115b260c6c0bf909519b85445ce702\" pid:3301 exited_at:{seconds:1756944283 nanos:636635412}" Sep 4 00:04:43.637283 containerd[1585]: time="2025-09-04T00:04:43.637246430Z" level=info msg="received exit event container_id:\"46159e8b879a073bb09a0832cf632300d2115b260c6c0bf909519b85445ce702\" id:\"46159e8b879a073bb09a0832cf632300d2115b260c6c0bf909519b85445ce702\" pid:3301 exited_at:{seconds:1756944283 nanos:636635412}" Sep 4 00:04:43.638847 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 4 00:04:43.639542 systemd[1]: cri-containerd-46159e8b879a073bb09a0832cf632300d2115b260c6c0bf909519b85445ce702.scope: Deactivated successfully. Sep 4 00:04:43.678257 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 4 00:04:43.977577 kubelet[2783]: E0904 00:04:43.977536 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:43.979757 containerd[1585]: time="2025-09-04T00:04:43.979716749Z" level=info msg="CreateContainer within sandbox \"fb3c47dc082ccfa7131edd0315781e7fe371e9738a2683ce1a24c45396f12845\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 00:04:44.069913 containerd[1585]: time="2025-09-04T00:04:44.069844525Z" level=info msg="Container 0e608243cf72d74d5223cf55dbc43ba295b1afc245e34ae30cd4fd1b8004de5f: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:44.088044 containerd[1585]: time="2025-09-04T00:04:44.087980982Z" level=info msg="CreateContainer within sandbox \"fb3c47dc082ccfa7131edd0315781e7fe371e9738a2683ce1a24c45396f12845\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0e608243cf72d74d5223cf55dbc43ba295b1afc245e34ae30cd4fd1b8004de5f\"" Sep 4 00:04:44.090273 containerd[1585]: time="2025-09-04T00:04:44.090218617Z" level=info msg="StartContainer for \"0e608243cf72d74d5223cf55dbc43ba295b1afc245e34ae30cd4fd1b8004de5f\"" Sep 4 00:04:44.091998 containerd[1585]: time="2025-09-04T00:04:44.091963376Z" level=info msg="connecting to shim 0e608243cf72d74d5223cf55dbc43ba295b1afc245e34ae30cd4fd1b8004de5f" address="unix:///run/containerd/s/502937af64403583500964cd30472d8dcc7ad09780235d3864c2620943a30998" protocol=ttrpc version=3 Sep 4 00:04:44.102040 systemd[1]: Started sshd@9-10.0.0.105:22-10.0.0.1:44574.service - OpenSSH per-connection server daemon (10.0.0.1:44574). Sep 4 00:04:44.116072 systemd[1]: Started cri-containerd-0e608243cf72d74d5223cf55dbc43ba295b1afc245e34ae30cd4fd1b8004de5f.scope - libcontainer container 0e608243cf72d74d5223cf55dbc43ba295b1afc245e34ae30cd4fd1b8004de5f. 
Sep 4 00:04:44.165482 systemd[1]: cri-containerd-0e608243cf72d74d5223cf55dbc43ba295b1afc245e34ae30cd4fd1b8004de5f.scope: Deactivated successfully. Sep 4 00:04:44.168860 containerd[1585]: time="2025-09-04T00:04:44.168808651Z" level=info msg="StartContainer for \"0e608243cf72d74d5223cf55dbc43ba295b1afc245e34ae30cd4fd1b8004de5f\" returns successfully" Sep 4 00:04:44.169416 sshd[3336]: Accepted publickey for core from 10.0.0.1 port 44574 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8 Sep 4 00:04:44.170157 containerd[1585]: time="2025-09-04T00:04:44.170069441Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e608243cf72d74d5223cf55dbc43ba295b1afc245e34ae30cd4fd1b8004de5f\" id:\"0e608243cf72d74d5223cf55dbc43ba295b1afc245e34ae30cd4fd1b8004de5f\" pid:3348 exited_at:{seconds:1756944284 nanos:169007825}" Sep 4 00:04:44.170240 containerd[1585]: time="2025-09-04T00:04:44.170220385Z" level=info msg="received exit event container_id:\"0e608243cf72d74d5223cf55dbc43ba295b1afc245e34ae30cd4fd1b8004de5f\" id:\"0e608243cf72d74d5223cf55dbc43ba295b1afc245e34ae30cd4fd1b8004de5f\" pid:3348 exited_at:{seconds:1756944284 nanos:169007825}" Sep 4 00:04:44.170468 sshd-session[3336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:04:44.175829 systemd-logind[1569]: New session 10 of user core. Sep 4 00:04:44.184948 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 00:04:44.328514 sshd[3371]: Connection closed by 10.0.0.1 port 44574 Sep 4 00:04:44.329077 sshd-session[3336]: pam_unix(sshd:session): session closed for user core Sep 4 00:04:44.334355 systemd[1]: sshd@9-10.0.0.105:22-10.0.0.1:44574.service: Deactivated successfully. Sep 4 00:04:44.336606 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 00:04:44.337486 systemd-logind[1569]: Session 10 logged out. Waiting for processes to exit. Sep 4 00:04:44.338936 systemd-logind[1569]: Removed session 10. 
Sep 4 00:04:44.531998 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46159e8b879a073bb09a0832cf632300d2115b260c6c0bf909519b85445ce702-rootfs.mount: Deactivated successfully. Sep 4 00:04:44.982482 kubelet[2783]: E0904 00:04:44.982305 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:44.984252 containerd[1585]: time="2025-09-04T00:04:44.984213025Z" level=info msg="CreateContainer within sandbox \"fb3c47dc082ccfa7131edd0315781e7fe371e9738a2683ce1a24c45396f12845\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 00:04:45.008456 containerd[1585]: time="2025-09-04T00:04:45.008410759Z" level=info msg="Container 1faa226bc8e93b513a1c61aad39b3d2b363047343f77ed77cbd052a4f4dafbe4: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:45.020403 containerd[1585]: time="2025-09-04T00:04:45.020366286Z" level=info msg="CreateContainer within sandbox \"fb3c47dc082ccfa7131edd0315781e7fe371e9738a2683ce1a24c45396f12845\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1faa226bc8e93b513a1c61aad39b3d2b363047343f77ed77cbd052a4f4dafbe4\"" Sep 4 00:04:45.020731 containerd[1585]: time="2025-09-04T00:04:45.020705904Z" level=info msg="StartContainer for \"1faa226bc8e93b513a1c61aad39b3d2b363047343f77ed77cbd052a4f4dafbe4\"" Sep 4 00:04:45.021559 containerd[1585]: time="2025-09-04T00:04:45.021525474Z" level=info msg="connecting to shim 1faa226bc8e93b513a1c61aad39b3d2b363047343f77ed77cbd052a4f4dafbe4" address="unix:///run/containerd/s/502937af64403583500964cd30472d8dcc7ad09780235d3864c2620943a30998" protocol=ttrpc version=3 Sep 4 00:04:45.045828 systemd[1]: Started cri-containerd-1faa226bc8e93b513a1c61aad39b3d2b363047343f77ed77cbd052a4f4dafbe4.scope - libcontainer container 1faa226bc8e93b513a1c61aad39b3d2b363047343f77ed77cbd052a4f4dafbe4. 
Sep 4 00:04:45.075848 systemd[1]: cri-containerd-1faa226bc8e93b513a1c61aad39b3d2b363047343f77ed77cbd052a4f4dafbe4.scope: Deactivated successfully. Sep 4 00:04:45.119070 containerd[1585]: time="2025-09-04T00:04:45.076595349Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1faa226bc8e93b513a1c61aad39b3d2b363047343f77ed77cbd052a4f4dafbe4\" id:\"1faa226bc8e93b513a1c61aad39b3d2b363047343f77ed77cbd052a4f4dafbe4\" pid:3401 exited_at:{seconds:1756944285 nanos:76357783}" Sep 4 00:04:45.151605 containerd[1585]: time="2025-09-04T00:04:45.151514783Z" level=info msg="received exit event container_id:\"1faa226bc8e93b513a1c61aad39b3d2b363047343f77ed77cbd052a4f4dafbe4\" id:\"1faa226bc8e93b513a1c61aad39b3d2b363047343f77ed77cbd052a4f4dafbe4\" pid:3401 exited_at:{seconds:1756944285 nanos:76357783}" Sep 4 00:04:45.160724 containerd[1585]: time="2025-09-04T00:04:45.160643097Z" level=info msg="StartContainer for \"1faa226bc8e93b513a1c61aad39b3d2b363047343f77ed77cbd052a4f4dafbe4\" returns successfully" Sep 4 00:04:45.531504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1faa226bc8e93b513a1c61aad39b3d2b363047343f77ed77cbd052a4f4dafbe4-rootfs.mount: Deactivated successfully. 
Sep 4 00:04:45.990227 kubelet[2783]: E0904 00:04:45.990186 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:45.993524 containerd[1585]: time="2025-09-04T00:04:45.993457621Z" level=info msg="CreateContainer within sandbox \"fb3c47dc082ccfa7131edd0315781e7fe371e9738a2683ce1a24c45396f12845\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 00:04:46.009289 containerd[1585]: time="2025-09-04T00:04:46.009232983Z" level=info msg="Container 28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:46.013403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3152867259.mount: Deactivated successfully. Sep 4 00:04:46.017428 containerd[1585]: time="2025-09-04T00:04:46.017387395Z" level=info msg="CreateContainer within sandbox \"fb3c47dc082ccfa7131edd0315781e7fe371e9738a2683ce1a24c45396f12845\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c\"" Sep 4 00:04:46.018042 containerd[1585]: time="2025-09-04T00:04:46.017930425Z" level=info msg="StartContainer for \"28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c\"" Sep 4 00:04:46.018875 containerd[1585]: time="2025-09-04T00:04:46.018842890Z" level=info msg="connecting to shim 28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c" address="unix:///run/containerd/s/502937af64403583500964cd30472d8dcc7ad09780235d3864c2620943a30998" protocol=ttrpc version=3 Sep 4 00:04:46.041865 systemd[1]: Started cri-containerd-28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c.scope - libcontainer container 28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c. 
Sep 4 00:04:46.089720 containerd[1585]: time="2025-09-04T00:04:46.089652984Z" level=info msg="StartContainer for \"28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c\" returns successfully" Sep 4 00:04:46.171240 containerd[1585]: time="2025-09-04T00:04:46.171107217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c\" id:\"ede696900ef250c98ae2ffc6fa333a5bce171c35abbb1a8c0ed57237a88402da\" pid:3471 exited_at:{seconds:1756944286 nanos:170724188}" Sep 4 00:04:46.242785 kubelet[2783]: I0904 00:04:46.242652 2783 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 4 00:04:46.282037 systemd[1]: Created slice kubepods-burstable-pod54ee8460_2306_4d07_a326_54ada750e9a5.slice - libcontainer container kubepods-burstable-pod54ee8460_2306_4d07_a326_54ada750e9a5.slice. Sep 4 00:04:46.290164 systemd[1]: Created slice kubepods-burstable-poda7b8f674_f145_47fc_bced_7102b5738634.slice - libcontainer container kubepods-burstable-poda7b8f674_f145_47fc_bced_7102b5738634.slice. 
Sep 4 00:04:46.383330 kubelet[2783]: I0904 00:04:46.383277 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh962\" (UniqueName: \"kubernetes.io/projected/a7b8f674-f145-47fc-bced-7102b5738634-kube-api-access-gh962\") pod \"coredns-668d6bf9bc-c6vkk\" (UID: \"a7b8f674-f145-47fc-bced-7102b5738634\") " pod="kube-system/coredns-668d6bf9bc-c6vkk" Sep 4 00:04:46.383330 kubelet[2783]: I0904 00:04:46.383328 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54ee8460-2306-4d07-a326-54ada750e9a5-config-volume\") pod \"coredns-668d6bf9bc-gw67r\" (UID: \"54ee8460-2306-4d07-a326-54ada750e9a5\") " pod="kube-system/coredns-668d6bf9bc-gw67r" Sep 4 00:04:46.383533 kubelet[2783]: I0904 00:04:46.383351 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpx7l\" (UniqueName: \"kubernetes.io/projected/54ee8460-2306-4d07-a326-54ada750e9a5-kube-api-access-kpx7l\") pod \"coredns-668d6bf9bc-gw67r\" (UID: \"54ee8460-2306-4d07-a326-54ada750e9a5\") " pod="kube-system/coredns-668d6bf9bc-gw67r" Sep 4 00:04:46.383533 kubelet[2783]: I0904 00:04:46.383369 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7b8f674-f145-47fc-bced-7102b5738634-config-volume\") pod \"coredns-668d6bf9bc-c6vkk\" (UID: \"a7b8f674-f145-47fc-bced-7102b5738634\") " pod="kube-system/coredns-668d6bf9bc-c6vkk" Sep 4 00:04:46.586475 kubelet[2783]: E0904 00:04:46.586351 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:46.587100 containerd[1585]: time="2025-09-04T00:04:46.586902962Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-gw67r,Uid:54ee8460-2306-4d07-a326-54ada750e9a5,Namespace:kube-system,Attempt:0,}" Sep 4 00:04:46.594714 kubelet[2783]: E0904 00:04:46.594616 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:46.595070 containerd[1585]: time="2025-09-04T00:04:46.595035132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c6vkk,Uid:a7b8f674-f145-47fc-bced-7102b5738634,Namespace:kube-system,Attempt:0,}" Sep 4 00:04:47.005618 kubelet[2783]: E0904 00:04:47.005568 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:48.006652 kubelet[2783]: E0904 00:04:48.006611 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:48.457437 systemd-networkd[1492]: cilium_host: Link UP Sep 4 00:04:48.457646 systemd-networkd[1492]: cilium_net: Link UP Sep 4 00:04:48.457915 systemd-networkd[1492]: cilium_host: Gained carrier Sep 4 00:04:48.458137 systemd-networkd[1492]: cilium_net: Gained carrier Sep 4 00:04:48.554956 systemd-networkd[1492]: cilium_net: Gained IPv6LL Sep 4 00:04:48.592972 systemd-networkd[1492]: cilium_vxlan: Link UP Sep 4 00:04:48.592984 systemd-networkd[1492]: cilium_vxlan: Gained carrier Sep 4 00:04:48.618913 systemd-networkd[1492]: cilium_host: Gained IPv6LL Sep 4 00:04:48.857728 kernel: NET: Registered PF_ALG protocol family Sep 4 00:04:49.008332 kubelet[2783]: E0904 00:04:49.008280 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:49.341978 systemd[1]: Started 
sshd@10-10.0.0.105:22-10.0.0.1:44588.service - OpenSSH per-connection server daemon (10.0.0.1:44588). Sep 4 00:04:49.401968 sshd[3804]: Accepted publickey for core from 10.0.0.1 port 44588 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8 Sep 4 00:04:49.403961 sshd-session[3804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:04:49.409232 systemd-logind[1569]: New session 11 of user core. Sep 4 00:04:49.413847 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 00:04:49.558599 sshd[3843]: Connection closed by 10.0.0.1 port 44588 Sep 4 00:04:49.561058 sshd-session[3804]: pam_unix(sshd:session): session closed for user core Sep 4 00:04:49.565378 systemd[1]: sshd@10-10.0.0.105:22-10.0.0.1:44588.service: Deactivated successfully. Sep 4 00:04:49.568714 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 00:04:49.570897 systemd-logind[1569]: Session 11 logged out. Waiting for processes to exit. Sep 4 00:04:49.572405 systemd-logind[1569]: Removed session 11. 
Sep 4 00:04:49.587827 systemd-networkd[1492]: lxc_health: Link UP Sep 4 00:04:49.588785 systemd-networkd[1492]: lxc_health: Gained carrier Sep 4 00:04:49.703014 systemd-networkd[1492]: lxc8ddf667654e0: Link UP Sep 4 00:04:49.703727 kernel: eth0: renamed from tmpb4905 Sep 4 00:04:49.705979 systemd-networkd[1492]: lxc8ddf667654e0: Gained carrier Sep 4 00:04:49.727460 systemd-networkd[1492]: lxcb7d30bcb5e9f: Link UP Sep 4 00:04:49.736760 kernel: eth0: renamed from tmp94d6a Sep 4 00:04:49.739069 systemd-networkd[1492]: lxcb7d30bcb5e9f: Gained carrier Sep 4 00:04:50.066900 systemd-networkd[1492]: cilium_vxlan: Gained IPv6LL Sep 4 00:04:50.835200 systemd-networkd[1492]: lxc_health: Gained IPv6LL Sep 4 00:04:51.026995 systemd-networkd[1492]: lxcb7d30bcb5e9f: Gained IPv6LL Sep 4 00:04:51.154945 systemd-networkd[1492]: lxc8ddf667654e0: Gained IPv6LL Sep 4 00:04:51.361164 kubelet[2783]: E0904 00:04:51.361108 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:51.458427 kubelet[2783]: I0904 00:04:51.458240 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k7bt6" podStartSLOduration=12.085020451 podStartE2EDuration="31.458210031s" podCreationTimestamp="2025-09-04 00:04:20 +0000 UTC" firstStartedPulling="2025-09-04 00:04:21.918149486 +0000 UTC m=+5.118612755" lastFinishedPulling="2025-09-04 00:04:41.291339067 +0000 UTC m=+24.491802335" observedRunningTime="2025-09-04 00:04:47.022249696 +0000 UTC m=+30.222712984" watchObservedRunningTime="2025-09-04 00:04:51.458210031 +0000 UTC m=+34.658673299" Sep 4 00:04:52.015397 kubelet[2783]: E0904 00:04:52.015342 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:53.017865 kubelet[2783]: E0904 00:04:53.017791 2783 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:54.383222 containerd[1585]: time="2025-09-04T00:04:54.383108080Z" level=info msg="connecting to shim 94d6a185a5d52f76e69a881442e0ef6bcbbeacc3716296a1b3f3b9cb7f295b43" address="unix:///run/containerd/s/ab3b299909d156818450a6e3299a8527ceaca0bca2890e8a45adaf4d5476199f" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:54.383943 containerd[1585]: time="2025-09-04T00:04:54.383183462Z" level=info msg="connecting to shim b4905278b3f02c331a1d9e4ef1259a68a579cdb63c805a06963ec7048081291d" address="unix:///run/containerd/s/5b66ef419a4f062b41dc2d6172f0cd1b6ddafed9f8ec4b61c0ebb523c356592b" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:54.416870 systemd[1]: Started cri-containerd-94d6a185a5d52f76e69a881442e0ef6bcbbeacc3716296a1b3f3b9cb7f295b43.scope - libcontainer container 94d6a185a5d52f76e69a881442e0ef6bcbbeacc3716296a1b3f3b9cb7f295b43. Sep 4 00:04:54.435880 systemd[1]: Started cri-containerd-b4905278b3f02c331a1d9e4ef1259a68a579cdb63c805a06963ec7048081291d.scope - libcontainer container b4905278b3f02c331a1d9e4ef1259a68a579cdb63c805a06963ec7048081291d. 
Sep 4 00:04:54.442005 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 00:04:54.453567 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 00:04:54.482258 containerd[1585]: time="2025-09-04T00:04:54.482213934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c6vkk,Uid:a7b8f674-f145-47fc-bced-7102b5738634,Namespace:kube-system,Attempt:0,} returns sandbox id \"94d6a185a5d52f76e69a881442e0ef6bcbbeacc3716296a1b3f3b9cb7f295b43\"" Sep 4 00:04:54.483045 kubelet[2783]: E0904 00:04:54.483004 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:54.486878 containerd[1585]: time="2025-09-04T00:04:54.486780207Z" level=info msg="CreateContainer within sandbox \"94d6a185a5d52f76e69a881442e0ef6bcbbeacc3716296a1b3f3b9cb7f295b43\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 00:04:54.491107 containerd[1585]: time="2025-09-04T00:04:54.491063587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gw67r,Uid:54ee8460-2306-4d07-a326-54ada750e9a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4905278b3f02c331a1d9e4ef1259a68a579cdb63c805a06963ec7048081291d\"" Sep 4 00:04:54.492081 kubelet[2783]: E0904 00:04:54.491908 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:54.493501 containerd[1585]: time="2025-09-04T00:04:54.493470135Z" level=info msg="CreateContainer within sandbox \"b4905278b3f02c331a1d9e4ef1259a68a579cdb63c805a06963ec7048081291d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 00:04:54.502195 containerd[1585]: time="2025-09-04T00:04:54.502146193Z" 
level=info msg="Container d12c67bf28504dd16fc585dbab6e3f2f94e414a0d575e2f2ea44edce792db73c: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:54.513680 containerd[1585]: time="2025-09-04T00:04:54.513615915Z" level=info msg="Container bbd31b812766a333741ad91a04e031f93f2bf736c2579fdb3c263d2ffa1e575d: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:54.519474 containerd[1585]: time="2025-09-04T00:04:54.519426614Z" level=info msg="CreateContainer within sandbox \"94d6a185a5d52f76e69a881442e0ef6bcbbeacc3716296a1b3f3b9cb7f295b43\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d12c67bf28504dd16fc585dbab6e3f2f94e414a0d575e2f2ea44edce792db73c\"" Sep 4 00:04:54.521060 containerd[1585]: time="2025-09-04T00:04:54.520076214Z" level=info msg="StartContainer for \"d12c67bf28504dd16fc585dbab6e3f2f94e414a0d575e2f2ea44edce792db73c\"" Sep 4 00:04:54.521270 containerd[1585]: time="2025-09-04T00:04:54.521225822Z" level=info msg="connecting to shim d12c67bf28504dd16fc585dbab6e3f2f94e414a0d575e2f2ea44edce792db73c" address="unix:///run/containerd/s/ab3b299909d156818450a6e3299a8527ceaca0bca2890e8a45adaf4d5476199f" protocol=ttrpc version=3 Sep 4 00:04:54.523609 containerd[1585]: time="2025-09-04T00:04:54.523567127Z" level=info msg="CreateContainer within sandbox \"b4905278b3f02c331a1d9e4ef1259a68a579cdb63c805a06963ec7048081291d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bbd31b812766a333741ad91a04e031f93f2bf736c2579fdb3c263d2ffa1e575d\"" Sep 4 00:04:54.524379 containerd[1585]: time="2025-09-04T00:04:54.524272512Z" level=info msg="StartContainer for \"bbd31b812766a333741ad91a04e031f93f2bf736c2579fdb3c263d2ffa1e575d\"" Sep 4 00:04:54.525389 containerd[1585]: time="2025-09-04T00:04:54.525355925Z" level=info msg="connecting to shim bbd31b812766a333741ad91a04e031f93f2bf736c2579fdb3c263d2ffa1e575d" address="unix:///run/containerd/s/5b66ef419a4f062b41dc2d6172f0cd1b6ddafed9f8ec4b61c0ebb523c356592b" protocol=ttrpc version=3 Sep 4 
00:04:54.548922 systemd[1]: Started cri-containerd-bbd31b812766a333741ad91a04e031f93f2bf736c2579fdb3c263d2ffa1e575d.scope - libcontainer container bbd31b812766a333741ad91a04e031f93f2bf736c2579fdb3c263d2ffa1e575d. Sep 4 00:04:54.553538 systemd[1]: Started cri-containerd-d12c67bf28504dd16fc585dbab6e3f2f94e414a0d575e2f2ea44edce792db73c.scope - libcontainer container d12c67bf28504dd16fc585dbab6e3f2f94e414a0d575e2f2ea44edce792db73c. Sep 4 00:04:54.569969 systemd[1]: Started sshd@11-10.0.0.105:22-10.0.0.1:58442.service - OpenSSH per-connection server daemon (10.0.0.1:58442). Sep 4 00:04:54.678570 sshd[4096]: Accepted publickey for core from 10.0.0.1 port 58442 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8 Sep 4 00:04:54.681015 sshd-session[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:04:54.686525 systemd-logind[1569]: New session 12 of user core. Sep 4 00:04:54.693951 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 00:04:54.868764 sshd[4121]: Connection closed by 10.0.0.1 port 58442 Sep 4 00:04:54.869097 sshd-session[4096]: pam_unix(sshd:session): session closed for user core Sep 4 00:04:54.874291 systemd[1]: sshd@11-10.0.0.105:22-10.0.0.1:58442.service: Deactivated successfully. Sep 4 00:04:54.876999 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 00:04:54.877776 systemd-logind[1569]: Session 12 logged out. Waiting for processes to exit. Sep 4 00:04:54.879592 systemd-logind[1569]: Removed session 12. 
Sep 4 00:04:54.898721 containerd[1585]: time="2025-09-04T00:04:54.898647153Z" level=info msg="StartContainer for \"d12c67bf28504dd16fc585dbab6e3f2f94e414a0d575e2f2ea44edce792db73c\" returns successfully" Sep 4 00:04:54.899242 containerd[1585]: time="2025-09-04T00:04:54.899206403Z" level=info msg="StartContainer for \"bbd31b812766a333741ad91a04e031f93f2bf736c2579fdb3c263d2ffa1e575d\" returns successfully" Sep 4 00:04:55.023847 kubelet[2783]: E0904 00:04:55.023602 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:55.026923 kubelet[2783]: E0904 00:04:55.026889 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:55.597522 kubelet[2783]: I0904 00:04:55.597274 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gw67r" podStartSLOduration=34.597257804 podStartE2EDuration="34.597257804s" podCreationTimestamp="2025-09-04 00:04:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:04:55.597036428 +0000 UTC m=+38.797499696" watchObservedRunningTime="2025-09-04 00:04:55.597257804 +0000 UTC m=+38.797721072" Sep 4 00:04:55.597522 kubelet[2783]: I0904 00:04:55.597357 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-c6vkk" podStartSLOduration=34.597353814 podStartE2EDuration="34.597353814s" podCreationTimestamp="2025-09-04 00:04:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:04:55.204121457 +0000 UTC m=+38.404584745" watchObservedRunningTime="2025-09-04 00:04:55.597353814 +0000 UTC 
m=+38.797817082" Sep 4 00:04:56.028182 kubelet[2783]: E0904 00:04:56.028141 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:56.028338 kubelet[2783]: E0904 00:04:56.028150 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:57.030080 kubelet[2783]: E0904 00:04:57.029986 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:57.030474 kubelet[2783]: E0904 00:04:57.030151 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:59.883727 systemd[1]: Started sshd@12-10.0.0.105:22-10.0.0.1:58456.service - OpenSSH per-connection server daemon (10.0.0.1:58456). Sep 4 00:04:59.930937 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 58456 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8 Sep 4 00:04:59.932714 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:04:59.937399 systemd-logind[1569]: New session 13 of user core. Sep 4 00:04:59.948832 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 00:05:00.066966 sshd[4158]: Connection closed by 10.0.0.1 port 58456 Sep 4 00:05:00.067370 sshd-session[4156]: pam_unix(sshd:session): session closed for user core Sep 4 00:05:00.076322 systemd[1]: sshd@12-10.0.0.105:22-10.0.0.1:58456.service: Deactivated successfully. Sep 4 00:05:00.078133 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 00:05:00.079899 systemd-logind[1569]: Session 13 logged out. Waiting for processes to exit. 
Sep 4 00:05:00.081783 systemd[1]: Started sshd@13-10.0.0.105:22-10.0.0.1:55190.service - OpenSSH per-connection server daemon (10.0.0.1:55190).
Sep 4 00:05:00.084274 systemd-logind[1569]: Removed session 13.
Sep 4 00:05:00.139052 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 55190 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:05:00.140783 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:00.145490 systemd-logind[1569]: New session 14 of user core.
Sep 4 00:05:00.152867 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 4 00:05:00.295773 sshd[4174]: Connection closed by 10.0.0.1 port 55190
Sep 4 00:05:00.296790 sshd-session[4172]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:00.307742 systemd[1]: sshd@13-10.0.0.105:22-10.0.0.1:55190.service: Deactivated successfully.
Sep 4 00:05:00.310737 systemd[1]: session-14.scope: Deactivated successfully.
Sep 4 00:05:00.312579 systemd-logind[1569]: Session 14 logged out. Waiting for processes to exit.
Sep 4 00:05:00.317125 systemd[1]: Started sshd@14-10.0.0.105:22-10.0.0.1:55206.service - OpenSSH per-connection server daemon (10.0.0.1:55206).
Sep 4 00:05:00.318437 systemd-logind[1569]: Removed session 14.
Sep 4 00:05:00.366010 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 55206 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:05:00.367634 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:00.372281 systemd-logind[1569]: New session 15 of user core.
Sep 4 00:05:00.382844 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 4 00:05:00.495225 sshd[4187]: Connection closed by 10.0.0.1 port 55206
Sep 4 00:05:00.495556 sshd-session[4185]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:00.498549 systemd[1]: sshd@14-10.0.0.105:22-10.0.0.1:55206.service: Deactivated successfully.
Sep 4 00:05:00.500625 systemd[1]: session-15.scope: Deactivated successfully.
Sep 4 00:05:00.503670 systemd-logind[1569]: Session 15 logged out. Waiting for processes to exit.
Sep 4 00:05:00.506513 systemd-logind[1569]: Removed session 15.
Sep 4 00:05:05.511005 systemd[1]: Started sshd@15-10.0.0.105:22-10.0.0.1:55222.service - OpenSSH per-connection server daemon (10.0.0.1:55222).
Sep 4 00:05:05.549615 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 55222 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:05:05.551167 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:05.555659 systemd-logind[1569]: New session 16 of user core.
Sep 4 00:05:05.565837 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 4 00:05:05.732388 sshd[4204]: Connection closed by 10.0.0.1 port 55222
Sep 4 00:05:05.732888 sshd-session[4202]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:05.739264 systemd[1]: sshd@15-10.0.0.105:22-10.0.0.1:55222.service: Deactivated successfully.
Sep 4 00:05:05.742515 systemd[1]: session-16.scope: Deactivated successfully.
Sep 4 00:05:05.744916 systemd-logind[1569]: Session 16 logged out. Waiting for processes to exit.
Sep 4 00:05:05.746544 systemd-logind[1569]: Removed session 16.
Sep 4 00:05:10.743383 systemd[1]: Started sshd@16-10.0.0.105:22-10.0.0.1:52024.service - OpenSSH per-connection server daemon (10.0.0.1:52024).
Sep 4 00:05:10.794995 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 52024 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:05:10.796488 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:10.800993 systemd-logind[1569]: New session 17 of user core.
Sep 4 00:05:10.807873 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 4 00:05:10.914883 sshd[4219]: Connection closed by 10.0.0.1 port 52024
Sep 4 00:05:10.915198 sshd-session[4217]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:10.919043 systemd[1]: sshd@16-10.0.0.105:22-10.0.0.1:52024.service: Deactivated successfully.
Sep 4 00:05:10.920941 systemd[1]: session-17.scope: Deactivated successfully.
Sep 4 00:05:10.921949 systemd-logind[1569]: Session 17 logged out. Waiting for processes to exit.
Sep 4 00:05:10.923168 systemd-logind[1569]: Removed session 17.
Sep 4 00:05:15.940257 systemd[1]: Started sshd@17-10.0.0.105:22-10.0.0.1:52036.service - OpenSSH per-connection server daemon (10.0.0.1:52036).
Sep 4 00:05:15.986936 sshd[4233]: Accepted publickey for core from 10.0.0.1 port 52036 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:05:15.988549 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:15.993046 systemd-logind[1569]: New session 18 of user core.
Sep 4 00:05:16.002832 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 4 00:05:16.163335 sshd[4235]: Connection closed by 10.0.0.1 port 52036
Sep 4 00:05:16.163682 sshd-session[4233]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:16.180846 systemd[1]: sshd@17-10.0.0.105:22-10.0.0.1:52036.service: Deactivated successfully.
Sep 4 00:05:16.182923 systemd[1]: session-18.scope: Deactivated successfully.
Sep 4 00:05:16.183784 systemd-logind[1569]: Session 18 logged out. Waiting for processes to exit.
Sep 4 00:05:16.187479 systemd[1]: Started sshd@18-10.0.0.105:22-10.0.0.1:52038.service - OpenSSH per-connection server daemon (10.0.0.1:52038).
Sep 4 00:05:16.188243 systemd-logind[1569]: Removed session 18.
Sep 4 00:05:16.236924 sshd[4248]: Accepted publickey for core from 10.0.0.1 port 52038 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:05:16.238641 sshd-session[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:16.244254 systemd-logind[1569]: New session 19 of user core.
Sep 4 00:05:16.250880 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 00:05:16.670143 sshd[4250]: Connection closed by 10.0.0.1 port 52038
Sep 4 00:05:16.671014 sshd-session[4248]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:16.681522 systemd[1]: sshd@18-10.0.0.105:22-10.0.0.1:52038.service: Deactivated successfully.
Sep 4 00:05:16.683364 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 00:05:16.684225 systemd-logind[1569]: Session 19 logged out. Waiting for processes to exit.
Sep 4 00:05:16.687583 systemd[1]: Started sshd@19-10.0.0.105:22-10.0.0.1:52042.service - OpenSSH per-connection server daemon (10.0.0.1:52042).
Sep 4 00:05:16.688325 systemd-logind[1569]: Removed session 19.
Sep 4 00:05:16.735903 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 52042 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:05:16.737623 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:16.742503 systemd-logind[1569]: New session 20 of user core.
Sep 4 00:05:16.755009 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 00:05:17.270166 sshd[4264]: Connection closed by 10.0.0.1 port 52042
Sep 4 00:05:17.270522 sshd-session[4262]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:17.279900 systemd[1]: sshd@19-10.0.0.105:22-10.0.0.1:52042.service: Deactivated successfully.
Sep 4 00:05:17.283244 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 00:05:17.285530 systemd-logind[1569]: Session 20 logged out. Waiting for processes to exit.
Sep 4 00:05:17.290426 systemd[1]: Started sshd@20-10.0.0.105:22-10.0.0.1:52056.service - OpenSSH per-connection server daemon (10.0.0.1:52056).
Sep 4 00:05:17.293218 systemd-logind[1569]: Removed session 20.
Sep 4 00:05:17.337604 sshd[4286]: Accepted publickey for core from 10.0.0.1 port 52056 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:05:17.339378 sshd-session[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:17.344439 systemd-logind[1569]: New session 21 of user core.
Sep 4 00:05:17.353885 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 00:05:17.627377 sshd[4288]: Connection closed by 10.0.0.1 port 52056
Sep 4 00:05:17.627642 sshd-session[4286]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:17.640587 systemd[1]: sshd@20-10.0.0.105:22-10.0.0.1:52056.service: Deactivated successfully.
Sep 4 00:05:17.644515 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 00:05:17.646543 systemd-logind[1569]: Session 21 logged out. Waiting for processes to exit.
Sep 4 00:05:17.649804 systemd[1]: Started sshd@21-10.0.0.105:22-10.0.0.1:52070.service - OpenSSH per-connection server daemon (10.0.0.1:52070).
Sep 4 00:05:17.651323 systemd-logind[1569]: Removed session 21.
Sep 4 00:05:17.697900 sshd[4299]: Accepted publickey for core from 10.0.0.1 port 52070 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:05:17.699977 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:17.705539 systemd-logind[1569]: New session 22 of user core.
Sep 4 00:05:17.720014 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 00:05:17.839183 sshd[4301]: Connection closed by 10.0.0.1 port 52070
Sep 4 00:05:17.839513 sshd-session[4299]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:17.844149 systemd[1]: sshd@21-10.0.0.105:22-10.0.0.1:52070.service: Deactivated successfully.
Sep 4 00:05:17.846732 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 00:05:17.847728 systemd-logind[1569]: Session 22 logged out. Waiting for processes to exit.
Sep 4 00:05:17.849161 systemd-logind[1569]: Removed session 22.
Sep 4 00:05:22.854207 systemd[1]: Started sshd@22-10.0.0.105:22-10.0.0.1:38226.service - OpenSSH per-connection server daemon (10.0.0.1:38226).
Sep 4 00:05:22.906798 sshd[4318]: Accepted publickey for core from 10.0.0.1 port 38226 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:05:22.908299 sshd-session[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:22.913022 systemd-logind[1569]: New session 23 of user core.
Sep 4 00:05:22.922858 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 00:05:23.032346 sshd[4320]: Connection closed by 10.0.0.1 port 38226
Sep 4 00:05:23.032746 sshd-session[4318]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:23.037682 systemd[1]: sshd@22-10.0.0.105:22-10.0.0.1:38226.service: Deactivated successfully.
Sep 4 00:05:23.040443 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 00:05:23.041797 systemd-logind[1569]: Session 23 logged out. Waiting for processes to exit.
Sep 4 00:05:23.044099 systemd-logind[1569]: Removed session 23.
Sep 4 00:05:28.053850 systemd[1]: Started sshd@23-10.0.0.105:22-10.0.0.1:38236.service - OpenSSH per-connection server daemon (10.0.0.1:38236).
Sep 4 00:05:28.111494 sshd[4334]: Accepted publickey for core from 10.0.0.1 port 38236 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:05:28.113749 sshd-session[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:28.119362 systemd-logind[1569]: New session 24 of user core.
Sep 4 00:05:28.128961 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 00:05:28.247592 sshd[4336]: Connection closed by 10.0.0.1 port 38236
Sep 4 00:05:28.247957 sshd-session[4334]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:28.252752 systemd[1]: sshd@23-10.0.0.105:22-10.0.0.1:38236.service: Deactivated successfully.
Sep 4 00:05:28.254949 systemd[1]: session-24.scope: Deactivated successfully.
Sep 4 00:05:28.256036 systemd-logind[1569]: Session 24 logged out. Waiting for processes to exit.
Sep 4 00:05:28.257856 systemd-logind[1569]: Removed session 24.
Sep 4 00:05:33.260711 systemd[1]: Started sshd@24-10.0.0.105:22-10.0.0.1:44514.service - OpenSSH per-connection server daemon (10.0.0.1:44514).
Sep 4 00:05:33.314481 sshd[4349]: Accepted publickey for core from 10.0.0.1 port 44514 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:05:33.316023 sshd-session[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:33.320143 systemd-logind[1569]: New session 25 of user core.
Sep 4 00:05:33.326810 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 4 00:05:33.432248 sshd[4351]: Connection closed by 10.0.0.1 port 44514
Sep 4 00:05:33.432565 sshd-session[4349]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:33.435726 systemd[1]: sshd@24-10.0.0.105:22-10.0.0.1:44514.service: Deactivated successfully.
Sep 4 00:05:33.438046 systemd[1]: session-25.scope: Deactivated successfully.
Sep 4 00:05:33.439709 systemd-logind[1569]: Session 25 logged out. Waiting for processes to exit.
Sep 4 00:05:33.441396 systemd-logind[1569]: Removed session 25.
Sep 4 00:05:35.884242 kubelet[2783]: E0904 00:05:35.884187 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:05:35.884832 kubelet[2783]: E0904 00:05:35.884392 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:05:38.450292 systemd[1]: Started sshd@25-10.0.0.105:22-10.0.0.1:44530.service - OpenSSH per-connection server daemon (10.0.0.1:44530).
Sep 4 00:05:38.510417 sshd[4364]: Accepted publickey for core from 10.0.0.1 port 44530 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:05:38.511967 sshd-session[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:38.516699 systemd-logind[1569]: New session 26 of user core.
Sep 4 00:05:38.524865 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 4 00:05:38.636885 sshd[4366]: Connection closed by 10.0.0.1 port 44530
Sep 4 00:05:38.637272 sshd-session[4364]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:38.654291 systemd[1]: sshd@25-10.0.0.105:22-10.0.0.1:44530.service: Deactivated successfully.
Sep 4 00:05:38.656563 systemd[1]: session-26.scope: Deactivated successfully.
Sep 4 00:05:38.657488 systemd-logind[1569]: Session 26 logged out. Waiting for processes to exit.
Sep 4 00:05:38.660784 systemd[1]: Started sshd@26-10.0.0.105:22-10.0.0.1:44536.service - OpenSSH per-connection server daemon (10.0.0.1:44536).
Sep 4 00:05:38.661506 systemd-logind[1569]: Removed session 26.
Sep 4 00:05:38.718613 sshd[4379]: Accepted publickey for core from 10.0.0.1 port 44536 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:05:38.720142 sshd-session[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:38.725041 systemd-logind[1569]: New session 27 of user core.
Sep 4 00:05:38.734839 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 4 00:05:40.163940 containerd[1585]: time="2025-09-04T00:05:40.163875791Z" level=info msg="StopContainer for \"2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189\" with timeout 30 (s)"
Sep 4 00:05:40.172426 containerd[1585]: time="2025-09-04T00:05:40.172387303Z" level=info msg="Stop container \"2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189\" with signal terminated"
Sep 4 00:05:40.187009 systemd[1]: cri-containerd-2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189.scope: Deactivated successfully.
Sep 4 00:05:40.188920 containerd[1585]: time="2025-09-04T00:05:40.188868062Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189\" id:\"2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189\" pid:3190 exited_at:{seconds:1756944340 nanos:188358608}"
Sep 4 00:05:40.188984 containerd[1585]: time="2025-09-04T00:05:40.188929017Z" level=info msg="received exit event container_id:\"2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189\" id:\"2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189\" pid:3190 exited_at:{seconds:1756944340 nanos:188358608}"
Sep 4 00:05:40.204854 containerd[1585]: time="2025-09-04T00:05:40.204797919Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c\" id:\"5ec0999af0e313d2e94fdea5a0ec1f99f278815a9e3206164a28bffce94c7d94\" pid:4408 exited_at:{seconds:1756944340 nanos:204437576}"
Sep 4 00:05:40.205810 containerd[1585]: time="2025-09-04T00:05:40.205376744Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 00:05:40.211753 containerd[1585]: time="2025-09-04T00:05:40.211699224Z" level=info msg="StopContainer for \"28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c\" with timeout 2 (s)"
Sep 4 00:05:40.212370 containerd[1585]: time="2025-09-04T00:05:40.212335047Z" level=info msg="Stop container \"28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c\" with signal terminated"
Sep 4 00:05:40.217005 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189-rootfs.mount: Deactivated successfully.
Sep 4 00:05:40.222794 systemd-networkd[1492]: lxc_health: Link DOWN
Sep 4 00:05:40.222943 systemd-networkd[1492]: lxc_health: Lost carrier
Sep 4 00:05:40.244350 systemd[1]: cri-containerd-28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c.scope: Deactivated successfully.
Sep 4 00:05:40.244814 systemd[1]: cri-containerd-28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c.scope: Consumed 7.277s CPU time, 126.4M memory peak, 156K read from disk, 13.3M written to disk.
Sep 4 00:05:40.246610 containerd[1585]: time="2025-09-04T00:05:40.246549447Z" level=info msg="received exit event container_id:\"28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c\" id:\"28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c\" pid:3439 exited_at:{seconds:1756944340 nanos:246330592}"
Sep 4 00:05:40.246990 containerd[1585]: time="2025-09-04T00:05:40.246652813Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c\" id:\"28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c\" pid:3439 exited_at:{seconds:1756944340 nanos:246330592}"
Sep 4 00:05:40.261930 containerd[1585]: time="2025-09-04T00:05:40.261773327Z" level=info msg="StopContainer for \"2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189\" returns successfully"
Sep 4 00:05:40.262635 containerd[1585]: time="2025-09-04T00:05:40.262606765Z" level=info msg="StopPodSandbox for \"9359e9ffae6c20788585f071e093b5393603bbf865f8178e8b695544aa1c8078\""
Sep 4 00:05:40.262739 containerd[1585]: time="2025-09-04T00:05:40.262675575Z" level=info msg="Container to stop \"2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 00:05:40.270463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c-rootfs.mount: Deactivated successfully.
Sep 4 00:05:40.272160 systemd[1]: cri-containerd-9359e9ffae6c20788585f071e093b5393603bbf865f8178e8b695544aa1c8078.scope: Deactivated successfully.
Sep 4 00:05:40.273855 containerd[1585]: time="2025-09-04T00:05:40.273228612Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9359e9ffae6c20788585f071e093b5393603bbf865f8178e8b695544aa1c8078\" id:\"9359e9ffae6c20788585f071e093b5393603bbf865f8178e8b695544aa1c8078\" pid:2941 exit_status:137 exited_at:{seconds:1756944340 nanos:272924968}"
Sep 4 00:05:40.303213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9359e9ffae6c20788585f071e093b5393603bbf865f8178e8b695544aa1c8078-rootfs.mount: Deactivated successfully.
Sep 4 00:05:40.369133 containerd[1585]: time="2025-09-04T00:05:40.368713242Z" level=info msg="shim disconnected" id=9359e9ffae6c20788585f071e093b5393603bbf865f8178e8b695544aa1c8078 namespace=k8s.io
Sep 4 00:05:40.369133 containerd[1585]: time="2025-09-04T00:05:40.368847515Z" level=warning msg="cleaning up after shim disconnected" id=9359e9ffae6c20788585f071e093b5393603bbf865f8178e8b695544aa1c8078 namespace=k8s.io
Sep 4 00:05:40.386117 containerd[1585]: time="2025-09-04T00:05:40.368860360Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 00:05:40.386117 containerd[1585]: time="2025-09-04T00:05:40.372197628Z" level=info msg="StopContainer for \"28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c\" returns successfully"
Sep 4 00:05:40.386741 containerd[1585]: time="2025-09-04T00:05:40.386663662Z" level=info msg="StopPodSandbox for \"fb3c47dc082ccfa7131edd0315781e7fe371e9738a2683ce1a24c45396f12845\""
Sep 4 00:05:40.386889 containerd[1585]: time="2025-09-04T00:05:40.386775694Z" level=info msg="Container to stop \"46159e8b879a073bb09a0832cf632300d2115b260c6c0bf909519b85445ce702\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 00:05:40.386889 containerd[1585]: time="2025-09-04T00:05:40.386796164Z" level=info msg="Container to stop \"0e608243cf72d74d5223cf55dbc43ba295b1afc245e34ae30cd4fd1b8004de5f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 00:05:40.386889 containerd[1585]: time="2025-09-04T00:05:40.386809048Z" level=info msg="Container to stop \"1faa226bc8e93b513a1c61aad39b3d2b363047343f77ed77cbd052a4f4dafbe4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 00:05:40.386889 containerd[1585]: time="2025-09-04T00:05:40.386822864Z" level=info msg="Container to stop \"28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 00:05:40.386889 containerd[1585]: time="2025-09-04T00:05:40.386833204Z" level=info msg="Container to stop \"365bcb6b53d69950ab0567f9010d25157d6d07224c485acdd12a5ef236815654\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 00:05:40.396218 systemd[1]: cri-containerd-fb3c47dc082ccfa7131edd0315781e7fe371e9738a2683ce1a24c45396f12845.scope: Deactivated successfully.
Sep 4 00:05:40.421948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb3c47dc082ccfa7131edd0315781e7fe371e9738a2683ce1a24c45396f12845-rootfs.mount: Deactivated successfully.
Sep 4 00:05:40.428247 containerd[1585]: time="2025-09-04T00:05:40.428206365Z" level=info msg="shim disconnected" id=fb3c47dc082ccfa7131edd0315781e7fe371e9738a2683ce1a24c45396f12845 namespace=k8s.io
Sep 4 00:05:40.428398 containerd[1585]: time="2025-09-04T00:05:40.428249868Z" level=warning msg="cleaning up after shim disconnected" id=fb3c47dc082ccfa7131edd0315781e7fe371e9738a2683ce1a24c45396f12845 namespace=k8s.io
Sep 4 00:05:40.428398 containerd[1585]: time="2025-09-04T00:05:40.428261099Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 00:05:40.431832 containerd[1585]: time="2025-09-04T00:05:40.431792634Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb3c47dc082ccfa7131edd0315781e7fe371e9738a2683ce1a24c45396f12845\" id:\"fb3c47dc082ccfa7131edd0315781e7fe371e9738a2683ce1a24c45396f12845\" pid:2997 exit_status:137 exited_at:{seconds:1756944340 nanos:396092943}"
Sep 4 00:05:40.434532 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb3c47dc082ccfa7131edd0315781e7fe371e9738a2683ce1a24c45396f12845-shm.mount: Deactivated successfully.
Sep 4 00:05:40.434671 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9359e9ffae6c20788585f071e093b5393603bbf865f8178e8b695544aa1c8078-shm.mount: Deactivated successfully.
Sep 4 00:05:40.448736 containerd[1585]: time="2025-09-04T00:05:40.448421544Z" level=info msg="TearDown network for sandbox \"fb3c47dc082ccfa7131edd0315781e7fe371e9738a2683ce1a24c45396f12845\" successfully"
Sep 4 00:05:40.448736 containerd[1585]: time="2025-09-04T00:05:40.448467752Z" level=info msg="StopPodSandbox for \"fb3c47dc082ccfa7131edd0315781e7fe371e9738a2683ce1a24c45396f12845\" returns successfully"
Sep 4 00:05:40.449414 containerd[1585]: time="2025-09-04T00:05:40.449359129Z" level=info msg="TearDown network for sandbox \"9359e9ffae6c20788585f071e093b5393603bbf865f8178e8b695544aa1c8078\" successfully"
Sep 4 00:05:40.449414 containerd[1585]: time="2025-09-04T00:05:40.449389747Z" level=info msg="StopPodSandbox for \"9359e9ffae6c20788585f071e093b5393603bbf865f8178e8b695544aa1c8078\" returns successfully"
Sep 4 00:05:40.455542 containerd[1585]: time="2025-09-04T00:05:40.455486689Z" level=info msg="received exit event sandbox_id:\"fb3c47dc082ccfa7131edd0315781e7fe371e9738a2683ce1a24c45396f12845\" exit_status:137 exited_at:{seconds:1756944340 nanos:396092943}"
Sep 4 00:05:40.455924 containerd[1585]: time="2025-09-04T00:05:40.455835599Z" level=info msg="received exit event sandbox_id:\"9359e9ffae6c20788585f071e093b5393603bbf865f8178e8b695544aa1c8078\" exit_status:137 exited_at:{seconds:1756944340 nanos:272924968}"
Sep 4 00:05:40.540438 kubelet[2783]: I0904 00:05:40.540350 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-cni-path\") pod \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") "
Sep 4 00:05:40.540438 kubelet[2783]: I0904 00:05:40.540431 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-bpf-maps\") pod \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") "
Sep 4 00:05:40.541086 kubelet[2783]: I0904 00:05:40.540464 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-cilium-run\") pod \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") "
Sep 4 00:05:40.541086 kubelet[2783]: I0904 00:05:40.540474 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-cni-path" (OuterVolumeSpecName: "cni-path") pod "ab33ad0f-9561-4a15-a7bd-d964794c3b10" (UID: "ab33ad0f-9561-4a15-a7bd-d964794c3b10"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 00:05:40.541086 kubelet[2783]: I0904 00:05:40.540500 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sct69\" (UniqueName: \"kubernetes.io/projected/5bb845bd-a327-4339-bbb4-cf32dba7a170-kube-api-access-sct69\") pod \"5bb845bd-a327-4339-bbb4-cf32dba7a170\" (UID: \"5bb845bd-a327-4339-bbb4-cf32dba7a170\") "
Sep 4 00:05:40.541086 kubelet[2783]: I0904 00:05:40.540536 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab33ad0f-9561-4a15-a7bd-d964794c3b10-cilium-config-path\") pod \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") "
Sep 4 00:05:40.541086 kubelet[2783]: I0904 00:05:40.540548 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ab33ad0f-9561-4a15-a7bd-d964794c3b10" (UID: "ab33ad0f-9561-4a15-a7bd-d964794c3b10"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 00:05:40.541086 kubelet[2783]: I0904 00:05:40.540554 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-lib-modules\") pod \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") "
Sep 4 00:05:40.541243 kubelet[2783]: I0904 00:05:40.540585 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ab33ad0f-9561-4a15-a7bd-d964794c3b10" (UID: "ab33ad0f-9561-4a15-a7bd-d964794c3b10"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 00:05:40.541243 kubelet[2783]: I0904 00:05:40.540592 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-xtables-lock\") pod \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") "
Sep 4 00:05:40.541243 kubelet[2783]: I0904 00:05:40.540616 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-host-proc-sys-kernel\") pod \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") "
Sep 4 00:05:40.541243 kubelet[2783]: I0904 00:05:40.540647 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-etc-cni-netd\") pod \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") "
Sep 4 00:05:40.541243 kubelet[2783]: I0904 00:05:40.540671 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl7nt\" (UniqueName: \"kubernetes.io/projected/ab33ad0f-9561-4a15-a7bd-d964794c3b10-kube-api-access-xl7nt\") pod \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") "
Sep 4 00:05:40.541243 kubelet[2783]: I0904 00:05:40.540746 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab33ad0f-9561-4a15-a7bd-d964794c3b10-clustermesh-secrets\") pod \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") "
Sep 4 00:05:40.541416 kubelet[2783]: I0904 00:05:40.540772 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5bb845bd-a327-4339-bbb4-cf32dba7a170-cilium-config-path\") pod \"5bb845bd-a327-4339-bbb4-cf32dba7a170\" (UID: \"5bb845bd-a327-4339-bbb4-cf32dba7a170\") "
Sep 4 00:05:40.541416 kubelet[2783]: I0904 00:05:40.540794 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-host-proc-sys-net\") pod \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") "
Sep 4 00:05:40.541416 kubelet[2783]: I0904 00:05:40.540815 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-hostproc\") pod \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") "
Sep 4 00:05:40.541416 kubelet[2783]: I0904 00:05:40.540838 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab33ad0f-9561-4a15-a7bd-d964794c3b10-hubble-tls\") pod \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") "
Sep 4 00:05:40.541416 kubelet[2783]: I0904 00:05:40.540858 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-cilium-cgroup\") pod \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\" (UID: \"ab33ad0f-9561-4a15-a7bd-d964794c3b10\") "
Sep 4 00:05:40.541416 kubelet[2783]: I0904 00:05:40.540905 2783 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 4 00:05:40.541416 kubelet[2783]: I0904 00:05:40.540919 2783 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 4 00:05:40.541579 kubelet[2783]: I0904 00:05:40.540932 2783 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 4 00:05:40.541579 kubelet[2783]: I0904 00:05:40.540594 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ab33ad0f-9561-4a15-a7bd-d964794c3b10" (UID: "ab33ad0f-9561-4a15-a7bd-d964794c3b10"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 00:05:40.541579 kubelet[2783]: I0904 00:05:40.540640 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ab33ad0f-9561-4a15-a7bd-d964794c3b10" (UID: "ab33ad0f-9561-4a15-a7bd-d964794c3b10"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 00:05:40.541579 kubelet[2783]: I0904 00:05:40.540957 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ab33ad0f-9561-4a15-a7bd-d964794c3b10" (UID: "ab33ad0f-9561-4a15-a7bd-d964794c3b10"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 00:05:40.541579 kubelet[2783]: I0904 00:05:40.540971 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ab33ad0f-9561-4a15-a7bd-d964794c3b10" (UID: "ab33ad0f-9561-4a15-a7bd-d964794c3b10"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 00:05:40.541749 kubelet[2783]: I0904 00:05:40.541021 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ab33ad0f-9561-4a15-a7bd-d964794c3b10" (UID: "ab33ad0f-9561-4a15-a7bd-d964794c3b10"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 00:05:40.541749 kubelet[2783]: I0904 00:05:40.541495 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ab33ad0f-9561-4a15-a7bd-d964794c3b10" (UID: "ab33ad0f-9561-4a15-a7bd-d964794c3b10"). InnerVolumeSpecName "host-proc-sys-net".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 00:05:40.543710 kubelet[2783]: I0904 00:05:40.542731 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-hostproc" (OuterVolumeSpecName: "hostproc") pod "ab33ad0f-9561-4a15-a7bd-d964794c3b10" (UID: "ab33ad0f-9561-4a15-a7bd-d964794c3b10"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 00:05:40.545548 kubelet[2783]: I0904 00:05:40.545441 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab33ad0f-9561-4a15-a7bd-d964794c3b10-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ab33ad0f-9561-4a15-a7bd-d964794c3b10" (UID: "ab33ad0f-9561-4a15-a7bd-d964794c3b10"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 00:05:40.546841 kubelet[2783]: I0904 00:05:40.546788 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bb845bd-a327-4339-bbb4-cf32dba7a170-kube-api-access-sct69" (OuterVolumeSpecName: "kube-api-access-sct69") pod "5bb845bd-a327-4339-bbb4-cf32dba7a170" (UID: "5bb845bd-a327-4339-bbb4-cf32dba7a170"). InnerVolumeSpecName "kube-api-access-sct69". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 00:05:40.547494 kubelet[2783]: I0904 00:05:40.547464 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab33ad0f-9561-4a15-a7bd-d964794c3b10-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ab33ad0f-9561-4a15-a7bd-d964794c3b10" (UID: "ab33ad0f-9561-4a15-a7bd-d964794c3b10"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 00:05:40.548614 kubelet[2783]: I0904 00:05:40.548536 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab33ad0f-9561-4a15-a7bd-d964794c3b10-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ab33ad0f-9561-4a15-a7bd-d964794c3b10" (UID: "ab33ad0f-9561-4a15-a7bd-d964794c3b10"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 00:05:40.549266 kubelet[2783]: I0904 00:05:40.549234 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab33ad0f-9561-4a15-a7bd-d964794c3b10-kube-api-access-xl7nt" (OuterVolumeSpecName: "kube-api-access-xl7nt") pod "ab33ad0f-9561-4a15-a7bd-d964794c3b10" (UID: "ab33ad0f-9561-4a15-a7bd-d964794c3b10"). InnerVolumeSpecName "kube-api-access-xl7nt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 00:05:40.549772 kubelet[2783]: I0904 00:05:40.549716 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bb845bd-a327-4339-bbb4-cf32dba7a170-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5bb845bd-a327-4339-bbb4-cf32dba7a170" (UID: "5bb845bd-a327-4339-bbb4-cf32dba7a170"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 00:05:40.641909 kubelet[2783]: I0904 00:05:40.641826 2783 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 4 00:05:40.641909 kubelet[2783]: I0904 00:05:40.641872 2783 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab33ad0f-9561-4a15-a7bd-d964794c3b10-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 4 00:05:40.641909 kubelet[2783]: I0904 00:05:40.641880 2783 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 4 00:05:40.641909 kubelet[2783]: I0904 00:05:40.641889 2783 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 4 00:05:40.641909 kubelet[2783]: I0904 00:05:40.641897 2783 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 4 00:05:40.641909 kubelet[2783]: I0904 00:05:40.641907 2783 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sct69\" (UniqueName: \"kubernetes.io/projected/5bb845bd-a327-4339-bbb4-cf32dba7a170-kube-api-access-sct69\") on node \"localhost\" DevicePath \"\"" Sep 4 00:05:40.641909 kubelet[2783]: I0904 00:05:40.641918 2783 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab33ad0f-9561-4a15-a7bd-d964794c3b10-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 00:05:40.641909 kubelet[2783]: I0904 00:05:40.641926 
2783 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 4 00:05:40.642340 kubelet[2783]: I0904 00:05:40.641934 2783 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 4 00:05:40.642340 kubelet[2783]: I0904 00:05:40.641944 2783 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab33ad0f-9561-4a15-a7bd-d964794c3b10-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 4 00:05:40.642340 kubelet[2783]: I0904 00:05:40.641969 2783 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xl7nt\" (UniqueName: \"kubernetes.io/projected/ab33ad0f-9561-4a15-a7bd-d964794c3b10-kube-api-access-xl7nt\") on node \"localhost\" DevicePath \"\"" Sep 4 00:05:40.642340 kubelet[2783]: I0904 00:05:40.641978 2783 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab33ad0f-9561-4a15-a7bd-d964794c3b10-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 4 00:05:40.642340 kubelet[2783]: I0904 00:05:40.641986 2783 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5bb845bd-a327-4339-bbb4-cf32dba7a170-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 00:05:40.893024 systemd[1]: Removed slice kubepods-burstable-podab33ad0f_9561_4a15_a7bd_d964794c3b10.slice - libcontainer container kubepods-burstable-podab33ad0f_9561_4a15_a7bd_d964794c3b10.slice. Sep 4 00:05:40.893164 systemd[1]: kubepods-burstable-podab33ad0f_9561_4a15_a7bd_d964794c3b10.slice: Consumed 7.400s CPU time, 126.7M memory peak, 160K read from disk, 13.3M written to disk. 
Sep 4 00:05:40.894388 systemd[1]: Removed slice kubepods-besteffort-pod5bb845bd_a327_4339_bbb4_cf32dba7a170.slice - libcontainer container kubepods-besteffort-pod5bb845bd_a327_4339_bbb4_cf32dba7a170.slice. Sep 4 00:05:41.125948 kubelet[2783]: I0904 00:05:41.125909 2783 scope.go:117] "RemoveContainer" containerID="2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189" Sep 4 00:05:41.127734 containerd[1585]: time="2025-09-04T00:05:41.127333879Z" level=info msg="RemoveContainer for \"2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189\"" Sep 4 00:05:41.131988 containerd[1585]: time="2025-09-04T00:05:41.131961898Z" level=info msg="RemoveContainer for \"2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189\" returns successfully" Sep 4 00:05:41.137452 kubelet[2783]: I0904 00:05:41.137406 2783 scope.go:117] "RemoveContainer" containerID="2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189" Sep 4 00:05:41.137738 containerd[1585]: time="2025-09-04T00:05:41.137669000Z" level=error msg="ContainerStatus for \"2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189\": not found" Sep 4 00:05:41.142003 kubelet[2783]: E0904 00:05:41.141967 2783 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189\": not found" containerID="2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189" Sep 4 00:05:41.142105 kubelet[2783]: I0904 00:05:41.142010 2783 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189"} err="failed to get container status 
\"2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189\": rpc error: code = NotFound desc = an error occurred when try to find container \"2096f12572e44511e23b73e94067bde23b7b745fe946eb0e3fc6f8dbf56e0189\": not found" Sep 4 00:05:41.142153 kubelet[2783]: I0904 00:05:41.142107 2783 scope.go:117] "RemoveContainer" containerID="28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c" Sep 4 00:05:41.149522 containerd[1585]: time="2025-09-04T00:05:41.149400242Z" level=info msg="RemoveContainer for \"28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c\"" Sep 4 00:05:41.160180 containerd[1585]: time="2025-09-04T00:05:41.159997490Z" level=info msg="RemoveContainer for \"28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c\" returns successfully" Sep 4 00:05:41.160674 kubelet[2783]: I0904 00:05:41.160560 2783 scope.go:117] "RemoveContainer" containerID="1faa226bc8e93b513a1c61aad39b3d2b363047343f77ed77cbd052a4f4dafbe4" Sep 4 00:05:41.169786 containerd[1585]: time="2025-09-04T00:05:41.169724910Z" level=info msg="RemoveContainer for \"1faa226bc8e93b513a1c61aad39b3d2b363047343f77ed77cbd052a4f4dafbe4\"" Sep 4 00:05:41.175208 containerd[1585]: time="2025-09-04T00:05:41.175174614Z" level=info msg="RemoveContainer for \"1faa226bc8e93b513a1c61aad39b3d2b363047343f77ed77cbd052a4f4dafbe4\" returns successfully" Sep 4 00:05:41.175462 kubelet[2783]: I0904 00:05:41.175437 2783 scope.go:117] "RemoveContainer" containerID="0e608243cf72d74d5223cf55dbc43ba295b1afc245e34ae30cd4fd1b8004de5f" Sep 4 00:05:41.177986 containerd[1585]: time="2025-09-04T00:05:41.177958974Z" level=info msg="RemoveContainer for \"0e608243cf72d74d5223cf55dbc43ba295b1afc245e34ae30cd4fd1b8004de5f\"" Sep 4 00:05:41.182791 containerd[1585]: time="2025-09-04T00:05:41.182758678Z" level=info msg="RemoveContainer for \"0e608243cf72d74d5223cf55dbc43ba295b1afc245e34ae30cd4fd1b8004de5f\" returns successfully" Sep 4 00:05:41.182911 kubelet[2783]: I0904 00:05:41.182888 2783 scope.go:117] 
"RemoveContainer" containerID="46159e8b879a073bb09a0832cf632300d2115b260c6c0bf909519b85445ce702" Sep 4 00:05:41.184116 containerd[1585]: time="2025-09-04T00:05:41.184092102Z" level=info msg="RemoveContainer for \"46159e8b879a073bb09a0832cf632300d2115b260c6c0bf909519b85445ce702\"" Sep 4 00:05:41.188368 containerd[1585]: time="2025-09-04T00:05:41.188332077Z" level=info msg="RemoveContainer for \"46159e8b879a073bb09a0832cf632300d2115b260c6c0bf909519b85445ce702\" returns successfully" Sep 4 00:05:41.188522 kubelet[2783]: I0904 00:05:41.188484 2783 scope.go:117] "RemoveContainer" containerID="365bcb6b53d69950ab0567f9010d25157d6d07224c485acdd12a5ef236815654" Sep 4 00:05:41.190049 containerd[1585]: time="2025-09-04T00:05:41.189617791Z" level=info msg="RemoveContainer for \"365bcb6b53d69950ab0567f9010d25157d6d07224c485acdd12a5ef236815654\"" Sep 4 00:05:41.193305 containerd[1585]: time="2025-09-04T00:05:41.193265785Z" level=info msg="RemoveContainer for \"365bcb6b53d69950ab0567f9010d25157d6d07224c485acdd12a5ef236815654\" returns successfully" Sep 4 00:05:41.193434 kubelet[2783]: I0904 00:05:41.193408 2783 scope.go:117] "RemoveContainer" containerID="28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c" Sep 4 00:05:41.193617 containerd[1585]: time="2025-09-04T00:05:41.193572725Z" level=error msg="ContainerStatus for \"28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c\": not found" Sep 4 00:05:41.193803 kubelet[2783]: E0904 00:05:41.193731 2783 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c\": not found" containerID="28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c" Sep 4 00:05:41.193803 kubelet[2783]: I0904 
00:05:41.193759 2783 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c"} err="failed to get container status \"28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c\": rpc error: code = NotFound desc = an error occurred when try to find container \"28fd7a42119aeffa58f02050c4ef84e8dc72e11c74523b63a5cb7a5b5658658c\": not found" Sep 4 00:05:41.193803 kubelet[2783]: I0904 00:05:41.193790 2783 scope.go:117] "RemoveContainer" containerID="1faa226bc8e93b513a1c61aad39b3d2b363047343f77ed77cbd052a4f4dafbe4" Sep 4 00:05:41.193964 containerd[1585]: time="2025-09-04T00:05:41.193932767Z" level=error msg="ContainerStatus for \"1faa226bc8e93b513a1c61aad39b3d2b363047343f77ed77cbd052a4f4dafbe4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1faa226bc8e93b513a1c61aad39b3d2b363047343f77ed77cbd052a4f4dafbe4\": not found" Sep 4 00:05:41.194036 kubelet[2783]: E0904 00:05:41.194016 2783 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1faa226bc8e93b513a1c61aad39b3d2b363047343f77ed77cbd052a4f4dafbe4\": not found" containerID="1faa226bc8e93b513a1c61aad39b3d2b363047343f77ed77cbd052a4f4dafbe4" Sep 4 00:05:41.194072 kubelet[2783]: I0904 00:05:41.194040 2783 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1faa226bc8e93b513a1c61aad39b3d2b363047343f77ed77cbd052a4f4dafbe4"} err="failed to get container status \"1faa226bc8e93b513a1c61aad39b3d2b363047343f77ed77cbd052a4f4dafbe4\": rpc error: code = NotFound desc = an error occurred when try to find container \"1faa226bc8e93b513a1c61aad39b3d2b363047343f77ed77cbd052a4f4dafbe4\": not found" Sep 4 00:05:41.194072 kubelet[2783]: I0904 00:05:41.194062 2783 scope.go:117] "RemoveContainer" 
containerID="0e608243cf72d74d5223cf55dbc43ba295b1afc245e34ae30cd4fd1b8004de5f" Sep 4 00:05:41.194268 containerd[1585]: time="2025-09-04T00:05:41.194224940Z" level=error msg="ContainerStatus for \"0e608243cf72d74d5223cf55dbc43ba295b1afc245e34ae30cd4fd1b8004de5f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e608243cf72d74d5223cf55dbc43ba295b1afc245e34ae30cd4fd1b8004de5f\": not found" Sep 4 00:05:41.194409 kubelet[2783]: E0904 00:05:41.194381 2783 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0e608243cf72d74d5223cf55dbc43ba295b1afc245e34ae30cd4fd1b8004de5f\": not found" containerID="0e608243cf72d74d5223cf55dbc43ba295b1afc245e34ae30cd4fd1b8004de5f" Sep 4 00:05:41.194469 kubelet[2783]: I0904 00:05:41.194411 2783 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0e608243cf72d74d5223cf55dbc43ba295b1afc245e34ae30cd4fd1b8004de5f"} err="failed to get container status \"0e608243cf72d74d5223cf55dbc43ba295b1afc245e34ae30cd4fd1b8004de5f\": rpc error: code = NotFound desc = an error occurred when try to find container \"0e608243cf72d74d5223cf55dbc43ba295b1afc245e34ae30cd4fd1b8004de5f\": not found" Sep 4 00:05:41.194469 kubelet[2783]: I0904 00:05:41.194430 2783 scope.go:117] "RemoveContainer" containerID="46159e8b879a073bb09a0832cf632300d2115b260c6c0bf909519b85445ce702" Sep 4 00:05:41.194619 containerd[1585]: time="2025-09-04T00:05:41.194585532Z" level=error msg="ContainerStatus for \"46159e8b879a073bb09a0832cf632300d2115b260c6c0bf909519b85445ce702\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"46159e8b879a073bb09a0832cf632300d2115b260c6c0bf909519b85445ce702\": not found" Sep 4 00:05:41.194769 kubelet[2783]: E0904 00:05:41.194739 2783 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"46159e8b879a073bb09a0832cf632300d2115b260c6c0bf909519b85445ce702\": not found" containerID="46159e8b879a073bb09a0832cf632300d2115b260c6c0bf909519b85445ce702" Sep 4 00:05:41.194769 kubelet[2783]: I0904 00:05:41.194763 2783 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"46159e8b879a073bb09a0832cf632300d2115b260c6c0bf909519b85445ce702"} err="failed to get container status \"46159e8b879a073bb09a0832cf632300d2115b260c6c0bf909519b85445ce702\": rpc error: code = NotFound desc = an error occurred when try to find container \"46159e8b879a073bb09a0832cf632300d2115b260c6c0bf909519b85445ce702\": not found" Sep 4 00:05:41.194862 kubelet[2783]: I0904 00:05:41.194777 2783 scope.go:117] "RemoveContainer" containerID="365bcb6b53d69950ab0567f9010d25157d6d07224c485acdd12a5ef236815654" Sep 4 00:05:41.194941 containerd[1585]: time="2025-09-04T00:05:41.194910467Z" level=error msg="ContainerStatus for \"365bcb6b53d69950ab0567f9010d25157d6d07224c485acdd12a5ef236815654\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"365bcb6b53d69950ab0567f9010d25157d6d07224c485acdd12a5ef236815654\": not found" Sep 4 00:05:41.195053 kubelet[2783]: E0904 00:05:41.195028 2783 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"365bcb6b53d69950ab0567f9010d25157d6d07224c485acdd12a5ef236815654\": not found" containerID="365bcb6b53d69950ab0567f9010d25157d6d07224c485acdd12a5ef236815654" Sep 4 00:05:41.195088 kubelet[2783]: I0904 00:05:41.195051 2783 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"365bcb6b53d69950ab0567f9010d25157d6d07224c485acdd12a5ef236815654"} err="failed to get container status \"365bcb6b53d69950ab0567f9010d25157d6d07224c485acdd12a5ef236815654\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"365bcb6b53d69950ab0567f9010d25157d6d07224c485acdd12a5ef236815654\": not found" Sep 4 00:05:41.216652 systemd[1]: var-lib-kubelet-pods-ab33ad0f\x2d9561\x2d4a15\x2da7bd\x2dd964794c3b10-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxl7nt.mount: Deactivated successfully. Sep 4 00:05:41.216782 systemd[1]: var-lib-kubelet-pods-5bb845bd\x2da327\x2d4339\x2dbbb4\x2dcf32dba7a170-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsct69.mount: Deactivated successfully. Sep 4 00:05:41.216865 systemd[1]: var-lib-kubelet-pods-ab33ad0f\x2d9561\x2d4a15\x2da7bd\x2dd964794c3b10-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 4 00:05:41.216948 systemd[1]: var-lib-kubelet-pods-ab33ad0f\x2d9561\x2d4a15\x2da7bd\x2dd964794c3b10-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 4 00:05:41.948462 kubelet[2783]: E0904 00:05:41.948410 2783 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 00:05:42.111357 sshd[4381]: Connection closed by 10.0.0.1 port 44536 Sep 4 00:05:42.111933 sshd-session[4379]: pam_unix(sshd:session): session closed for user core Sep 4 00:05:42.124466 systemd[1]: sshd@26-10.0.0.105:22-10.0.0.1:44536.service: Deactivated successfully. Sep 4 00:05:42.126780 systemd[1]: session-27.scope: Deactivated successfully. Sep 4 00:05:42.127702 systemd-logind[1569]: Session 27 logged out. Waiting for processes to exit. Sep 4 00:05:42.131123 systemd[1]: Started sshd@27-10.0.0.105:22-10.0.0.1:46954.service - OpenSSH per-connection server daemon (10.0.0.1:46954). Sep 4 00:05:42.132034 systemd-logind[1569]: Removed session 27. 
Sep 4 00:05:42.185711 sshd[4532]: Accepted publickey for core from 10.0.0.1 port 46954 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8 Sep 4 00:05:42.187290 sshd-session[4532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:05:42.192087 systemd-logind[1569]: New session 28 of user core. Sep 4 00:05:42.200880 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 4 00:05:42.201560 containerd[1585]: time="2025-09-04T00:05:42.201508109Z" level=info msg="TaskExit event in podsandbox handler exit_status:137 exited_at:{seconds:1756944340 nanos:272924968}" Sep 4 00:05:42.649450 sshd[4535]: Connection closed by 10.0.0.1 port 46954 Sep 4 00:05:42.651369 sshd-session[4532]: pam_unix(sshd:session): session closed for user core Sep 4 00:05:42.660559 systemd[1]: sshd@27-10.0.0.105:22-10.0.0.1:46954.service: Deactivated successfully. Sep 4 00:05:42.665314 systemd[1]: session-28.scope: Deactivated successfully. Sep 4 00:05:42.667753 systemd-logind[1569]: Session 28 logged out. Waiting for processes to exit. Sep 4 00:05:42.673008 systemd[1]: Started sshd@28-10.0.0.105:22-10.0.0.1:46958.service - OpenSSH per-connection server daemon (10.0.0.1:46958). Sep 4 00:05:42.677093 kubelet[2783]: I0904 00:05:42.677038 2783 memory_manager.go:355] "RemoveStaleState removing state" podUID="5bb845bd-a327-4339-bbb4-cf32dba7a170" containerName="cilium-operator" Sep 4 00:05:42.677093 kubelet[2783]: I0904 00:05:42.677077 2783 memory_manager.go:355] "RemoveStaleState removing state" podUID="ab33ad0f-9561-4a15-a7bd-d964794c3b10" containerName="cilium-agent" Sep 4 00:05:42.679966 systemd-logind[1569]: Removed session 28. 
Sep 4 00:05:42.684031 kubelet[2783]: I0904 00:05:42.683981 2783 status_manager.go:890] "Failed to get status for pod" podUID="40ce9682-6bef-47a2-bcac-901cf04d11d7" pod="kube-system/cilium-g7ztc" err="pods \"cilium-g7ztc\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Sep 4 00:05:42.696638 systemd[1]: Created slice kubepods-burstable-pod40ce9682_6bef_47a2_bcac_901cf04d11d7.slice - libcontainer container kubepods-burstable-pod40ce9682_6bef_47a2_bcac_901cf04d11d7.slice. Sep 4 00:05:42.739577 sshd[4548]: Accepted publickey for core from 10.0.0.1 port 46958 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8 Sep 4 00:05:42.740788 sshd-session[4548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:05:42.746838 systemd-logind[1569]: New session 29 of user core. Sep 4 00:05:42.752854 systemd[1]: Started session-29.scope - Session 29 of User core. 
Sep 4 00:05:42.753651 kubelet[2783]: I0904 00:05:42.753611 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/40ce9682-6bef-47a2-bcac-901cf04d11d7-cilium-run\") pod \"cilium-g7ztc\" (UID: \"40ce9682-6bef-47a2-bcac-901cf04d11d7\") " pod="kube-system/cilium-g7ztc" Sep 4 00:05:42.754215 kubelet[2783]: I0904 00:05:42.753741 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/40ce9682-6bef-47a2-bcac-901cf04d11d7-etc-cni-netd\") pod \"cilium-g7ztc\" (UID: \"40ce9682-6bef-47a2-bcac-901cf04d11d7\") " pod="kube-system/cilium-g7ztc" Sep 4 00:05:42.754215 kubelet[2783]: I0904 00:05:42.753766 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40ce9682-6bef-47a2-bcac-901cf04d11d7-xtables-lock\") pod \"cilium-g7ztc\" (UID: \"40ce9682-6bef-47a2-bcac-901cf04d11d7\") " pod="kube-system/cilium-g7ztc" Sep 4 00:05:42.754215 kubelet[2783]: I0904 00:05:42.753818 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/40ce9682-6bef-47a2-bcac-901cf04d11d7-hubble-tls\") pod \"cilium-g7ztc\" (UID: \"40ce9682-6bef-47a2-bcac-901cf04d11d7\") " pod="kube-system/cilium-g7ztc" Sep 4 00:05:42.754215 kubelet[2783]: I0904 00:05:42.753837 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/40ce9682-6bef-47a2-bcac-901cf04d11d7-clustermesh-secrets\") pod \"cilium-g7ztc\" (UID: \"40ce9682-6bef-47a2-bcac-901cf04d11d7\") " pod="kube-system/cilium-g7ztc" Sep 4 00:05:42.754215 kubelet[2783]: I0904 00:05:42.753893 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/40ce9682-6bef-47a2-bcac-901cf04d11d7-cilium-config-path\") pod \"cilium-g7ztc\" (UID: \"40ce9682-6bef-47a2-bcac-901cf04d11d7\") " pod="kube-system/cilium-g7ztc" Sep 4 00:05:42.754215 kubelet[2783]: I0904 00:05:42.753909 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40ce9682-6bef-47a2-bcac-901cf04d11d7-lib-modules\") pod \"cilium-g7ztc\" (UID: \"40ce9682-6bef-47a2-bcac-901cf04d11d7\") " pod="kube-system/cilium-g7ztc" Sep 4 00:05:42.754388 kubelet[2783]: I0904 00:05:42.753927 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/40ce9682-6bef-47a2-bcac-901cf04d11d7-host-proc-sys-net\") pod \"cilium-g7ztc\" (UID: \"40ce9682-6bef-47a2-bcac-901cf04d11d7\") " pod="kube-system/cilium-g7ztc" Sep 4 00:05:42.754388 kubelet[2783]: I0904 00:05:42.753972 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgpkp\" (UniqueName: \"kubernetes.io/projected/40ce9682-6bef-47a2-bcac-901cf04d11d7-kube-api-access-hgpkp\") pod \"cilium-g7ztc\" (UID: \"40ce9682-6bef-47a2-bcac-901cf04d11d7\") " pod="kube-system/cilium-g7ztc" Sep 4 00:05:42.754388 kubelet[2783]: I0904 00:05:42.753992 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/40ce9682-6bef-47a2-bcac-901cf04d11d7-cni-path\") pod \"cilium-g7ztc\" (UID: \"40ce9682-6bef-47a2-bcac-901cf04d11d7\") " pod="kube-system/cilium-g7ztc" Sep 4 00:05:42.754388 kubelet[2783]: I0904 00:05:42.754059 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/40ce9682-6bef-47a2-bcac-901cf04d11d7-cilium-ipsec-secrets\") pod \"cilium-g7ztc\" (UID: \"40ce9682-6bef-47a2-bcac-901cf04d11d7\") " pod="kube-system/cilium-g7ztc" Sep 4 00:05:42.754388 kubelet[2783]: I0904 00:05:42.754077 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/40ce9682-6bef-47a2-bcac-901cf04d11d7-hostproc\") pod \"cilium-g7ztc\" (UID: \"40ce9682-6bef-47a2-bcac-901cf04d11d7\") " pod="kube-system/cilium-g7ztc" Sep 4 00:05:42.754388 kubelet[2783]: I0904 00:05:42.754131 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/40ce9682-6bef-47a2-bcac-901cf04d11d7-bpf-maps\") pod \"cilium-g7ztc\" (UID: \"40ce9682-6bef-47a2-bcac-901cf04d11d7\") " pod="kube-system/cilium-g7ztc" Sep 4 00:05:42.754517 kubelet[2783]: I0904 00:05:42.754149 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/40ce9682-6bef-47a2-bcac-901cf04d11d7-host-proc-sys-kernel\") pod \"cilium-g7ztc\" (UID: \"40ce9682-6bef-47a2-bcac-901cf04d11d7\") " pod="kube-system/cilium-g7ztc" Sep 4 00:05:42.754517 kubelet[2783]: I0904 00:05:42.754189 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/40ce9682-6bef-47a2-bcac-901cf04d11d7-cilium-cgroup\") pod \"cilium-g7ztc\" (UID: \"40ce9682-6bef-47a2-bcac-901cf04d11d7\") " pod="kube-system/cilium-g7ztc" Sep 4 00:05:42.806578 sshd[4551]: Connection closed by 10.0.0.1 port 46958 Sep 4 00:05:42.807002 sshd-session[4548]: pam_unix(sshd:session): session closed for user core Sep 4 00:05:42.825160 systemd[1]: sshd@28-10.0.0.105:22-10.0.0.1:46958.service: Deactivated successfully. 
Sep 4 00:05:42.827192 systemd[1]: session-29.scope: Deactivated successfully.
Sep 4 00:05:42.828200 systemd-logind[1569]: Session 29 logged out. Waiting for processes to exit.
Sep 4 00:05:42.831560 systemd[1]: Started sshd@29-10.0.0.105:22-10.0.0.1:46960.service - OpenSSH per-connection server daemon (10.0.0.1:46960).
Sep 4 00:05:42.832280 systemd-logind[1569]: Removed session 29.
Sep 4 00:05:42.886405 kubelet[2783]: I0904 00:05:42.886356 2783 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bb845bd-a327-4339-bbb4-cf32dba7a170" path="/var/lib/kubelet/pods/5bb845bd-a327-4339-bbb4-cf32dba7a170/volumes"
Sep 4 00:05:42.887068 kubelet[2783]: I0904 00:05:42.887036 2783 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab33ad0f-9561-4a15-a7bd-d964794c3b10" path="/var/lib/kubelet/pods/ab33ad0f-9561-4a15-a7bd-d964794c3b10/volumes"
Sep 4 00:05:42.895767 sshd[4558]: Accepted publickey for core from 10.0.0.1 port 46960 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:05:42.897512 sshd-session[4558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:42.902205 systemd-logind[1569]: New session 30 of user core.
Sep 4 00:05:42.911982 systemd[1]: Started session-30.scope - Session 30 of User core.
Sep 4 00:05:43.005220 kubelet[2783]: E0904 00:05:43.005153 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:05:43.006015 containerd[1585]: time="2025-09-04T00:05:43.005912343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g7ztc,Uid:40ce9682-6bef-47a2-bcac-901cf04d11d7,Namespace:kube-system,Attempt:0,}"
Sep 4 00:05:43.205611 containerd[1585]: time="2025-09-04T00:05:43.205512881Z" level=info msg="connecting to shim f70dc782b94fc3e28d933ffd9878a22a3f9587bedea50aebd974d888b36d5440" address="unix:///run/containerd/s/e07a1acd3f46d5374a670524e2396b7ad5f760a1b63a096bcf7db435f574d2c5" namespace=k8s.io protocol=ttrpc version=3
Sep 4 00:05:43.247031 systemd[1]: Started cri-containerd-f70dc782b94fc3e28d933ffd9878a22a3f9587bedea50aebd974d888b36d5440.scope - libcontainer container f70dc782b94fc3e28d933ffd9878a22a3f9587bedea50aebd974d888b36d5440.
Sep 4 00:05:43.293387 containerd[1585]: time="2025-09-04T00:05:43.293336401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g7ztc,Uid:40ce9682-6bef-47a2-bcac-901cf04d11d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f70dc782b94fc3e28d933ffd9878a22a3f9587bedea50aebd974d888b36d5440\""
Sep 4 00:05:43.294237 kubelet[2783]: E0904 00:05:43.294204 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:05:43.296867 containerd[1585]: time="2025-09-04T00:05:43.296806647Z" level=info msg="CreateContainer within sandbox \"f70dc782b94fc3e28d933ffd9878a22a3f9587bedea50aebd974d888b36d5440\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 00:05:43.308098 containerd[1585]: time="2025-09-04T00:05:43.308010703Z" level=info msg="Container 654787a10db82c7d8000198b1cda81492ae272c8bb00759728348815e644ac0d: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:05:43.319060 containerd[1585]: time="2025-09-04T00:05:43.318987771Z" level=info msg="CreateContainer within sandbox \"f70dc782b94fc3e28d933ffd9878a22a3f9587bedea50aebd974d888b36d5440\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"654787a10db82c7d8000198b1cda81492ae272c8bb00759728348815e644ac0d\""
Sep 4 00:05:43.319705 containerd[1585]: time="2025-09-04T00:05:43.319632871Z" level=info msg="StartContainer for \"654787a10db82c7d8000198b1cda81492ae272c8bb00759728348815e644ac0d\""
Sep 4 00:05:43.320936 containerd[1585]: time="2025-09-04T00:05:43.320901020Z" level=info msg="connecting to shim 654787a10db82c7d8000198b1cda81492ae272c8bb00759728348815e644ac0d" address="unix:///run/containerd/s/e07a1acd3f46d5374a670524e2396b7ad5f760a1b63a096bcf7db435f574d2c5" protocol=ttrpc version=3
Sep 4 00:05:43.359161 systemd[1]: Started cri-containerd-654787a10db82c7d8000198b1cda81492ae272c8bb00759728348815e644ac0d.scope - libcontainer container 654787a10db82c7d8000198b1cda81492ae272c8bb00759728348815e644ac0d.
Sep 4 00:05:43.402536 containerd[1585]: time="2025-09-04T00:05:43.402463264Z" level=info msg="StartContainer for \"654787a10db82c7d8000198b1cda81492ae272c8bb00759728348815e644ac0d\" returns successfully"
Sep 4 00:05:43.413144 systemd[1]: cri-containerd-654787a10db82c7d8000198b1cda81492ae272c8bb00759728348815e644ac0d.scope: Deactivated successfully.
Sep 4 00:05:43.414743 containerd[1585]: time="2025-09-04T00:05:43.414669568Z" level=info msg="TaskExit event in podsandbox handler container_id:\"654787a10db82c7d8000198b1cda81492ae272c8bb00759728348815e644ac0d\" id:\"654787a10db82c7d8000198b1cda81492ae272c8bb00759728348815e644ac0d\" pid:4629 exited_at:{seconds:1756944343 nanos:414276744}"
Sep 4 00:05:43.414873 containerd[1585]: time="2025-09-04T00:05:43.414828207Z" level=info msg="received exit event container_id:\"654787a10db82c7d8000198b1cda81492ae272c8bb00759728348815e644ac0d\" id:\"654787a10db82c7d8000198b1cda81492ae272c8bb00759728348815e644ac0d\" pid:4629 exited_at:{seconds:1756944343 nanos:414276744}"
Sep 4 00:05:44.141772 kubelet[2783]: E0904 00:05:44.141736 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:05:44.143927 containerd[1585]: time="2025-09-04T00:05:44.143875954Z" level=info msg="CreateContainer within sandbox \"f70dc782b94fc3e28d933ffd9878a22a3f9587bedea50aebd974d888b36d5440\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 00:05:44.153097 containerd[1585]: time="2025-09-04T00:05:44.153029998Z" level=info msg="Container a1dd9e58e713362f5ec769c8ff5eaf240d7fd0070d8584302c856ecf0e418444: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:05:44.157401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1337098272.mount: Deactivated successfully.
Sep 4 00:05:44.161901 containerd[1585]: time="2025-09-04T00:05:44.161859379Z" level=info msg="CreateContainer within sandbox \"f70dc782b94fc3e28d933ffd9878a22a3f9587bedea50aebd974d888b36d5440\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a1dd9e58e713362f5ec769c8ff5eaf240d7fd0070d8584302c856ecf0e418444\""
Sep 4 00:05:44.162450 containerd[1585]: time="2025-09-04T00:05:44.162413798Z" level=info msg="StartContainer for \"a1dd9e58e713362f5ec769c8ff5eaf240d7fd0070d8584302c856ecf0e418444\""
Sep 4 00:05:44.163415 containerd[1585]: time="2025-09-04T00:05:44.163383111Z" level=info msg="connecting to shim a1dd9e58e713362f5ec769c8ff5eaf240d7fd0070d8584302c856ecf0e418444" address="unix:///run/containerd/s/e07a1acd3f46d5374a670524e2396b7ad5f760a1b63a096bcf7db435f574d2c5" protocol=ttrpc version=3
Sep 4 00:05:44.183905 systemd[1]: Started cri-containerd-a1dd9e58e713362f5ec769c8ff5eaf240d7fd0070d8584302c856ecf0e418444.scope - libcontainer container a1dd9e58e713362f5ec769c8ff5eaf240d7fd0070d8584302c856ecf0e418444.
Sep 4 00:05:44.219705 containerd[1585]: time="2025-09-04T00:05:44.219645201Z" level=info msg="StartContainer for \"a1dd9e58e713362f5ec769c8ff5eaf240d7fd0070d8584302c856ecf0e418444\" returns successfully"
Sep 4 00:05:44.227189 systemd[1]: cri-containerd-a1dd9e58e713362f5ec769c8ff5eaf240d7fd0070d8584302c856ecf0e418444.scope: Deactivated successfully.
Sep 4 00:05:44.227707 containerd[1585]: time="2025-09-04T00:05:44.227531137Z" level=info msg="received exit event container_id:\"a1dd9e58e713362f5ec769c8ff5eaf240d7fd0070d8584302c856ecf0e418444\" id:\"a1dd9e58e713362f5ec769c8ff5eaf240d7fd0070d8584302c856ecf0e418444\" pid:4674 exited_at:{seconds:1756944344 nanos:227324095}"
Sep 4 00:05:44.227936 containerd[1585]: time="2025-09-04T00:05:44.227850692Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a1dd9e58e713362f5ec769c8ff5eaf240d7fd0070d8584302c856ecf0e418444\" id:\"a1dd9e58e713362f5ec769c8ff5eaf240d7fd0070d8584302c856ecf0e418444\" pid:4674 exited_at:{seconds:1756944344 nanos:227324095}"
Sep 4 00:05:44.250750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1dd9e58e713362f5ec769c8ff5eaf240d7fd0070d8584302c856ecf0e418444-rootfs.mount: Deactivated successfully.
Sep 4 00:05:45.145895 kubelet[2783]: E0904 00:05:45.145847 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:05:45.147574 containerd[1585]: time="2025-09-04T00:05:45.147523801Z" level=info msg="CreateContainer within sandbox \"f70dc782b94fc3e28d933ffd9878a22a3f9587bedea50aebd974d888b36d5440\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 00:05:45.158755 containerd[1585]: time="2025-09-04T00:05:45.157727858Z" level=info msg="Container ead7422b5d4a17be20aeed59a278f603506c126f107569a88afacbf7f7102af2: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:05:45.167825 containerd[1585]: time="2025-09-04T00:05:45.167750813Z" level=info msg="CreateContainer within sandbox \"f70dc782b94fc3e28d933ffd9878a22a3f9587bedea50aebd974d888b36d5440\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ead7422b5d4a17be20aeed59a278f603506c126f107569a88afacbf7f7102af2\""
Sep 4 00:05:45.168402 containerd[1585]: time="2025-09-04T00:05:45.168340768Z" level=info msg="StartContainer for \"ead7422b5d4a17be20aeed59a278f603506c126f107569a88afacbf7f7102af2\""
Sep 4 00:05:45.170173 containerd[1585]: time="2025-09-04T00:05:45.170141074Z" level=info msg="connecting to shim ead7422b5d4a17be20aeed59a278f603506c126f107569a88afacbf7f7102af2" address="unix:///run/containerd/s/e07a1acd3f46d5374a670524e2396b7ad5f760a1b63a096bcf7db435f574d2c5" protocol=ttrpc version=3
Sep 4 00:05:45.194839 systemd[1]: Started cri-containerd-ead7422b5d4a17be20aeed59a278f603506c126f107569a88afacbf7f7102af2.scope - libcontainer container ead7422b5d4a17be20aeed59a278f603506c126f107569a88afacbf7f7102af2.
Sep 4 00:05:45.239360 containerd[1585]: time="2025-09-04T00:05:45.239319659Z" level=info msg="StartContainer for \"ead7422b5d4a17be20aeed59a278f603506c126f107569a88afacbf7f7102af2\" returns successfully"
Sep 4 00:05:45.239649 systemd[1]: cri-containerd-ead7422b5d4a17be20aeed59a278f603506c126f107569a88afacbf7f7102af2.scope: Deactivated successfully.
Sep 4 00:05:45.241568 containerd[1585]: time="2025-09-04T00:05:45.241449547Z" level=info msg="received exit event container_id:\"ead7422b5d4a17be20aeed59a278f603506c126f107569a88afacbf7f7102af2\" id:\"ead7422b5d4a17be20aeed59a278f603506c126f107569a88afacbf7f7102af2\" pid:4718 exited_at:{seconds:1756944345 nanos:241247394}"
Sep 4 00:05:45.241568 containerd[1585]: time="2025-09-04T00:05:45.241498400Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ead7422b5d4a17be20aeed59a278f603506c126f107569a88afacbf7f7102af2\" id:\"ead7422b5d4a17be20aeed59a278f603506c126f107569a88afacbf7f7102af2\" pid:4718 exited_at:{seconds:1756944345 nanos:241247394}"
Sep 4 00:05:45.264303 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ead7422b5d4a17be20aeed59a278f603506c126f107569a88afacbf7f7102af2-rootfs.mount: Deactivated successfully.
Sep 4 00:05:45.884169 kubelet[2783]: E0904 00:05:45.884111 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:05:46.150610 kubelet[2783]: E0904 00:05:46.150462 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:05:46.153426 containerd[1585]: time="2025-09-04T00:05:46.153388036Z" level=info msg="CreateContainer within sandbox \"f70dc782b94fc3e28d933ffd9878a22a3f9587bedea50aebd974d888b36d5440\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 00:05:46.162716 containerd[1585]: time="2025-09-04T00:05:46.162083176Z" level=info msg="Container 102e2b49c82a3583222a4e4d1384d8c0a0cde0839f87c6eb8723af4bf9597a94: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:05:46.174361 containerd[1585]: time="2025-09-04T00:05:46.174302079Z" level=info msg="CreateContainer within sandbox \"f70dc782b94fc3e28d933ffd9878a22a3f9587bedea50aebd974d888b36d5440\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"102e2b49c82a3583222a4e4d1384d8c0a0cde0839f87c6eb8723af4bf9597a94\""
Sep 4 00:05:46.174896 containerd[1585]: time="2025-09-04T00:05:46.174856608Z" level=info msg="StartContainer for \"102e2b49c82a3583222a4e4d1384d8c0a0cde0839f87c6eb8723af4bf9597a94\""
Sep 4 00:05:46.175895 containerd[1585]: time="2025-09-04T00:05:46.175867780Z" level=info msg="connecting to shim 102e2b49c82a3583222a4e4d1384d8c0a0cde0839f87c6eb8723af4bf9597a94" address="unix:///run/containerd/s/e07a1acd3f46d5374a670524e2396b7ad5f760a1b63a096bcf7db435f574d2c5" protocol=ttrpc version=3
Sep 4 00:05:46.197849 systemd[1]: Started cri-containerd-102e2b49c82a3583222a4e4d1384d8c0a0cde0839f87c6eb8723af4bf9597a94.scope - libcontainer container 102e2b49c82a3583222a4e4d1384d8c0a0cde0839f87c6eb8723af4bf9597a94.
Sep 4 00:05:46.226243 systemd[1]: cri-containerd-102e2b49c82a3583222a4e4d1384d8c0a0cde0839f87c6eb8723af4bf9597a94.scope: Deactivated successfully.
Sep 4 00:05:46.227174 containerd[1585]: time="2025-09-04T00:05:46.227133872Z" level=info msg="TaskExit event in podsandbox handler container_id:\"102e2b49c82a3583222a4e4d1384d8c0a0cde0839f87c6eb8723af4bf9597a94\" id:\"102e2b49c82a3583222a4e4d1384d8c0a0cde0839f87c6eb8723af4bf9597a94\" pid:4757 exited_at:{seconds:1756944346 nanos:226416777}"
Sep 4 00:05:46.434724 containerd[1585]: time="2025-09-04T00:05:46.434274068Z" level=info msg="received exit event container_id:\"102e2b49c82a3583222a4e4d1384d8c0a0cde0839f87c6eb8723af4bf9597a94\" id:\"102e2b49c82a3583222a4e4d1384d8c0a0cde0839f87c6eb8723af4bf9597a94\" pid:4757 exited_at:{seconds:1756944346 nanos:226416777}"
Sep 4 00:05:46.442068 containerd[1585]: time="2025-09-04T00:05:46.442023741Z" level=info msg="StartContainer for \"102e2b49c82a3583222a4e4d1384d8c0a0cde0839f87c6eb8723af4bf9597a94\" returns successfully"
Sep 4 00:05:46.454440 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-102e2b49c82a3583222a4e4d1384d8c0a0cde0839f87c6eb8723af4bf9597a94-rootfs.mount: Deactivated successfully.
Sep 4 00:05:46.949278 kubelet[2783]: E0904 00:05:46.949208 2783 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 4 00:05:47.155584 kubelet[2783]: E0904 00:05:47.155538 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:05:47.157391 containerd[1585]: time="2025-09-04T00:05:47.157320113Z" level=info msg="CreateContainer within sandbox \"f70dc782b94fc3e28d933ffd9878a22a3f9587bedea50aebd974d888b36d5440\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 00:05:47.168077 containerd[1585]: time="2025-09-04T00:05:47.167464490Z" level=info msg="Container 0c7a1fdc41ebc8961a8cb2d5837fc1df872be13e57560b2ab7c8a11caaa3385c: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:05:47.179130 containerd[1585]: time="2025-09-04T00:05:47.179073716Z" level=info msg="CreateContainer within sandbox \"f70dc782b94fc3e28d933ffd9878a22a3f9587bedea50aebd974d888b36d5440\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0c7a1fdc41ebc8961a8cb2d5837fc1df872be13e57560b2ab7c8a11caaa3385c\""
Sep 4 00:05:47.179682 containerd[1585]: time="2025-09-04T00:05:47.179635208Z" level=info msg="StartContainer for \"0c7a1fdc41ebc8961a8cb2d5837fc1df872be13e57560b2ab7c8a11caaa3385c\""
Sep 4 00:05:47.180582 containerd[1585]: time="2025-09-04T00:05:47.180536512Z" level=info msg="connecting to shim 0c7a1fdc41ebc8961a8cb2d5837fc1df872be13e57560b2ab7c8a11caaa3385c" address="unix:///run/containerd/s/e07a1acd3f46d5374a670524e2396b7ad5f760a1b63a096bcf7db435f574d2c5" protocol=ttrpc version=3
Sep 4 00:05:47.209844 systemd[1]: Started cri-containerd-0c7a1fdc41ebc8961a8cb2d5837fc1df872be13e57560b2ab7c8a11caaa3385c.scope - libcontainer container 0c7a1fdc41ebc8961a8cb2d5837fc1df872be13e57560b2ab7c8a11caaa3385c.
Sep 4 00:05:47.244962 containerd[1585]: time="2025-09-04T00:05:47.244920034Z" level=info msg="StartContainer for \"0c7a1fdc41ebc8961a8cb2d5837fc1df872be13e57560b2ab7c8a11caaa3385c\" returns successfully"
Sep 4 00:05:47.314655 containerd[1585]: time="2025-09-04T00:05:47.314611111Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c7a1fdc41ebc8961a8cb2d5837fc1df872be13e57560b2ab7c8a11caaa3385c\" id:\"7a65c0e9699f7fbf260a09a6a252b567b2fef503679e4281206eb8d90da61db0\" pid:4825 exited_at:{seconds:1756944347 nanos:313950582}"
Sep 4 00:05:47.675717 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Sep 4 00:05:48.160557 kubelet[2783]: E0904 00:05:48.160520 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:05:48.314287 kubelet[2783]: I0904 00:05:48.314203 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g7ztc" podStartSLOduration=6.314186879 podStartE2EDuration="6.314186879s" podCreationTimestamp="2025-09-04 00:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:05:48.314162443 +0000 UTC m=+91.514625721" watchObservedRunningTime="2025-09-04 00:05:48.314186879 +0000 UTC m=+91.514650147"
Sep 4 00:05:49.162516 kubelet[2783]: E0904 00:05:49.162461 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:05:49.279132 kubelet[2783]: I0904 00:05:49.278988 2783 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-04T00:05:49Z","lastTransitionTime":"2025-09-04T00:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 4 00:05:49.307521 containerd[1585]: time="2025-09-04T00:05:49.307440956Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c7a1fdc41ebc8961a8cb2d5837fc1df872be13e57560b2ab7c8a11caaa3385c\" id:\"d56eeb213a6badd42d0089db38da0f9e1654e82b96fee98326476c68f9b29fc3\" pid:4959 exit_status:1 exited_at:{seconds:1756944349 nanos:307030340}"
Sep 4 00:05:50.886598 kubelet[2783]: E0904 00:05:50.886182 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:05:50.887401 systemd-networkd[1492]: lxc_health: Link UP
Sep 4 00:05:50.889560 systemd-networkd[1492]: lxc_health: Gained carrier
Sep 4 00:05:51.007844 kubelet[2783]: E0904 00:05:51.007182 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:05:51.166793 kubelet[2783]: E0904 00:05:51.166449 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:05:51.468240 containerd[1585]: time="2025-09-04T00:05:51.468163258Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c7a1fdc41ebc8961a8cb2d5837fc1df872be13e57560b2ab7c8a11caaa3385c\" id:\"c771c8fc97ba43a322e3fa03b24362ee124f71e767bcebb32f7eaeff62b711b1\" pid:5355 exited_at:{seconds:1756944351 nanos:467739838}"
Sep 4 00:05:52.167881 kubelet[2783]: E0904 00:05:52.167835 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:05:52.467123 systemd-networkd[1492]: lxc_health: Gained IPv6LL
Sep 4 00:05:53.562368 containerd[1585]: time="2025-09-04T00:05:53.562318536Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c7a1fdc41ebc8961a8cb2d5837fc1df872be13e57560b2ab7c8a11caaa3385c\" id:\"03a6941bb4e8d96c80e28c7698b9fe0392032d944105223bdb508ee9bd1c6d66\" pid:5395 exited_at:{seconds:1756944353 nanos:562010204}"
Sep 4 00:05:53.883774 kubelet[2783]: E0904 00:05:53.883622 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:05:55.652808 containerd[1585]: time="2025-09-04T00:05:55.652753999Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c7a1fdc41ebc8961a8cb2d5837fc1df872be13e57560b2ab7c8a11caaa3385c\" id:\"2cb22192efaf1b75a8b3db202dffcef87b323c45c374fb75cab6b4da22d75568\" pid:5425 exited_at:{seconds:1756944355 nanos:652245269}"
Sep 4 00:05:57.833236 containerd[1585]: time="2025-09-04T00:05:57.833105493Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c7a1fdc41ebc8961a8cb2d5837fc1df872be13e57560b2ab7c8a11caaa3385c\" id:\"bd7341fa97fd4b2ab4fa3b874913b4427a99e4627dc3e35f776bb776b3c45923\" pid:5449 exited_at:{seconds:1756944357 nanos:832734372}"
Sep 4 00:05:57.846451 sshd[4564]: Connection closed by 10.0.0.1 port 46960
Sep 4 00:05:57.846937 sshd-session[4558]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:57.852130 systemd[1]: sshd@29-10.0.0.105:22-10.0.0.1:46960.service: Deactivated successfully.
Sep 4 00:05:57.854109 systemd[1]: session-30.scope: Deactivated successfully.
Sep 4 00:05:57.854881 systemd-logind[1569]: Session 30 logged out. Waiting for processes to exit.
Sep 4 00:05:57.856183 systemd-logind[1569]: Removed session 30.