Oct 28 13:07:07.984195 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 28 11:22:35 -00 2025 Oct 28 13:07:07.984219 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3b5773c335d9782dd41351ceb8da09cfd1ec290db8d35827245f7b6eed48895b Oct 28 13:07:07.984231 kernel: BIOS-provided physical RAM map: Oct 28 13:07:07.984238 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 28 13:07:07.984245 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 28 13:07:07.984252 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 28 13:07:07.984260 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Oct 28 13:07:07.984267 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Oct 28 13:07:07.984276 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 28 13:07:07.984285 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Oct 28 13:07:07.984292 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 28 13:07:07.984299 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 28 13:07:07.984305 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 28 13:07:07.984312 kernel: NX (Execute Disable) protection: active Oct 28 13:07:07.984323 kernel: APIC: Static calls initialized Oct 28 13:07:07.984330 kernel: SMBIOS 2.8 present. 
Oct 28 13:07:07.984340 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Oct 28 13:07:07.984348 kernel: DMI: Memory slots populated: 1/1 Oct 28 13:07:07.984355 kernel: Hypervisor detected: KVM Oct 28 13:07:07.984362 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Oct 28 13:07:07.984370 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 28 13:07:07.984377 kernel: kvm-clock: using sched offset of 3774685242 cycles Oct 28 13:07:07.984385 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 28 13:07:07.984393 kernel: tsc: Detected 2794.748 MHz processor Oct 28 13:07:07.984404 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 28 13:07:07.984412 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 28 13:07:07.984420 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Oct 28 13:07:07.984428 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 28 13:07:07.984436 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 28 13:07:07.984444 kernel: Using GB pages for direct mapping Oct 28 13:07:07.984452 kernel: ACPI: Early table checksum verification disabled Oct 28 13:07:07.984462 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Oct 28 13:07:07.984470 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 13:07:07.984478 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 13:07:07.984485 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 13:07:07.984493 kernel: ACPI: FACS 0x000000009CFE0000 000040 Oct 28 13:07:07.984501 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 13:07:07.984509 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 13:07:07.984519 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 13:07:07.984527 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 13:07:07.984538 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Oct 28 13:07:07.984546 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Oct 28 13:07:07.984554 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Oct 28 13:07:07.984564 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Oct 28 13:07:07.984572 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Oct 28 13:07:07.984580 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Oct 28 13:07:07.984588 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Oct 28 13:07:07.984595 kernel: No NUMA configuration found Oct 28 13:07:07.984603 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Oct 28 13:07:07.984614 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Oct 28 13:07:07.984622 kernel: Zone ranges: Oct 28 13:07:07.984630 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 28 13:07:07.984637 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Oct 28 13:07:07.984645 kernel: Normal empty Oct 28 13:07:07.984653 kernel: Device empty Oct 28 13:07:07.984661 kernel: Movable zone start for each node Oct 28 13:07:07.984668 kernel: Early memory node ranges Oct 28 13:07:07.984678 kernel: node 0: [mem 
0x0000000000001000-0x000000000009efff] Oct 28 13:07:07.984686 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Oct 28 13:07:07.984694 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Oct 28 13:07:07.984702 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 28 13:07:07.984713 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 28 13:07:07.984721 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Oct 28 13:07:07.984735 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 28 13:07:07.984748 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 28 13:07:07.984762 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 28 13:07:07.984772 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 28 13:07:07.984803 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 28 13:07:07.984833 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 28 13:07:07.984845 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 28 13:07:07.984865 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 28 13:07:07.984875 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 28 13:07:07.984888 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 28 13:07:07.984897 kernel: TSC deadline timer available Oct 28 13:07:07.984905 kernel: CPU topo: Max. logical packages: 1 Oct 28 13:07:07.984913 kernel: CPU topo: Max. logical dies: 1 Oct 28 13:07:07.984920 kernel: CPU topo: Max. dies per package: 1 Oct 28 13:07:07.984928 kernel: CPU topo: Max. threads per core: 1 Oct 28 13:07:07.984936 kernel: CPU topo: Num. cores per package: 4 Oct 28 13:07:07.984944 kernel: CPU topo: Num. threads per package: 4 Oct 28 13:07:07.984954 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Oct 28 13:07:07.984962 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 28 13:07:07.984970 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 28 13:07:07.984978 kernel: kvm-guest: setup PV sched yield Oct 28 13:07:07.984986 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Oct 28 13:07:07.984994 kernel: Booting paravirtualized kernel on KVM Oct 28 13:07:07.985003 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 28 13:07:07.985013 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 28 13:07:07.985021 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Oct 28 13:07:07.985029 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Oct 28 13:07:07.985037 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 28 13:07:07.985045 kernel: kvm-guest: PV spinlocks enabled Oct 28 13:07:07.985053 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 28 13:07:07.985062 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3b5773c335d9782dd41351ceb8da09cfd1ec290db8d35827245f7b6eed48895b Oct 28 13:07:07.985073 kernel: random: crng init done Oct 28 13:07:07.985081 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 28 13:07:07.985089 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 28 
13:07:07.985097 kernel: Fallback order for Node 0: 0 Oct 28 13:07:07.985105 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 Oct 28 13:07:07.985113 kernel: Policy zone: DMA32 Oct 28 13:07:07.985121 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 28 13:07:07.985131 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 28 13:07:07.985139 kernel: ftrace: allocating 40092 entries in 157 pages Oct 28 13:07:07.985147 kernel: ftrace: allocated 157 pages with 5 groups Oct 28 13:07:07.985155 kernel: Dynamic Preempt: voluntary Oct 28 13:07:07.985163 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 28 13:07:07.985175 kernel: rcu: RCU event tracing is enabled. Oct 28 13:07:07.985183 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 28 13:07:07.985193 kernel: Trampoline variant of Tasks RCU enabled. Oct 28 13:07:07.985204 kernel: Rude variant of Tasks RCU enabled. Oct 28 13:07:07.985212 kernel: Tracing variant of Tasks RCU enabled. Oct 28 13:07:07.985220 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 28 13:07:07.985228 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 28 13:07:07.985236 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 28 13:07:07.985244 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 28 13:07:07.985252 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 28 13:07:07.985263 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 28 13:07:07.985272 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 28 13:07:07.985286 kernel: Console: colour VGA+ 80x25 Oct 28 13:07:07.985296 kernel: printk: legacy console [ttyS0] enabled Oct 28 13:07:07.985305 kernel: ACPI: Core revision 20240827 Oct 28 13:07:07.985313 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 28 13:07:07.985321 kernel: APIC: Switch to symmetric I/O mode setup Oct 28 13:07:07.985330 kernel: x2apic enabled Oct 28 13:07:07.985338 kernel: APIC: Switched APIC routing to: physical x2apic Oct 28 13:07:07.985351 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 28 13:07:07.985359 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 28 13:07:07.985368 kernel: kvm-guest: setup PV IPIs Oct 28 13:07:07.985376 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 28 13:07:07.985386 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Oct 28 13:07:07.985395 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Oct 28 13:07:07.985403 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 28 13:07:07.985411 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 28 13:07:07.985420 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 28 13:07:07.985428 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 28 13:07:07.985436 kernel: Spectre V2 : Mitigation: Retpolines Oct 28 13:07:07.985447 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 28 13:07:07.985455 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 28 13:07:07.985464 kernel: active return thunk: retbleed_return_thunk Oct 28 13:07:07.985472 kernel: RETBleed: Mitigation: untrained return thunk Oct 28 13:07:07.985480 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 28 13:07:07.985489 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 28 13:07:07.985497 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 28 13:07:07.985508 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 28 13:07:07.985516 kernel: active return thunk: srso_return_thunk Oct 28 13:07:07.985525 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 28 13:07:07.985533 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 28 13:07:07.985541 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 28 13:07:07.985550 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 28 13:07:07.985558 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 28 13:07:07.985569 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 28 13:07:07.985577 kernel: Freeing SMP alternatives memory: 32K Oct 28 13:07:07.985585 kernel: pid_max: default: 32768 minimum: 301 Oct 28 13:07:07.985594 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 28 13:07:07.985602 kernel: landlock: Up and running. Oct 28 13:07:07.985610 kernel: SELinux: Initializing. Oct 28 13:07:07.985621 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 28 13:07:07.985632 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 28 13:07:07.985640 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 28 13:07:07.985648 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 28 13:07:07.985657 kernel: ... version: 0 Oct 28 13:07:07.985665 kernel: ... bit width: 48 Oct 28 13:07:07.985673 kernel: ... generic registers: 6 Oct 28 13:07:07.985682 kernel: ... value mask: 0000ffffffffffff Oct 28 13:07:07.985692 kernel: ... max period: 00007fffffffffff Oct 28 13:07:07.985700 kernel: ... fixed-purpose events: 0 Oct 28 13:07:07.985709 kernel: ... event mask: 000000000000003f Oct 28 13:07:07.985717 kernel: signal: max sigframe size: 1776 Oct 28 13:07:07.985725 kernel: rcu: Hierarchical SRCU implementation. Oct 28 13:07:07.985734 kernel: rcu: Max phase no-delay instances is 400. Oct 28 13:07:07.985742 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 28 13:07:07.985756 kernel: smp: Bringing up secondary CPUs ... 
Oct 28 13:07:07.985769 kernel: smpboot: x86: Booting SMP configuration: Oct 28 13:07:07.985806 kernel: .... node #0, CPUs: #1 #2 #3 Oct 28 13:07:07.985819 kernel: smp: Brought up 1 node, 4 CPUs Oct 28 13:07:07.985829 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 28 13:07:07.985838 kernel: Memory: 2451440K/2571752K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15960K init, 2084K bss, 114376K reserved, 0K cma-reserved) Oct 28 13:07:07.985846 kernel: devtmpfs: initialized Oct 28 13:07:07.985866 kernel: x86/mm: Memory block size: 128MB Oct 28 13:07:07.985875 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 28 13:07:07.985884 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 28 13:07:07.985892 kernel: pinctrl core: initialized pinctrl subsystem Oct 28 13:07:07.985900 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 28 13:07:07.985909 kernel: audit: initializing netlink subsys (disabled) Oct 28 13:07:07.985918 kernel: audit: type=2000 audit(1761656823.941:1): state=initialized audit_enabled=0 res=1 Oct 28 13:07:07.985928 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 28 13:07:07.985937 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 28 13:07:07.985945 kernel: cpuidle: using governor menu Oct 28 13:07:07.985953 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 28 13:07:07.985961 kernel: dca service started, version 1.12.1 Oct 28 13:07:07.985970 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Oct 28 13:07:07.985978 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Oct 28 13:07:07.985989 kernel: PCI: Using configuration type 1 for base access Oct 28 13:07:07.985997 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 28 13:07:07.986005 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 28 13:07:07.986014 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 28 13:07:07.986023 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 28 13:07:07.986038 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 28 13:07:07.986052 kernel: ACPI: Added _OSI(Module Device) Oct 28 13:07:07.986067 kernel: ACPI: Added _OSI(Processor Device) Oct 28 13:07:07.986078 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 28 13:07:07.986090 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 28 13:07:07.986102 kernel: ACPI: Interpreter enabled Oct 28 13:07:07.986110 kernel: ACPI: PM: (supports S0 S3 S5) Oct 28 13:07:07.986118 kernel: ACPI: Using IOAPIC for interrupt routing Oct 28 13:07:07.986127 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 28 13:07:07.986135 kernel: PCI: Using E820 reservations for host bridge windows Oct 28 13:07:07.986147 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 28 13:07:07.986155 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 28 13:07:07.986382 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 28 13:07:07.986557 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 28 13:07:07.986727 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 28 13:07:07.986742 kernel: PCI host bridge to bus 0000:00 Oct 28 13:07:07.986985 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 28 13:07:07.987161 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 28 13:07:07.987347 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 28 13:07:07.987541 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Oct 28 13:07:07.987735 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 28 13:07:07.987959 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Oct 28 13:07:07.988152 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 28 13:07:07.988388 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Oct 28 13:07:07.988610 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Oct 28 13:07:07.988841 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Oct 28 13:07:07.989085 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Oct 28 13:07:07.989295 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Oct 28 13:07:07.989501 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 28 13:07:07.989721 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Oct 28 13:07:07.989976 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Oct 28 13:07:07.990190 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Oct 28 13:07:07.990404 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Oct 28 13:07:07.990625 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Oct 28 13:07:07.990881 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Oct 28 13:07:07.991110 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Oct 28 13:07:07.991320 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Oct 28 13:07:07.991541 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Oct 28 13:07:07.991758 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Oct 28 13:07:07.991990 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Oct 28 13:07:07.992210 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Oct 28 13:07:07.992431 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Oct 28 13:07:07.992648 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Oct 28 13:07:07.992911 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 28 13:07:07.993143 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Oct 28 13:07:07.993360 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Oct 28 13:07:07.993567 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Oct 28 13:07:07.993796 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Oct 28 13:07:07.994022 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Oct 28 13:07:07.994044 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 28 13:07:07.994056 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 28 13:07:07.994068 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 28 13:07:07.994084 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 28 13:07:07.994097 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 28 13:07:07.994109 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 28 13:07:07.994120 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 28 13:07:07.994136 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 28 13:07:07.994148 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 28 13:07:07.994161 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 28 13:07:07.994172 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 28 13:07:07.994184 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 28 13:07:07.994195 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 28 13:07:07.994207 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 28 13:07:07.994222 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 28 13:07:07.994234 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 28 13:07:07.994246 kernel: iommu: Default domain type: Translated Oct 28 13:07:07.994258 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 28 13:07:07.994269 kernel: PCI: Using ACPI for IRQ routing Oct 28 13:07:07.994280 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 28 13:07:07.994292 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 28 13:07:07.994308 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Oct 28 13:07:07.994506 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 28 13:07:07.994691 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 28 13:07:07.994933 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 28 13:07:07.994951 kernel: vgaarb: loaded Oct 28 13:07:07.994965 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 28 13:07:07.994978 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 28 13:07:07.994996 kernel: clocksource: Switched to clocksource kvm-clock Oct 28 13:07:07.995008 kernel: VFS: Disk quotas dquot_6.6.0 Oct 28 
13:07:07.995020 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 28 13:07:07.995032 kernel: pnp: PnP ACPI init Oct 28 13:07:07.995265 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 28 13:07:07.995283 kernel: pnp: PnP ACPI: found 6 devices Oct 28 13:07:07.995300 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 28 13:07:07.995312 kernel: NET: Registered PF_INET protocol family Oct 28 13:07:07.995324 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 28 13:07:07.995336 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 28 13:07:07.995347 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 28 13:07:07.995359 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 28 13:07:07.995370 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 28 13:07:07.995384 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 28 13:07:07.995396 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 28 13:07:07.995407 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 28 13:07:07.995419 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 28 13:07:07.995431 kernel: NET: Registered PF_XDP protocol family Oct 28 13:07:07.995620 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 28 13:07:07.995828 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 28 13:07:07.996037 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 28 13:07:07.996215 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Oct 28 13:07:07.996402 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 28 13:07:07.996589 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Oct 28 13:07:07.996603 kernel: PCI: CLS 0 bytes, default 64 Oct 28 13:07:07.996612 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Oct 28 13:07:07.996621 kernel: Initialise system trusted keyrings Oct 28 13:07:07.996634 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 28 13:07:07.996643 kernel: Key type asymmetric registered Oct 28 13:07:07.996651 kernel: Asymmetric key parser 'x509' registered Oct 28 13:07:07.996662 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 28 13:07:07.996674 kernel: io scheduler mq-deadline registered Oct 28 13:07:07.996686 kernel: io scheduler kyber registered Oct 28 13:07:07.996699 kernel: io scheduler bfq registered Oct 28 13:07:07.996714 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 28 13:07:07.996727 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 28 13:07:07.996739 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 28 13:07:07.996751 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 28 13:07:07.996764 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 28 13:07:07.996776 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 28 13:07:07.996816 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 28 13:07:07.996832 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 28 13:07:07.996844 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 28 13:07:07.996865 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input0 Oct 28 13:07:07.997089 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 28 13:07:07.997288 kernel: rtc_cmos 00:04: registered as rtc0 Oct 28 13:07:07.997486 kernel: rtc_cmos 00:04: setting system clock to 2025-10-28T13:07:05 UTC (1761656825) Oct 28 13:07:07.997684 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 28 13:07:07.997704 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 28 13:07:07.997715 kernel: NET: Registered PF_INET6 protocol family Oct 28 13:07:07.997726 kernel: Segment Routing with IPv6 Oct 28 13:07:07.997736 kernel: In-situ OAM (IOAM) with IPv6 Oct 28 13:07:07.997747 kernel: NET: Registered PF_PACKET protocol family Oct 28 13:07:07.997758 kernel: Key type dns_resolver registered Oct 28 13:07:07.997768 kernel: IPI shorthand broadcast: enabled Oct 28 13:07:07.997812 kernel: sched_clock: Marking stable (1261002753, 204325336)->(1515507026, -50178937) Oct 28 13:07:07.997823 kernel: registered taskstats version 1 Oct 28 13:07:07.997834 kernel: Loading compiled-in X.509 certificates Oct 28 13:07:07.997845 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: cdff28e8ecdc0a80eff4a5776c5a29d2ceff67c8' Oct 28 13:07:07.997863 kernel: Demotion targets for Node 0: null Oct 28 13:07:07.997873 kernel: Key type .fscrypt registered Oct 28 13:07:07.997884 kernel: Key type fscrypt-provisioning registered Oct 28 13:07:07.997897 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 28 13:07:07.997908 kernel: ima: Allocated hash algorithm: sha1 Oct 28 13:07:07.997919 kernel: ima: No architecture policies found Oct 28 13:07:07.997931 kernel: clk: Disabling unused clocks Oct 28 13:07:07.997943 kernel: Freeing unused kernel image (initmem) memory: 15960K Oct 28 13:07:07.997956 kernel: Write protecting the kernel read-only data: 40960k Oct 28 13:07:07.997970 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Oct 28 13:07:07.997983 kernel: Run /init as init process Oct 28 13:07:07.997993 kernel: with arguments: Oct 28 13:07:07.998004 kernel: /init Oct 28 13:07:07.998014 kernel: with environment: Oct 28 13:07:07.998025 kernel: HOME=/ Oct 28 13:07:07.998035 kernel: TERM=linux Oct 28 13:07:07.998045 kernel: SCSI subsystem initialized Oct 28 13:07:07.998057 kernel: libata version 3.00 loaded. 
Oct 28 13:07:07.998253 kernel: ahci 0000:00:1f.2: version 3.0 Oct 28 13:07:07.998289 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 28 13:07:07.998501 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 28 13:07:07.998718 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 28 13:07:07.998997 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 28 13:07:07.999241 kernel: scsi host0: ahci Oct 28 13:07:07.999481 kernel: scsi host1: ahci Oct 28 13:07:07.999718 kernel: scsi host2: ahci Oct 28 13:07:07.999988 kernel: scsi host3: ahci Oct 28 13:07:08.000209 kernel: scsi host4: ahci Oct 28 13:07:08.000440 kernel: scsi host5: ahci Oct 28 13:07:08.000460 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1 Oct 28 13:07:08.000477 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1 Oct 28 13:07:08.000489 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1 Oct 28 13:07:08.000502 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1 Oct 28 13:07:08.000514 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1 Oct 28 13:07:08.000526 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1 Oct 28 13:07:08.000540 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 28 13:07:08.000553 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 28 13:07:08.000565 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 28 13:07:08.000578 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 28 13:07:08.000590 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 28 13:07:08.000603 kernel: ata3.00: LPM support broken, forcing max_power Oct 28 13:07:08.000615 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 28 13:07:08.000630 kernel: ata3.00: applying bridge limits Oct 28 13:07:08.000642 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 28 13:07:08.000655 kernel: ata3.00: LPM support broken, forcing max_power Oct 28 13:07:08.000666 kernel: ata3.00: configured for UDMA/100 Oct 28 13:07:08.000953 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 28 13:07:08.001195 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 28 13:07:08.001417 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 28 13:07:08.001436 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 28 13:07:08.001449 kernel: GPT:16515071 != 27000831 Oct 28 13:07:08.001461 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 28 13:07:08.001473 kernel: GPT:16515071 != 27000831 Oct 28 13:07:08.001484 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 28 13:07:08.001496 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 28 13:07:08.001509 kernel: Invalid ELF header magic: != \u007fELF Oct 28 13:07:08.001705 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 28 13:07:08.001718 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 28 13:07:08.001955 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 28 13:07:08.001970 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 28 13:07:08.001980 kernel: device-mapper: uevent: version 1.0.3 Oct 28 13:07:08.001989 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 28 13:07:08.002003 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 28 13:07:08.002014 kernel: Invalid ELF header magic: != \u007fELF Oct 28 13:07:08.002023 kernel: Invalid ELF header magic: != \u007fELF Oct 28 13:07:08.002031 kernel: raid6: avx2x4 gen() 30298 MB/s Oct 28 13:07:08.002042 kernel: raid6: avx2x2 gen() 30798 MB/s Oct 28 13:07:08.002050 kernel: raid6: avx2x1 gen() 25690 MB/s Oct 28 13:07:08.002059 kernel: raid6: using algorithm avx2x2 gen() 30798 MB/s Oct 28 13:07:08.002068 kernel: raid6: .... xor() 13770 MB/s, rmw enabled Oct 28 13:07:08.002077 kernel: raid6: using avx2x2 recovery algorithm Oct 28 13:07:08.002085 kernel: Invalid ELF header magic: != \u007fELF Oct 28 13:07:08.002094 kernel: Invalid ELF header magic: != \u007fELF Oct 28 13:07:08.002102 kernel: Invalid ELF header magic: != \u007fELF Oct 28 13:07:08.002113 kernel: xor: automatically using best checksumming function avx Oct 28 13:07:08.002123 kernel: Invalid ELF header magic: != \u007fELF Oct 28 13:07:08.002135 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 28 13:07:08.002150 kernel: BTRFS: device fsid af35db37-e08e-4bd7-9f3a-b576d01d2613 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (176) Oct 28 13:07:08.002162 kernel: BTRFS info (device dm-0): first mount of filesystem af35db37-e08e-4bd7-9f3a-b576d01d2613 Oct 28 13:07:08.002175 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 28 13:07:08.002187 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 28 13:07:08.002203 kernel: BTRFS info (device dm-0): enabling free space tree Oct 28 13:07:08.002216 kernel: Invalid ELF header magic: != \u007fELF Oct 28 13:07:08.002228 kernel: loop: module loaded Oct 28 13:07:08.002241 kernel: loop0: detected capacity change from 0 to 100120 Oct 28 13:07:08.002253 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 28 13:07:08.002267 systemd[1]: Successfully made /usr/ read-only. Oct 28 13:07:08.002283 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 28 13:07:08.002300 systemd[1]: Detected virtualization kvm. Oct 28 13:07:08.002313 systemd[1]: Detected architecture x86-64. Oct 28 13:07:08.002325 systemd[1]: Running in initrd. Oct 28 13:07:08.002338 systemd[1]: No hostname configured, using default hostname. Oct 28 13:07:08.002352 systemd[1]: Hostname set to . Oct 28 13:07:08.002366 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 28 13:07:08.002382 systemd[1]: Queued start job for default target initrd.target. Oct 28 13:07:08.002395 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 28 13:07:08.002409 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 28 13:07:08.002425 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 28 13:07:08.002440 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Oct 28 13:07:08.002453 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 28 13:07:08.002470 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 28 13:07:08.002484 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 28 13:07:08.002497 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 28 13:07:08.002510 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 28 13:07:08.002523 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 28 13:07:08.002536 systemd[1]: Reached target paths.target - Path Units. Oct 28 13:07:08.002553 systemd[1]: Reached target slices.target - Slice Units. Oct 28 13:07:08.002566 systemd[1]: Reached target swap.target - Swaps. Oct 28 13:07:08.002580 systemd[1]: Reached target timers.target - Timer Units. Oct 28 13:07:08.002593 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 28 13:07:08.002607 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 28 13:07:08.002621 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 28 13:07:08.002634 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 28 13:07:08.002650 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 28 13:07:08.002664 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 28 13:07:08.002678 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 28 13:07:08.002692 systemd[1]: Reached target sockets.target - Socket Units. Oct 28 13:07:08.002705 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 28 13:07:08.002719 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 28 13:07:08.002733 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 28 13:07:08.002749 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 28 13:07:08.002763 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 28 13:07:08.002777 systemd[1]: Starting systemd-fsck-usr.service... Oct 28 13:07:08.002806 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 28 13:07:08.002819 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 28 13:07:08.002832 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 13:07:08.002850 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 28 13:07:08.002871 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 28 13:07:08.002884 systemd[1]: Finished systemd-fsck-usr.service. Oct 28 13:07:08.002897 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 28 13:07:08.002955 systemd-journald[314]: Collecting audit messages is disabled. Oct 28 13:07:08.002985 systemd-journald[314]: Journal started Oct 28 13:07:08.003017 systemd-journald[314]: Runtime Journal (/run/log/journal/5c22daffa71543c195098f70d9055782) is 6M, max 48.3M, 42.2M free. Oct 28 13:07:08.007820 systemd[1]: Started systemd-journald.service - Journal Service. 
Oct 28 13:07:08.014088 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 28 13:07:08.086433 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 28 13:07:08.086464 kernel: Bridge firewalling registered Oct 28 13:07:08.019547 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 28 13:07:08.021483 systemd-modules-load[315]: Inserted module 'br_netfilter' Oct 28 13:07:08.087893 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 28 13:07:08.093529 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 28 13:07:08.100715 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 13:07:08.120764 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 28 13:07:08.125406 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 28 13:07:08.133539 systemd-tmpfiles[330]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 28 13:07:08.133909 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 28 13:07:08.141010 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 28 13:07:08.145427 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 28 13:07:08.148056 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 28 13:07:08.166022 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 28 13:07:08.168958 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 28 13:07:08.198009 dracut-cmdline[356]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3b5773c335d9782dd41351ceb8da09cfd1ec290db8d35827245f7b6eed48895b Oct 28 13:07:08.221487 systemd-resolved[343]: Positive Trust Anchors: Oct 28 13:07:08.221502 systemd-resolved[343]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 28 13:07:08.221506 systemd-resolved[343]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 28 13:07:08.221537 systemd-resolved[343]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 28 13:07:08.245224 systemd-resolved[343]: Defaulting to hostname 'linux'. Oct 28 13:07:08.246767 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 28 13:07:08.247475 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Oct 28 13:07:08.320826 kernel: Loading iSCSI transport class v2.0-870. Oct 28 13:07:08.335829 kernel: iscsi: registered transport (tcp) Oct 28 13:07:08.366820 kernel: iscsi: registered transport (qla4xxx) Oct 28 13:07:08.366869 kernel: QLogic iSCSI HBA Driver Oct 28 13:07:08.395800 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 28 13:07:08.450002 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 28 13:07:08.451735 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 28 13:07:08.513457 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 28 13:07:08.517024 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 28 13:07:08.518512 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 28 13:07:08.558337 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 28 13:07:08.560962 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 28 13:07:08.591427 systemd-udevd[592]: Using default interface naming scheme 'v257'. Oct 28 13:07:08.606143 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 28 13:07:08.612587 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 28 13:07:08.644444 dracut-pre-trigger[657]: rd.md=0: removing MD RAID activation Oct 28 13:07:08.651079 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 28 13:07:08.654006 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 28 13:07:08.700580 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 28 13:07:08.712245 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 28 13:07:08.726995 systemd-networkd[709]: lo: Link UP Oct 28 13:07:08.727003 systemd-networkd[709]: lo: Gained carrier Oct 28 13:07:08.727556 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 28 13:07:08.730057 systemd[1]: Reached target network.target - Network. Oct 28 13:07:09.102086 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 28 13:07:09.107050 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 28 13:07:09.164295 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 28 13:07:09.166493 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 28 13:07:09.186926 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 28 13:07:09.204399 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 28 13:07:09.217818 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Oct 28 13:07:09.222261 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 28 13:07:09.225332 kernel: cryptd: max_cpu_qlen set to 1000 Oct 28 13:07:09.227232 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 28 13:07:09.231846 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 28 13:07:09.233874 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Oct 28 13:07:09.238347 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 28 13:07:09.249816 kernel: AES CTR mode by8 optimization enabled Oct 28 13:07:09.251495 systemd-networkd[709]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 28 13:07:09.251507 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 28 13:07:09.251961 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 28 13:07:09.254209 systemd-networkd[709]: eth0: Link UP Oct 28 13:07:09.254424 systemd-networkd[709]: eth0: Gained carrier Oct 28 13:07:09.254434 systemd-networkd[709]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 28 13:07:09.261997 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 28 13:07:09.262119 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 13:07:09.262745 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 13:07:09.278739 systemd-networkd[709]: eth0: DHCPv4 address 10.0.0.28/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 28 13:07:09.284519 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 13:07:09.300924 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 28 13:07:09.337221 disk-uuid[811]: Primary Header is updated. Oct 28 13:07:09.337221 disk-uuid[811]: Secondary Entries is updated. Oct 28 13:07:09.337221 disk-uuid[811]: Secondary Header is updated. Oct 28 13:07:09.413613 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 13:07:09.613447 systemd-resolved[343]: Detected conflict on linux IN A 10.0.0.28 Oct 28 13:07:09.613468 systemd-resolved[343]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. Oct 28 13:07:10.383331 disk-uuid[846]: Warning: The kernel is still using the old partition table. Oct 28 13:07:10.383331 disk-uuid[846]: The new table will be used at the next reboot or after you Oct 28 13:07:10.383331 disk-uuid[846]: run partprobe(8) or kpartx(8) Oct 28 13:07:10.383331 disk-uuid[846]: The operation has completed successfully. Oct 28 13:07:10.396526 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 28 13:07:10.396737 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 28 13:07:10.398578 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 28 13:07:10.450835 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (857) Oct 28 13:07:10.454239 kernel: BTRFS info (device vda6): first mount of filesystem 92fe034e-39d5-4cce-8f91-7653ce0986c3 Oct 28 13:07:10.454272 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 28 13:07:10.458321 kernel: BTRFS info (device vda6): turning on async discard Oct 28 13:07:10.458346 kernel: BTRFS info (device vda6): enabling free space tree Oct 28 13:07:10.466818 kernel: BTRFS info (device vda6): last unmount of filesystem 92fe034e-39d5-4cce-8f91-7653ce0986c3 Oct 28 13:07:10.467722 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 28 13:07:10.470623 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Oct 28 13:07:10.757538 ignition[876]: Ignition 2.22.0 Oct 28 13:07:10.757551 ignition[876]: Stage: fetch-offline Oct 28 13:07:10.757591 ignition[876]: no configs at "/usr/lib/ignition/base.d" Oct 28 13:07:10.757602 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 13:07:10.757707 ignition[876]: parsed url from cmdline: "" Oct 28 13:07:10.757711 ignition[876]: no config URL provided Oct 28 13:07:10.757716 ignition[876]: reading system config file "/usr/lib/ignition/user.ign" Oct 28 13:07:10.757726 ignition[876]: no config at "/usr/lib/ignition/user.ign" Oct 28 13:07:10.757767 ignition[876]: op(1): [started] loading QEMU firmware config module Oct 28 13:07:10.757772 ignition[876]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 28 13:07:10.769869 ignition[876]: op(1): [finished] loading QEMU firmware config module Oct 28 13:07:10.848015 ignition[876]: parsing config with SHA512: 9c720fdb40acbfb8612a7e7326a745944966da3bb09307299889cc3796401ed780d8bbbc77a794ba3dd7622449125d7168bf5551ad42ee73c706822609293f95 Oct 28 13:07:10.853429 unknown[876]: fetched base config from "system" Oct 28 13:07:10.853442 unknown[876]: fetched user config from "qemu" Oct 28 13:07:10.856454 ignition[876]: fetch-offline: fetch-offline passed Oct 28 13:07:10.857850 ignition[876]: Ignition finished successfully Oct 28 13:07:10.857914 systemd-networkd[709]: eth0: Gained IPv6LL Oct 28 13:07:10.863487 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 28 13:07:10.864467 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 28 13:07:10.865808 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 28 13:07:10.923691 ignition[886]: Ignition 2.22.0 Oct 28 13:07:10.923718 ignition[886]: Stage: kargs Oct 28 13:07:10.924079 ignition[886]: no configs at "/usr/lib/ignition/base.d" Oct 28 13:07:10.924100 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 13:07:10.925253 ignition[886]: kargs: kargs passed Oct 28 13:07:10.925309 ignition[886]: Ignition finished successfully Oct 28 13:07:10.935067 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 28 13:07:10.939450 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 28 13:07:10.990461 ignition[895]: Ignition 2.22.0 Oct 28 13:07:10.990477 ignition[895]: Stage: disks Oct 28 13:07:10.990668 ignition[895]: no configs at "/usr/lib/ignition/base.d" Oct 28 13:07:10.990682 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 13:07:10.994806 ignition[895]: disks: disks passed Oct 28 13:07:10.994859 ignition[895]: Ignition finished successfully Oct 28 13:07:11.000775 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 28 13:07:11.004083 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 28 13:07:11.004702 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 28 13:07:11.008109 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 28 13:07:11.008647 systemd[1]: Reached target sysinit.target - System Initialization. Oct 28 13:07:11.014803 systemd[1]: Reached target basic.target - Basic System. Oct 28 13:07:11.019170 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Oct 28 13:07:11.063589 systemd-fsck[905]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 28 13:07:11.072101 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 28 13:07:11.073830 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 28 13:07:11.188822 kernel: EXT4-fs (vda9): mounted filesystem 533620cd-204e-4567-a68e-d0b19b60f72c r/w with ordered data mode. Quota mode: none. Oct 28 13:07:11.189404 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 28 13:07:11.190856 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 28 13:07:11.195459 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 28 13:07:11.196930 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 28 13:07:11.198866 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 28 13:07:11.198903 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 28 13:07:11.198930 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 28 13:07:11.219306 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 28 13:07:11.221557 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 28 13:07:11.228739 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (914) Oct 28 13:07:11.228844 kernel: BTRFS info (device vda6): first mount of filesystem 92fe034e-39d5-4cce-8f91-7653ce0986c3 Oct 28 13:07:11.228861 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 28 13:07:11.233480 kernel: BTRFS info (device vda6): turning on async discard Oct 28 13:07:11.233531 kernel: BTRFS info (device vda6): enabling free space tree Oct 28 13:07:11.234659 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 28 13:07:11.289961 initrd-setup-root[938]: cut: /sysroot/etc/passwd: No such file or directory Oct 28 13:07:11.296142 initrd-setup-root[945]: cut: /sysroot/etc/group: No such file or directory Oct 28 13:07:11.302216 initrd-setup-root[952]: cut: /sysroot/etc/shadow: No such file or directory Oct 28 13:07:11.307739 initrd-setup-root[959]: cut: /sysroot/etc/gshadow: No such file or directory Oct 28 13:07:11.424736 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 28 13:07:11.427929 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 28 13:07:11.430276 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 28 13:07:11.458355 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 28 13:07:11.460897 kernel: BTRFS info (device vda6): last unmount of filesystem 92fe034e-39d5-4cce-8f91-7653ce0986c3 Oct 28 13:07:11.478992 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 28 13:07:11.517373 ignition[1028]: INFO : Ignition 2.22.0 Oct 28 13:07:11.517373 ignition[1028]: INFO : Stage: mount Oct 28 13:07:11.519909 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 28 13:07:11.519909 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 13:07:11.519909 ignition[1028]: INFO : mount: mount passed Oct 28 13:07:11.519909 ignition[1028]: INFO : Ignition finished successfully Oct 28 13:07:11.520696 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 28 13:07:11.524168 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Oct 28 13:07:11.546657 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 28 13:07:11.569327 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1040) Oct 28 13:07:11.569358 kernel: BTRFS info (device vda6): first mount of filesystem 92fe034e-39d5-4cce-8f91-7653ce0986c3 Oct 28 13:07:11.569370 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 28 13:07:11.574983 kernel: BTRFS info (device vda6): turning on async discard Oct 28 13:07:11.575004 kernel: BTRFS info (device vda6): enabling free space tree Oct 28 13:07:11.576570 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 28 13:07:11.615599 ignition[1057]: INFO : Ignition 2.22.0 Oct 28 13:07:11.615599 ignition[1057]: INFO : Stage: files Oct 28 13:07:11.618228 ignition[1057]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 28 13:07:11.618228 ignition[1057]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 13:07:11.618228 ignition[1057]: DEBUG : files: compiled without relabeling support, skipping Oct 28 13:07:11.618228 ignition[1057]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 28 13:07:11.618228 ignition[1057]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 28 13:07:11.628314 ignition[1057]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 28 13:07:11.628314 ignition[1057]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 28 13:07:11.628314 ignition[1057]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 28 13:07:11.628314 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 28 13:07:11.628314 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 28 13:07:11.622112 unknown[1057]: wrote ssh authorized keys file for user: core Oct 28 13:07:11.678249 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 28 13:07:11.745221 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 28 13:07:11.748345 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 28 13:07:11.748345 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Oct 28 13:07:12.048304 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 28 13:07:12.281594 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 28 13:07:12.284724 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 28 13:07:12.284724 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 28 13:07:12.284724 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 28 13:07:12.284724 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Oct 28 13:07:12.284724 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 28 13:07:12.284724 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 28 13:07:12.284724 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 28 13:07:12.284724 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 28 13:07:12.307097 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 28 13:07:12.307097 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 28 13:07:12.307097 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 28 13:07:12.307097 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 28 13:07:12.307097 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 28 13:07:12.307097 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Oct 28 13:07:12.701353 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 28 13:07:13.087883 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 28 13:07:13.087883 ignition[1057]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Oct 28 13:07:13.093652 ignition[1057]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 28 13:07:13.100962 ignition[1057]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 28 13:07:13.100962 ignition[1057]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Oct 28 13:07:13.100962 ignition[1057]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Oct 28 13:07:13.108672 ignition[1057]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 28 13:07:13.108672 ignition[1057]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 28 13:07:13.108672 ignition[1057]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Oct 28 13:07:13.108672 ignition[1057]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Oct 28 13:07:13.127062 ignition[1057]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 28 13:07:13.136992 ignition[1057]: INFO : files: op(10): op(11): [finished] 
removing enablement symlink(s) for "coreos-metadata.service" Oct 28 13:07:13.139659 ignition[1057]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Oct 28 13:07:13.139659 ignition[1057]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Oct 28 13:07:13.139659 ignition[1057]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Oct 28 13:07:13.139659 ignition[1057]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 28 13:07:13.139659 ignition[1057]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 28 13:07:13.139659 ignition[1057]: INFO : files: files passed Oct 28 13:07:13.139659 ignition[1057]: INFO : Ignition finished successfully Oct 28 13:07:13.145904 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 28 13:07:13.152268 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 28 13:07:13.155178 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 28 13:07:13.179450 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 28 13:07:13.179587 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 28 13:07:13.186001 initrd-setup-root-after-ignition[1088]: grep: /sysroot/oem/oem-release: No such file or directory Oct 28 13:07:13.191819 initrd-setup-root-after-ignition[1090]: grep: Oct 28 13:07:13.193231 initrd-setup-root-after-ignition[1094]: grep: Oct 28 13:07:13.194541 initrd-setup-root-after-ignition[1090]: /sysroot/etc/flatcar/enabled-sysext.conf Oct 28 13:07:13.196692 initrd-setup-root-after-ignition[1094]: /sysroot/etc/flatcar/enabled-sysext.conf Oct 28 13:07:13.196692 initrd-setup-root-after-ignition[1090]: : No such file or directory Oct 28 13:07:13.199256 initrd-setup-root-after-ignition[1094]: : No such file or directory Oct 28 13:07:13.197530 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 28 13:07:13.204372 initrd-setup-root-after-ignition[1090]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 28 13:07:13.203700 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 28 13:07:13.210050 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 28 13:07:13.261108 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 28 13:07:13.261283 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 28 13:07:13.262691 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 28 13:07:13.267439 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 28 13:07:13.272090 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 28 13:07:13.273094 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 28 13:07:13.313553 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 28 13:07:13.319054 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 28 13:07:13.349928 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 28 13:07:13.350156 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
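The files stage above writes payloads such as the helm tarball under /opt, enables prepare-helm.service, and disables coreos-metadata.service. A rough sketch of the general shape of an Ignition v3 fragment that drives operations like these (illustrative only, not the exact config this host received), emitted via Python for readability:

    # Sketch: build an Ignition-style v3 fragment (illustrative shape, not this host's real config).
    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "storage": {
            "files": [
                {   # downloaded archives land under /opt, as seen in the files stage above
                    "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                    "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"},
                }
            ]
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True,
                 "contents": "[Service]\nType=oneshot\n"},   # placeholder unit body
                {"name": "coreos-metadata.service", "enabled": False},
            ]
        },
    }
    print(json.dumps(config, indent=2))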
Oct 28 13:07:13.351248 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 28 13:07:13.351837 systemd[1]: Stopped target timers.target - Timer Units. Oct 28 13:07:13.359564 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 28 13:07:13.359715 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 28 13:07:13.365058 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 28 13:07:13.366198 systemd[1]: Stopped target basic.target - Basic System. Oct 28 13:07:13.370455 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 28 13:07:13.371280 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 28 13:07:13.376428 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 28 13:07:13.380256 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 28 13:07:13.383372 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 28 13:07:13.386763 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 28 13:07:13.389911 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 28 13:07:13.393690 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 28 13:07:13.396815 systemd[1]: Stopped target swap.target - Swaps. Oct 28 13:07:13.397301 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 28 13:07:13.397448 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 28 13:07:13.404861 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 28 13:07:13.405699 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 28 13:07:13.406224 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 28 13:07:13.413134 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 28 13:07:13.414305 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 28 13:07:13.414438 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 28 13:07:13.421840 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 28 13:07:13.421977 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 28 13:07:13.422730 systemd[1]: Stopped target paths.target - Path Units. Oct 28 13:07:13.427283 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 28 13:07:13.428835 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 28 13:07:13.430395 systemd[1]: Stopped target slices.target - Slice Units. Oct 28 13:07:13.433610 systemd[1]: Stopped target sockets.target - Socket Units. Oct 28 13:07:13.436653 systemd[1]: iscsid.socket: Deactivated successfully. Oct 28 13:07:13.436771 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 28 13:07:13.440416 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 28 13:07:13.440513 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 28 13:07:13.443276 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 28 13:07:13.443401 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 28 13:07:13.446472 systemd[1]: ignition-files.service: Deactivated successfully. Oct 28 13:07:13.446584 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Oct 28 13:07:13.456525 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 28 13:07:13.459685 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 28 13:07:13.459840 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 28 13:07:13.473487 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 28 13:07:13.474157 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 28 13:07:13.474282 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 28 13:07:13.477239 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 28 13:07:13.477353 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 28 13:07:13.480756 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 28 13:07:13.480880 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 28 13:07:13.491105 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 28 13:07:13.606955 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 28 13:07:13.619900 ignition[1114]: INFO : Ignition 2.22.0 Oct 28 13:07:13.619900 ignition[1114]: INFO : Stage: umount Oct 28 13:07:13.623341 ignition[1114]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 28 13:07:13.623341 ignition[1114]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 13:07:13.623341 ignition[1114]: INFO : umount: umount passed Oct 28 13:07:13.623341 ignition[1114]: INFO : Ignition finished successfully Oct 28 13:07:13.627825 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 28 13:07:13.627953 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 28 13:07:13.632388 systemd[1]: Stopped target network.target - Network. Oct 28 13:07:13.635236 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 28 13:07:13.635339 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 28 13:07:13.638571 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 28 13:07:13.638635 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 28 13:07:13.639616 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 28 13:07:13.639666 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 28 13:07:13.647248 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 28 13:07:13.647308 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 28 13:07:13.650444 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 28 13:07:13.653535 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 28 13:07:13.657590 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 28 13:07:13.667914 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 28 13:07:13.668164 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 28 13:07:13.674552 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 28 13:07:13.674726 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 28 13:07:13.682864 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 28 13:07:13.686430 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 28 13:07:13.686490 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 28 13:07:13.690942 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Oct 28 13:07:13.692499 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 28 13:07:13.692601 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 28 13:07:13.693163 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 28 13:07:13.693209 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 28 13:07:13.697384 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 28 13:07:13.697437 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 28 13:07:13.698294 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 28 13:07:13.721163 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 28 13:07:13.722213 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 28 13:07:13.722962 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 28 13:07:13.723014 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 28 13:07:13.728385 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 28 13:07:13.728451 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 28 13:07:13.731548 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 28 13:07:13.731630 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 28 13:07:13.737643 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 28 13:07:13.737699 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 28 13:07:13.742360 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 28 13:07:13.742417 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 28 13:07:13.747446 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 28 13:07:13.748273 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 28 13:07:13.748366 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 28 13:07:13.754260 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 28 13:07:13.754323 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 28 13:07:13.755595 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 28 13:07:13.755669 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 13:07:13.760144 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 28 13:07:13.776951 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 28 13:07:13.781343 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 28 13:07:13.781424 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 28 13:07:13.786526 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 28 13:07:13.786659 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 28 13:07:13.790991 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 28 13:07:13.791140 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 28 13:07:13.794020 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 28 13:07:13.795746 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 28 13:07:13.809599 systemd[1]: Switching root. 
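The files stage earlier wrote /sysroot/etc/.ignition-result.json; once the initrd tears down and the system switches root, that file is visible as /etc/.ignition-result.json on the real root. A small sketch to inspect it (reading it may require root):

    # Sketch: pretty-print the Ignition result file written during the files stage.
    import json
    from pathlib import Path

    result = json.loads(Path("/etc/.ignition-result.json").read_text())
    print(json.dumps(result, indent=2))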
Oct 28 13:07:13.853766 systemd-journald[314]: Journal stopped Oct 28 13:07:15.209814 systemd-journald[314]: Received SIGTERM from PID 1 (systemd). Oct 28 13:07:15.209887 kernel: SELinux: policy capability network_peer_controls=1 Oct 28 13:07:15.209907 kernel: SELinux: policy capability open_perms=1 Oct 28 13:07:15.209930 kernel: SELinux: policy capability extended_socket_class=1 Oct 28 13:07:15.209942 kernel: SELinux: policy capability always_check_network=0 Oct 28 13:07:15.209956 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 28 13:07:15.209969 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 28 13:07:15.209985 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 28 13:07:15.210001 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 28 13:07:15.210013 kernel: SELinux: policy capability userspace_initial_context=0 Oct 28 13:07:15.210033 kernel: audit: type=1403 audit(1761656834.295:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 28 13:07:15.210050 systemd[1]: Successfully loaded SELinux policy in 70.199ms. Oct 28 13:07:15.210067 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 21.358ms. Oct 28 13:07:15.210080 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 28 13:07:15.210093 systemd[1]: Detected virtualization kvm. Oct 28 13:07:15.210106 systemd[1]: Detected architecture x86-64. Oct 28 13:07:15.210121 systemd[1]: Detected first boot. Oct 28 13:07:15.210141 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 28 13:07:15.210153 kernel: Guest personality initialized and is inactive Oct 28 13:07:15.210165 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 28 13:07:15.210178 zram_generator::config[1159]: No configuration found. Oct 28 13:07:15.210192 kernel: Initialized host personality Oct 28 13:07:15.210209 kernel: NET: Registered PF_VSOCK protocol family Oct 28 13:07:15.210234 systemd[1]: Populated /etc with preset unit settings. Oct 28 13:07:15.210248 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 28 13:07:15.210260 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 28 13:07:15.210273 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 28 13:07:15.210287 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 28 13:07:15.210300 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 28 13:07:15.210313 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 28 13:07:15.210333 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 28 13:07:15.210350 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 28 13:07:15.210362 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 28 13:07:15.210376 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 28 13:07:15.210389 systemd[1]: Created slice user.slice - User and Session Slice. Oct 28 13:07:15.210401 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
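The log above notes that this first boot initializes the machine ID from the SMBIOS/DMI UUID. A sketch that reads the same UUID from its standard sysfs location on x86 (reading it typically requires root):

    # Sketch: read the SMBIOS/DMI product UUID used to seed the machine ID on first boot.
    from pathlib import Path

    uuid_path = Path("/sys/class/dmi/id/product_uuid")   # standard sysfs location on x86
    print(uuid_path.read_text().strip())                 # usually root-readable only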
Oct 28 13:07:15.210414 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 28 13:07:15.210435 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 28 13:07:15.210448 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 28 13:07:15.210461 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 28 13:07:15.210477 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 28 13:07:15.210490 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 28 13:07:15.210504 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 28 13:07:15.210524 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 28 13:07:15.210537 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 28 13:07:15.210549 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 28 13:07:15.210562 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 28 13:07:15.210574 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 28 13:07:15.210587 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 28 13:07:15.210600 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 28 13:07:15.210619 systemd[1]: Reached target slices.target - Slice Units. Oct 28 13:07:15.210632 systemd[1]: Reached target swap.target - Swaps. Oct 28 13:07:15.210645 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 28 13:07:15.210658 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 28 13:07:15.210671 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 28 13:07:15.210693 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 28 13:07:15.210707 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 28 13:07:15.210720 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 28 13:07:15.210741 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 28 13:07:15.210760 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 28 13:07:15.210773 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 28 13:07:15.210801 systemd[1]: Mounting media.mount - External Media Directory... Oct 28 13:07:15.210814 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 13:07:15.210827 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 28 13:07:15.210840 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 28 13:07:15.210861 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 28 13:07:15.210874 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 28 13:07:15.210887 systemd[1]: Reached target machines.target - Containers. Oct 28 13:07:15.210900 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Oct 28 13:07:15.210913 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 28 13:07:15.210926 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 28 13:07:15.210946 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 28 13:07:15.210959 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 28 13:07:15.210972 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 28 13:07:15.210984 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 28 13:07:15.210997 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 28 13:07:15.211010 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 28 13:07:15.211022 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 28 13:07:15.211042 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 28 13:07:15.211056 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 28 13:07:15.211069 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 28 13:07:15.211082 systemd[1]: Stopped systemd-fsck-usr.service. Oct 28 13:07:15.211096 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 13:07:15.211109 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 28 13:07:15.211121 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 28 13:07:15.211141 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 28 13:07:15.211155 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 28 13:07:15.211168 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 28 13:07:15.211181 kernel: ACPI: bus type drm_connector registered Oct 28 13:07:15.211193 kernel: fuse: init (API version 7.41) Oct 28 13:07:15.211204 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 28 13:07:15.211225 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 13:07:15.211257 systemd-journald[1237]: Collecting audit messages is disabled. Oct 28 13:07:15.211285 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 28 13:07:15.211298 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 28 13:07:15.211318 systemd-journald[1237]: Journal started Oct 28 13:07:15.211341 systemd-journald[1237]: Runtime Journal (/run/log/journal/5c22daffa71543c195098f70d9055782) is 6M, max 48.3M, 42.2M free. Oct 28 13:07:14.875099 systemd[1]: Queued start job for default target multi-user.target. Oct 28 13:07:14.900866 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 28 13:07:14.901401 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 28 13:07:15.217462 systemd[1]: Started systemd-journald.service - Journal Service. Oct 28 13:07:15.218835 systemd[1]: Mounted media.mount - External Media Directory. 
Oct 28 13:07:15.220608 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 28 13:07:15.222518 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 28 13:07:15.224467 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 28 13:07:15.226499 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 28 13:07:15.228767 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 28 13:07:15.231102 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 28 13:07:15.231322 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 28 13:07:15.233512 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 28 13:07:15.233745 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 28 13:07:15.236147 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 28 13:07:15.236429 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 28 13:07:15.238569 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 28 13:07:15.238867 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 28 13:07:15.241293 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 28 13:07:15.241655 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 28 13:07:15.243981 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 28 13:07:15.244263 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 28 13:07:15.246421 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 28 13:07:15.248657 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 28 13:07:15.251958 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 28 13:07:15.254765 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 28 13:07:15.272537 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 28 13:07:15.275134 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 28 13:07:15.277160 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 28 13:07:15.277188 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 28 13:07:15.279972 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 28 13:07:15.282163 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 13:07:15.283892 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 28 13:07:15.286759 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 28 13:07:15.288763 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 28 13:07:15.299520 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 28 13:07:15.301563 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 28 13:07:15.304939 systemd-journald[1237]: Time spent on flushing to /var/log/journal/5c22daffa71543c195098f70d9055782 is 21.421ms for 977 entries. 
Oct 28 13:07:15.304939 systemd-journald[1237]: System Journal (/var/log/journal/5c22daffa71543c195098f70d9055782) is 8M, max 163.5M, 155.5M free. Oct 28 13:07:15.347651 systemd-journald[1237]: Received client request to flush runtime journal. Oct 28 13:07:15.304473 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 28 13:07:15.312034 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 28 13:07:15.316994 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 28 13:07:15.321027 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 28 13:07:15.323433 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 28 13:07:15.327026 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 28 13:07:15.375824 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 28 13:07:15.378637 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 28 13:07:15.381856 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 28 13:07:15.386820 kernel: loop1: detected capacity change from 0 to 110984 Oct 28 13:07:15.404556 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 28 13:07:15.418449 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 28 13:07:15.422473 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 28 13:07:15.425041 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 28 13:07:15.428962 kernel: loop2: detected capacity change from 0 to 229808 Oct 28 13:07:15.438877 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 28 13:07:15.454807 kernel: loop3: detected capacity change from 0 to 118328 Oct 28 13:07:15.456593 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Oct 28 13:07:15.456618 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Oct 28 13:07:15.467954 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 28 13:07:15.489839 kernel: loop4: detected capacity change from 0 to 110984 Oct 28 13:07:15.491554 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 28 13:07:15.503819 kernel: loop5: detected capacity change from 0 to 229808 Oct 28 13:07:15.513879 kernel: loop6: detected capacity change from 0 to 118328 Oct 28 13:07:15.520885 (sd-merge)[1297]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Oct 28 13:07:15.525083 (sd-merge)[1297]: Merged extensions into '/usr'. Oct 28 13:07:15.530715 systemd[1]: Reload requested from client PID 1276 ('systemd-sysext') (unit systemd-sysext.service)... Oct 28 13:07:15.530734 systemd[1]: Reloading... Oct 28 13:07:15.561425 systemd-resolved[1290]: Positive Trust Anchors: Oct 28 13:07:15.561437 systemd-resolved[1290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 28 13:07:15.561442 systemd-resolved[1290]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 28 13:07:15.561473 systemd-resolved[1290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 28 13:07:15.565611 systemd-resolved[1290]: Defaulting to hostname 'linux'. Oct 28 13:07:15.594814 zram_generator::config[1336]: No configuration found. Oct 28 13:07:15.784042 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 28 13:07:15.784471 systemd[1]: Reloading finished in 253 ms. Oct 28 13:07:15.815908 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 28 13:07:15.818353 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 28 13:07:15.822951 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 28 13:07:15.837532 systemd[1]: Starting ensure-sysext.service... Oct 28 13:07:15.840112 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 28 13:07:15.869227 systemd[1]: Reload requested from client PID 1367 ('systemctl') (unit ensure-sysext.service)... Oct 28 13:07:15.869250 systemd[1]: Reloading... Oct 28 13:07:15.880893 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 28 13:07:15.880931 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 28 13:07:15.881260 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 28 13:07:15.881810 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 28 13:07:15.882943 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 28 13:07:15.883343 systemd-tmpfiles[1368]: ACLs are not supported, ignoring. Oct 28 13:07:15.883451 systemd-tmpfiles[1368]: ACLs are not supported, ignoring. Oct 28 13:07:15.890859 systemd-tmpfiles[1368]: Detected autofs mount point /boot during canonicalization of boot. Oct 28 13:07:15.890881 systemd-tmpfiles[1368]: Skipping /boot Oct 28 13:07:15.904525 systemd-tmpfiles[1368]: Detected autofs mount point /boot during canonicalization of boot. Oct 28 13:07:15.904543 systemd-tmpfiles[1368]: Skipping /boot Oct 28 13:07:15.946818 zram_generator::config[1401]: No configuration found. Oct 28 13:07:16.123217 systemd[1]: Reloading finished in 253 ms. Oct 28 13:07:16.144454 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 28 13:07:16.170403 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 28 13:07:16.181713 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 28 13:07:16.184908 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 28 13:07:16.207220 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
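Just above, systemd-sysext merges the containerd-flatcar, docker-flatcar, and kubernetes extension images into /usr. A rough sketch that lists the extension images and symlinks in the conventional search directories (assuming the usual /etc/extensions and /var/lib/extensions locations, matching the kubernetes.raw link written during the files stage):

    # Sketch: list sysext images/symlinks in the conventional search directories.
    from pathlib import Path

    for directory in (Path("/etc/extensions"), Path("/var/lib/extensions")):
        if not directory.is_dir():
            continue
        for entry in sorted(directory.iterdir()):
            target = f" -> {entry.readlink()}" if entry.is_symlink() else ""
            print(f"{entry}{target}")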
Oct 28 13:07:16.213348 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 28 13:07:16.218533 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 28 13:07:16.231330 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 28 13:07:16.236697 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 28 13:07:16.246984 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 28 13:07:16.257826 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 28 13:07:16.262975 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 28 13:07:16.280598 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 13:07:16.281162 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 28 13:07:16.284853 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 28 13:07:16.288253 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 28 13:07:16.298994 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 28 13:07:16.300079 systemd-udevd[1449]: Using default interface naming scheme 'v257'. Oct 28 13:07:16.301046 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 13:07:16.301209 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 13:07:16.301315 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 13:07:16.303028 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 28 13:07:16.308026 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 28 13:07:16.312358 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 28 13:07:16.312582 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 28 13:07:16.315107 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 28 13:07:16.315318 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 28 13:07:16.318028 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 28 13:07:16.318244 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 28 13:07:16.323052 augenrules[1469]: No rules Oct 28 13:07:16.324997 systemd[1]: audit-rules.service: Deactivated successfully. Oct 28 13:07:16.325274 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 28 13:07:16.339055 systemd[1]: Finished ensure-sysext.service. Oct 28 13:07:16.343171 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 13:07:16.345444 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 28 13:07:16.347979 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Oct 28 13:07:16.349100 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 28 13:07:16.353405 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 28 13:07:16.356011 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 28 13:07:16.529753 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 28 13:07:16.531615 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 13:07:16.531676 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 13:07:16.533604 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 28 13:07:16.535568 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 13:07:16.536012 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 28 13:07:16.538908 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 28 13:07:16.541323 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 28 13:07:16.541541 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 28 13:07:16.581556 augenrules[1481]: /sbin/augenrules: No change Oct 28 13:07:16.586453 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 28 13:07:16.588456 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 28 13:07:16.589058 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 28 13:07:16.590846 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 28 13:07:16.593429 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 28 13:07:16.593679 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 28 13:07:16.601483 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 28 13:07:16.602906 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 28 13:07:16.603156 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 28 13:07:16.611265 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 28 13:07:16.624501 augenrules[1535]: No rules Oct 28 13:07:16.626012 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 28 13:07:16.628522 systemd[1]: audit-rules.service: Deactivated successfully. Oct 28 13:07:16.629239 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 28 13:07:16.700988 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 28 13:07:16.709973 kernel: mousedev: PS/2 mouse device common for all mice Oct 28 13:07:16.707903 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 28 13:07:16.715941 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Oct 28 13:07:16.718400 systemd[1]: Reached target time-set.target - System Time Set. Oct 28 13:07:16.733811 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 28 13:07:16.737817 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 28 13:07:16.738829 kernel: ACPI: button: Power Button [PWRF] Oct 28 13:07:16.747439 systemd-networkd[1522]: lo: Link UP Oct 28 13:07:16.747454 systemd-networkd[1522]: lo: Gained carrier Oct 28 13:07:16.750126 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 28 13:07:16.750591 systemd-networkd[1522]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 28 13:07:16.750605 systemd-networkd[1522]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 28 13:07:16.752181 systemd[1]: Reached target network.target - Network. Oct 28 13:07:16.752389 systemd-networkd[1522]: eth0: Link UP Oct 28 13:07:16.753871 systemd-networkd[1522]: eth0: Gained carrier Oct 28 13:07:16.753896 systemd-networkd[1522]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 28 13:07:16.755195 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 28 13:07:16.758358 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 28 13:07:16.822847 systemd-networkd[1522]: eth0: DHCPv4 address 10.0.0.28/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 28 13:07:16.824567 systemd-timesyncd[1507]: Network configuration changed, trying to establish connection. Oct 28 13:07:16.827430 systemd-timesyncd[1507]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 28 13:07:16.827530 systemd-timesyncd[1507]: Initial clock synchronization to Tue 2025-10-28 13:07:16.635565 UTC. Oct 28 13:07:16.863690 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 28 13:07:16.900280 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 28 13:07:16.900750 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 28 13:07:16.958642 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 13:07:17.081149 kernel: kvm_amd: TSC scaling supported Oct 28 13:07:17.081248 kernel: kvm_amd: Nested Virtualization enabled Oct 28 13:07:17.081267 kernel: kvm_amd: Nested Paging enabled Oct 28 13:07:17.081886 kernel: kvm_amd: LBR virtualization supported Oct 28 13:07:17.082834 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 28 13:07:17.083838 kernel: kvm_amd: Virtual GIF supported Oct 28 13:07:17.114811 kernel: EDAC MC: Ver: 3.0.0 Oct 28 13:07:17.169040 ldconfig[1441]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 28 13:07:17.177053 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 28 13:07:17.208840 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 13:07:17.214519 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 28 13:07:17.245773 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 28 13:07:17.247947 systemd[1]: Reached target sysinit.target - System Initialization. 
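systemd-networkd above brings up eth0 from zz-default.network and acquires 10.0.0.28/16 over DHCPv4. A sketch that cross-checks the assigned addresses using iproute2's JSON output (assumes the ip tool is available on the host):

    # Sketch: cross-check interface addresses via iproute2's JSON output.
    import json
    import subprocess

    output = subprocess.run(["ip", "-j", "addr", "show"],
                            capture_output=True, text=True, check=True)
    for iface in json.loads(output.stdout):
        for addr in iface.get("addr_info", []):
            print(f'{iface["ifname"]}: {addr["local"]}/{addr["prefixlen"]} ({addr["family"]})')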
Oct 28 13:07:17.249763 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 28 13:07:17.251774 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 28 13:07:17.253962 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 28 13:07:17.255980 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 28 13:07:17.257836 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 28 13:07:17.259847 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 28 13:07:17.261914 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 28 13:07:17.261952 systemd[1]: Reached target paths.target - Path Units. Oct 28 13:07:17.263436 systemd[1]: Reached target timers.target - Timer Units. Oct 28 13:07:17.266233 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 28 13:07:17.269802 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 28 13:07:17.274386 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 28 13:07:17.276574 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 28 13:07:17.278567 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 28 13:07:17.282629 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 28 13:07:17.284572 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 28 13:07:17.287029 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 28 13:07:17.289430 systemd[1]: Reached target sockets.target - Socket Units. Oct 28 13:07:17.290964 systemd[1]: Reached target basic.target - Basic System. Oct 28 13:07:17.292500 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 28 13:07:17.292532 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 28 13:07:17.293699 systemd[1]: Starting containerd.service - containerd container runtime... Oct 28 13:07:17.296357 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 28 13:07:17.298920 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 28 13:07:17.300694 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 28 13:07:17.308884 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 28 13:07:17.310455 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 28 13:07:17.311466 jq[1588]: false Oct 28 13:07:17.311521 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 28 13:07:17.314158 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 28 13:07:17.316416 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 28 13:07:17.320540 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 28 13:07:17.324913 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Oct 28 13:07:17.327298 google_oslogin_nss_cache[1590]: oslogin_cache_refresh[1590]: Refreshing passwd entry cache Oct 28 13:07:17.325911 oslogin_cache_refresh[1590]: Refreshing passwd entry cache Oct 28 13:07:17.330145 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 28 13:07:17.331862 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 28 13:07:17.332406 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 28 13:07:17.333566 google_oslogin_nss_cache[1590]: oslogin_cache_refresh[1590]: Failure getting users, quitting Oct 28 13:07:17.333559 oslogin_cache_refresh[1590]: Failure getting users, quitting Oct 28 13:07:17.333639 google_oslogin_nss_cache[1590]: oslogin_cache_refresh[1590]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 28 13:07:17.333639 google_oslogin_nss_cache[1590]: oslogin_cache_refresh[1590]: Refreshing group entry cache Oct 28 13:07:17.333586 oslogin_cache_refresh[1590]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 28 13:07:17.333640 oslogin_cache_refresh[1590]: Refreshing group entry cache Oct 28 13:07:17.334147 systemd[1]: Starting update-engine.service - Update Engine... Oct 28 13:07:17.337199 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 28 13:07:17.341063 extend-filesystems[1589]: Found /dev/vda6 Oct 28 13:07:17.342877 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 28 13:07:17.347536 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 28 13:07:17.349591 google_oslogin_nss_cache[1590]: oslogin_cache_refresh[1590]: Failure getting groups, quitting Oct 28 13:07:17.349591 google_oslogin_nss_cache[1590]: oslogin_cache_refresh[1590]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 28 13:07:17.349582 oslogin_cache_refresh[1590]: Failure getting groups, quitting Oct 28 13:07:17.349595 oslogin_cache_refresh[1590]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 28 13:07:17.352150 extend-filesystems[1589]: Found /dev/vda9 Oct 28 13:07:17.354703 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 28 13:07:17.358142 extend-filesystems[1589]: Checking size of /dev/vda9 Oct 28 13:07:17.355092 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 28 13:07:17.355328 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 28 13:07:17.360920 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 28 13:07:17.361202 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 28 13:07:17.373184 jq[1601]: true Oct 28 13:07:17.376654 update_engine[1599]: I20251028 13:07:17.376585 1599 main.cc:92] Flatcar Update Engine starting Oct 28 13:07:17.380091 extend-filesystems[1589]: Resized partition /dev/vda9 Oct 28 13:07:17.387156 extend-filesystems[1632]: resize2fs 1.47.3 (8-Jul-2025) Oct 28 13:07:17.394181 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 28 13:07:17.415335 systemd[1]: motdgen.service: Deactivated successfully. Oct 28 13:07:17.415800 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
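The kernel line just above shows the root ext4 filesystem growing from 456704 to 1784827 blocks during extend-filesystems; the resize2fs output that follows reports these as 4k blocks. A quick sketch of the size math under that 4 KiB assumption:

    # Sketch: translate the ext4 block counts from the resize into bytes (4 KiB blocks, as reported).
    BLOCK_SIZE = 4096
    old_blocks, new_blocks = 456_704, 1_784_827

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(old_blocks):.2f} GiB, after: {gib(new_blocks):.2f} GiB")
    # roughly 1.74 GiB -> 6.81 GiB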
Oct 28 13:07:17.422435 tar[1619]: linux-amd64/LICENSE Oct 28 13:07:17.427011 tar[1619]: linux-amd64/helm Oct 28 13:07:17.432729 systemd-logind[1598]: Watching system buttons on /dev/input/event2 (Power Button) Oct 28 13:07:17.432776 systemd-logind[1598]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 28 13:07:17.445065 jq[1629]: true Oct 28 13:07:17.433197 systemd-logind[1598]: New seat seat0. Oct 28 13:07:17.434827 systemd[1]: Started systemd-logind.service - User Login Management. Oct 28 13:07:17.448827 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 28 13:07:17.479994 extend-filesystems[1632]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 28 13:07:17.479994 extend-filesystems[1632]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 28 13:07:17.479994 extend-filesystems[1632]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 28 13:07:17.479729 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 28 13:07:17.481597 dbus-daemon[1586]: [system] SELinux support is enabled Oct 28 13:07:17.488278 extend-filesystems[1589]: Resized filesystem in /dev/vda9 Oct 28 13:07:17.494240 update_engine[1599]: I20251028 13:07:17.494170 1599 update_check_scheduler.cc:74] Next update check in 9m56s Oct 28 13:07:17.508286 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 28 13:07:17.510932 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 28 13:07:17.524914 dbus-daemon[1586]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 28 13:07:17.526874 systemd[1]: Started update-engine.service - Update Engine. Oct 28 13:07:17.529755 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 28 13:07:17.529963 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 28 13:07:17.538390 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 28 13:07:17.538563 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 28 13:07:17.564405 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 28 13:07:17.608563 bash[1656]: Updated "/home/core/.ssh/authorized_keys" Oct 28 13:07:17.614860 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 28 13:07:17.622581 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 28 13:07:17.624111 sshd_keygen[1608]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 28 13:07:17.699550 locksmithd[1658]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 28 13:07:17.700379 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 28 13:07:17.705103 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 28 13:07:17.777975 systemd[1]: issuegen.service: Deactivated successfully. Oct 28 13:07:17.778263 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 28 13:07:17.782856 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 28 13:07:17.807432 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
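extend-filesystems has just grown the root filesystem on /dev/vda9 online: resize2fs reports it going from 456704 to 1784827 4 KiB blocks, roughly 1.7 GiB to 6.8 GiB, without unmounting /. Done by hand, the equivalent step (assuming the partition itself has already been enlarged, as it is here) would be simply:

  resize2fs /dev/vda9   # online-grow the mounted ext4 filesystem to fill its partition
  df -h /               # confirm the new size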
Oct 28 13:07:17.811125 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 28 13:07:17.817349 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 28 13:07:17.820155 systemd[1]: Reached target getty.target - Login Prompts. Oct 28 13:07:17.841220 containerd[1620]: time="2025-10-28T13:07:17Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 28 13:07:17.842223 containerd[1620]: time="2025-10-28T13:07:17.842162301Z" level=info msg="starting containerd" revision=cb1076646aa3740577fafbf3d914198b7fe8e3f7 version=v2.1.4 Oct 28 13:07:17.860585 containerd[1620]: time="2025-10-28T13:07:17.859156708Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="20.466µs" Oct 28 13:07:17.860585 containerd[1620]: time="2025-10-28T13:07:17.859202452Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 28 13:07:17.860585 containerd[1620]: time="2025-10-28T13:07:17.859264106Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 28 13:07:17.860585 containerd[1620]: time="2025-10-28T13:07:17.859276095Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 28 13:07:17.860585 containerd[1620]: time="2025-10-28T13:07:17.859483929Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 28 13:07:17.860585 containerd[1620]: time="2025-10-28T13:07:17.859497786Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 28 13:07:17.860585 containerd[1620]: time="2025-10-28T13:07:17.859582860Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 28 13:07:17.860585 containerd[1620]: time="2025-10-28T13:07:17.859596599Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 28 13:07:17.860585 containerd[1620]: time="2025-10-28T13:07:17.860089608Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 28 13:07:17.860585 containerd[1620]: time="2025-10-28T13:07:17.860121359Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 28 13:07:17.860585 containerd[1620]: time="2025-10-28T13:07:17.860140975Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 28 13:07:17.860585 containerd[1620]: time="2025-10-28T13:07:17.860152103Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Oct 28 13:07:17.860997 containerd[1620]: time="2025-10-28T13:07:17.860337359Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Oct 28 13:07:17.860997 containerd[1620]: time="2025-10-28T13:07:17.860350853Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 28 13:07:17.860997 containerd[1620]: time="2025-10-28T13:07:17.860465908Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 28 13:07:17.860997 containerd[1620]: time="2025-10-28T13:07:17.860735867Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 28 13:07:17.860997 containerd[1620]: time="2025-10-28T13:07:17.860793746Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 28 13:07:17.860997 containerd[1620]: time="2025-10-28T13:07:17.860808307Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 28 13:07:17.860997 containerd[1620]: time="2025-10-28T13:07:17.860848703Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 28 13:07:17.861146 containerd[1620]: time="2025-10-28T13:07:17.861119580Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 28 13:07:17.861248 containerd[1620]: time="2025-10-28T13:07:17.861200321Z" level=info msg="metadata content store policy set" policy=shared Oct 28 13:07:17.869661 containerd[1620]: time="2025-10-28T13:07:17.869545619Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 28 13:07:17.869661 containerd[1620]: time="2025-10-28T13:07:17.869620024Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Oct 28 13:07:17.869942 containerd[1620]: time="2025-10-28T13:07:17.869761707Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 28 13:07:17.869942 containerd[1620]: time="2025-10-28T13:07:17.869796665Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 28 13:07:17.869942 containerd[1620]: time="2025-10-28T13:07:17.869809113Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 28 13:07:17.869942 containerd[1620]: time="2025-10-28T13:07:17.869820770Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 28 13:07:17.869942 containerd[1620]: time="2025-10-28T13:07:17.869830284Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 28 13:07:17.869942 containerd[1620]: time="2025-10-28T13:07:17.869847181Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 28 13:07:17.869942 containerd[1620]: time="2025-10-28T13:07:17.869876674Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 28 13:07:17.869942 containerd[1620]: time="2025-10-28T13:07:17.869894227Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 28 13:07:17.869942 containerd[1620]: time="2025-10-28T13:07:17.869905560Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 28 13:07:17.869942 containerd[1620]: time="2025-10-28T13:07:17.869915211Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager 
type=io.containerd.shim.v1 Oct 28 13:07:17.869942 containerd[1620]: time="2025-10-28T13:07:17.869927806Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 28 13:07:17.870155 containerd[1620]: time="2025-10-28T13:07:17.870067298Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 28 13:07:17.870155 containerd[1620]: time="2025-10-28T13:07:17.870102316Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 28 13:07:17.870155 containerd[1620]: time="2025-10-28T13:07:17.870140071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 28 13:07:17.870155 containerd[1620]: time="2025-10-28T13:07:17.870153555Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 28 13:07:17.870236 containerd[1620]: time="2025-10-28T13:07:17.870164498Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 28 13:07:17.870236 containerd[1620]: time="2025-10-28T13:07:17.870180388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 28 13:07:17.870236 containerd[1620]: time="2025-10-28T13:07:17.870191379Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 28 13:07:17.870236 containerd[1620]: time="2025-10-28T13:07:17.870201089Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 28 13:07:17.870236 containerd[1620]: time="2025-10-28T13:07:17.870229282Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 28 13:07:17.870350 containerd[1620]: time="2025-10-28T13:07:17.870241798Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 28 13:07:17.870350 containerd[1620]: time="2025-10-28T13:07:17.870253278Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 28 13:07:17.870350 containerd[1620]: time="2025-10-28T13:07:17.870288578Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 28 13:07:17.870350 containerd[1620]: time="2025-10-28T13:07:17.870344033Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 28 13:07:17.870432 containerd[1620]: time="2025-10-28T13:07:17.870375315Z" level=info msg="Start snapshots syncer" Oct 28 13:07:17.870432 containerd[1620]: time="2025-10-28T13:07:17.870405336Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 28 13:07:17.870706 containerd[1620]: time="2025-10-28T13:07:17.870669475Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 28 13:07:17.870919 containerd[1620]: time="2025-10-28T13:07:17.870732049Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 28 13:07:17.870919 containerd[1620]: time="2025-10-28T13:07:17.870841883Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 28 13:07:17.870959 containerd[1620]: time="2025-10-28T13:07:17.870944802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 28 13:07:17.871021 containerd[1620]: time="2025-10-28T13:07:17.870976231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 28 13:07:17.871021 containerd[1620]: time="2025-10-28T13:07:17.870994184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 28 13:07:17.871021 containerd[1620]: time="2025-10-28T13:07:17.871005342Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 28 13:07:17.871241 containerd[1620]: time="2025-10-28T13:07:17.871031128Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 28 13:07:17.871241 containerd[1620]: time="2025-10-28T13:07:17.871042676Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 28 13:07:17.871241 containerd[1620]: time="2025-10-28T13:07:17.871052083Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 28 13:07:17.871241 containerd[1620]: time="2025-10-28T13:07:17.871061501Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 28 
13:07:17.871241 containerd[1620]: time="2025-10-28T13:07:17.871071054Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 28 13:07:17.871241 containerd[1620]: time="2025-10-28T13:07:17.871116672Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 28 13:07:17.871241 containerd[1620]: time="2025-10-28T13:07:17.871129667Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 28 13:07:17.871241 containerd[1620]: time="2025-10-28T13:07:17.871139396Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 28 13:07:17.871241 containerd[1620]: time="2025-10-28T13:07:17.871147757Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 28 13:07:17.871241 containerd[1620]: time="2025-10-28T13:07:17.871155268Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 28 13:07:17.871241 containerd[1620]: time="2025-10-28T13:07:17.871164039Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 28 13:07:17.871241 containerd[1620]: time="2025-10-28T13:07:17.871191087Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 28 13:07:17.871241 containerd[1620]: time="2025-10-28T13:07:17.871205285Z" level=info msg="runtime interface created" Oct 28 13:07:17.871241 containerd[1620]: time="2025-10-28T13:07:17.871210067Z" level=info msg="created NRI interface" Oct 28 13:07:17.871241 containerd[1620]: time="2025-10-28T13:07:17.871226485Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 28 13:07:17.871619 containerd[1620]: time="2025-10-28T13:07:17.871260505Z" level=info msg="Connect containerd service" Oct 28 13:07:17.871619 containerd[1620]: time="2025-10-28T13:07:17.871291180Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 28 13:07:17.872205 containerd[1620]: time="2025-10-28T13:07:17.872173300Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 28 13:07:17.960938 systemd-networkd[1522]: eth0: Gained IPv6LL Oct 28 13:07:17.964061 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 28 13:07:17.967592 systemd[1]: Reached target network-online.target - Network is Online. Oct 28 13:07:17.971570 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 28 13:07:17.976089 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 13:07:17.980977 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 28 13:07:18.078263 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 28 13:07:18.081281 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 28 13:07:18.081556 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
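The containerd CRI plugin's "failed to load cni during init" error above is expected at this stage: /etc/cni/net.d is still empty because no pod network add-on has been installed, and the cni conf syncer just started will keep retrying until a config appears. For reference only, a minimal bridge configuration of the kind it is waiting for could be written as below; the file name, network name and subnet are illustrative, and in practice a CNI add-on (flannel, Calico, etc.) installs its own file. Plugin binaries are looked up under /opt/cni/bin, per the binDirs value in the CRI config dumped above.

  cat <<'EOF' >/etc/cni/net.d/10-bridge.conflist
  {
    "cniVersion": "1.0.0",
    "name": "bridge-net",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": {
          "type": "host-local",
          "ranges": [ [ { "subnet": "10.88.0.0/16" } ] ],
          "routes": [ { "dst": "0.0.0.0/0" } ]
        }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF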
Oct 28 13:07:18.086041 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 28 13:07:18.192640 tar[1619]: linux-amd64/README.md Oct 28 13:07:18.194335 containerd[1620]: time="2025-10-28T13:07:18.193249855Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 28 13:07:18.194335 containerd[1620]: time="2025-10-28T13:07:18.193325028Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 28 13:07:18.194335 containerd[1620]: time="2025-10-28T13:07:18.193377103Z" level=info msg="Start subscribing containerd event" Oct 28 13:07:18.194335 containerd[1620]: time="2025-10-28T13:07:18.193424772Z" level=info msg="Start recovering state" Oct 28 13:07:18.194335 containerd[1620]: time="2025-10-28T13:07:18.193604104Z" level=info msg="Start event monitor" Oct 28 13:07:18.194335 containerd[1620]: time="2025-10-28T13:07:18.193621309Z" level=info msg="Start cni network conf syncer for default" Oct 28 13:07:18.194335 containerd[1620]: time="2025-10-28T13:07:18.193629586Z" level=info msg="Start streaming server" Oct 28 13:07:18.194335 containerd[1620]: time="2025-10-28T13:07:18.193650390Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 28 13:07:18.194335 containerd[1620]: time="2025-10-28T13:07:18.193662691Z" level=info msg="runtime interface starting up..." Oct 28 13:07:18.194335 containerd[1620]: time="2025-10-28T13:07:18.193669969Z" level=info msg="starting plugins..." Oct 28 13:07:18.194335 containerd[1620]: time="2025-10-28T13:07:18.193691597Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 28 13:07:18.197798 containerd[1620]: time="2025-10-28T13:07:18.196611991Z" level=info msg="containerd successfully booted in 0.355985s" Oct 28 13:07:18.197027 systemd[1]: Started containerd.service - containerd container runtime. Oct 28 13:07:18.224805 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 28 13:07:19.375013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 13:07:19.379291 (kubelet)[1726]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 28 13:07:19.379480 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 28 13:07:19.381348 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 28 13:07:19.384416 systemd[1]: Started sshd@0-10.0.0.28:22-10.0.0.1:54982.service - OpenSSH per-connection server daemon (10.0.0.1:54982). Oct 28 13:07:19.387134 systemd[1]: Startup finished in 3.259s (kernel) + 6.713s (initrd) + 5.158s (userspace) = 15.131s. Oct 28 13:07:19.467071 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 54982 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:07:19.468823 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:07:19.475686 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 28 13:07:19.476991 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 28 13:07:19.484731 systemd-logind[1598]: New session 1 of user core. Oct 28 13:07:19.515015 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 28 13:07:19.517903 systemd[1]: Starting user@500.service - User Manager for UID 500... 
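Boot is now complete: systemd reports 3.259 s in the kernel, 6.713 s in the initrd and 5.158 s in userspace, about 15.1 s in total, and the first SSH session for user core is being set up. If this breakdown needs to be revisited after the fact, the usual tools are systemd-analyze and its sub-commands (generic invocations, not taken from this log):

  systemd-analyze                                   # kernel/initrd/userspace totals, as above
  systemd-analyze blame                             # per-unit start-up times, slowest first
  systemd-analyze critical-chain multi-user.target  # the dependency chain that gated boot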
Oct 28 13:07:19.555425 (systemd)[1740]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 28 13:07:19.558230 systemd-logind[1598]: New session c1 of user core. Oct 28 13:07:19.704447 systemd[1740]: Queued start job for default target default.target. Oct 28 13:07:19.741472 systemd[1740]: Created slice app.slice - User Application Slice. Oct 28 13:07:19.741496 systemd[1740]: Reached target paths.target - Paths. Oct 28 13:07:19.741542 systemd[1740]: Reached target timers.target - Timers. Oct 28 13:07:19.744359 systemd[1740]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 28 13:07:19.760852 systemd[1740]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 28 13:07:19.761651 systemd[1740]: Reached target sockets.target - Sockets. Oct 28 13:07:19.762009 systemd[1740]: Reached target basic.target - Basic System. Oct 28 13:07:19.762062 systemd[1740]: Reached target default.target - Main User Target. Oct 28 13:07:19.762096 systemd[1740]: Startup finished in 192ms. Oct 28 13:07:19.762245 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 28 13:07:19.769977 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 28 13:07:19.849635 systemd[1]: Started sshd@1-10.0.0.28:22-10.0.0.1:54990.service - OpenSSH per-connection server daemon (10.0.0.1:54990). Oct 28 13:07:19.914475 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 54990 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:07:19.915791 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:07:19.921563 systemd-logind[1598]: New session 2 of user core. Oct 28 13:07:19.928999 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 28 13:07:19.987373 sshd[1758]: Connection closed by 10.0.0.1 port 54990 Oct 28 13:07:19.987869 kubelet[1726]: E1028 13:07:19.987214 1726 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 28 13:07:19.988059 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Oct 28 13:07:20.001298 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 28 13:07:20.001476 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 28 13:07:20.001889 systemd[1]: kubelet.service: Consumed 1.783s CPU time, 267.5M memory peak. Oct 28 13:07:20.002422 systemd[1]: sshd@1-10.0.0.28:22-10.0.0.1:54990.service: Deactivated successfully. Oct 28 13:07:20.004464 systemd[1]: session-2.scope: Deactivated successfully. Oct 28 13:07:20.006147 systemd-logind[1598]: Session 2 logged out. Waiting for processes to exit. Oct 28 13:07:20.009476 systemd[1]: Started sshd@2-10.0.0.28:22-10.0.0.1:55002.service - OpenSSH per-connection server daemon (10.0.0.1:55002). Oct 28 13:07:20.010255 systemd-logind[1598]: Removed session 2. Oct 28 13:07:20.064374 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 55002 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:07:20.065574 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:07:20.070153 systemd-logind[1598]: New session 3 of user core. Oct 28 13:07:20.079902 systemd[1]: Started session-3.scope - Session 3 of User core. 
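systemd has started a per-user service manager (user@500.service) for core and is placing the SSH logins into session scopes under it; the user manager reports its own startup finished in 192 ms. On a live system this hierarchy is normally inspected with loginctl (illustrative commands, not part of the log):

  loginctl list-sessions      # active sessions, their users and seats
  loginctl user-status core   # the user's slice, user@500.service and session scopes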
Oct 28 13:07:20.128673 sshd[1768]: Connection closed by 10.0.0.1 port 55002 Oct 28 13:07:20.129030 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Oct 28 13:07:20.138316 systemd[1]: sshd@2-10.0.0.28:22-10.0.0.1:55002.service: Deactivated successfully. Oct 28 13:07:20.140195 systemd[1]: session-3.scope: Deactivated successfully. Oct 28 13:07:20.140928 systemd-logind[1598]: Session 3 logged out. Waiting for processes to exit. Oct 28 13:07:20.143819 systemd[1]: Started sshd@3-10.0.0.28:22-10.0.0.1:55016.service - OpenSSH per-connection server daemon (10.0.0.1:55016). Oct 28 13:07:20.144483 systemd-logind[1598]: Removed session 3. Oct 28 13:07:20.205403 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 55016 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:07:20.207625 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:07:20.215945 systemd-logind[1598]: New session 4 of user core. Oct 28 13:07:20.226908 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 28 13:07:20.284505 sshd[1778]: Connection closed by 10.0.0.1 port 55016 Oct 28 13:07:20.285430 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Oct 28 13:07:20.300404 systemd[1]: sshd@3-10.0.0.28:22-10.0.0.1:55016.service: Deactivated successfully. Oct 28 13:07:20.302044 systemd[1]: session-4.scope: Deactivated successfully. Oct 28 13:07:20.303496 systemd-logind[1598]: Session 4 logged out. Waiting for processes to exit. Oct 28 13:07:20.306156 systemd[1]: Started sshd@4-10.0.0.28:22-10.0.0.1:55024.service - OpenSSH per-connection server daemon (10.0.0.1:55024). Oct 28 13:07:20.306757 systemd-logind[1598]: Removed session 4. Oct 28 13:07:20.366467 sshd[1784]: Accepted publickey for core from 10.0.0.1 port 55024 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:07:20.367709 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:07:20.371883 systemd-logind[1598]: New session 5 of user core. Oct 28 13:07:20.380905 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 28 13:07:20.441736 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 28 13:07:20.442063 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 13:07:20.456241 sudo[1788]: pam_unix(sudo:session): session closed for user root Oct 28 13:07:20.458001 sshd[1787]: Connection closed by 10.0.0.1 port 55024 Oct 28 13:07:20.458350 sshd-session[1784]: pam_unix(sshd:session): session closed for user core Oct 28 13:07:20.471332 systemd[1]: sshd@4-10.0.0.28:22-10.0.0.1:55024.service: Deactivated successfully. Oct 28 13:07:20.473107 systemd[1]: session-5.scope: Deactivated successfully. Oct 28 13:07:20.473817 systemd-logind[1598]: Session 5 logged out. Waiting for processes to exit. Oct 28 13:07:20.476623 systemd[1]: Started sshd@5-10.0.0.28:22-10.0.0.1:55040.service - OpenSSH per-connection server daemon (10.0.0.1:55040). Oct 28 13:07:20.477341 systemd-logind[1598]: Removed session 5. Oct 28 13:07:20.533585 sshd[1794]: Accepted publickey for core from 10.0.0.1 port 55040 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:07:20.537011 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:07:20.542870 systemd-logind[1598]: New session 6 of user core. 
Oct 28 13:07:20.557976 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 28 13:07:20.616816 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 28 13:07:20.617125 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 13:07:20.625029 sudo[1799]: pam_unix(sudo:session): session closed for user root Oct 28 13:07:20.632714 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 28 13:07:20.633055 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 13:07:20.644385 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 28 13:07:20.692126 augenrules[1821]: No rules Oct 28 13:07:20.693750 systemd[1]: audit-rules.service: Deactivated successfully. Oct 28 13:07:20.694045 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 28 13:07:20.695128 sudo[1798]: pam_unix(sudo:session): session closed for user root Oct 28 13:07:20.697020 sshd[1797]: Connection closed by 10.0.0.1 port 55040 Oct 28 13:07:20.697336 sshd-session[1794]: pam_unix(sshd:session): session closed for user core Oct 28 13:07:20.706416 systemd[1]: sshd@5-10.0.0.28:22-10.0.0.1:55040.service: Deactivated successfully. Oct 28 13:07:20.708441 systemd[1]: session-6.scope: Deactivated successfully. Oct 28 13:07:20.709138 systemd-logind[1598]: Session 6 logged out. Waiting for processes to exit. Oct 28 13:07:20.711816 systemd[1]: Started sshd@6-10.0.0.28:22-10.0.0.1:55044.service - OpenSSH per-connection server daemon (10.0.0.1:55044). Oct 28 13:07:20.712380 systemd-logind[1598]: Removed session 6. Oct 28 13:07:20.777480 sshd[1830]: Accepted publickey for core from 10.0.0.1 port 55044 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:07:20.778702 sshd-session[1830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:07:20.783142 systemd-logind[1598]: New session 7 of user core. Oct 28 13:07:20.793925 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 28 13:07:20.848261 sudo[1834]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 28 13:07:20.848575 sudo[1834]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 13:07:21.491685 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 28 13:07:21.529541 (dockerd)[1855]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 28 13:07:22.212977 dockerd[1855]: time="2025-10-28T13:07:22.212846034Z" level=info msg="Starting up" Oct 28 13:07:22.214165 dockerd[1855]: time="2025-10-28T13:07:22.214104004Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 28 13:07:22.256914 dockerd[1855]: time="2025-10-28T13:07:22.256745433Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 28 13:07:22.715126 dockerd[1855]: time="2025-10-28T13:07:22.715065151Z" level=info msg="Loading containers: start." Oct 28 13:07:22.725824 kernel: Initializing XFRM netlink socket Oct 28 13:07:23.003161 systemd-networkd[1522]: docker0: Link UP Oct 28 13:07:23.009461 dockerd[1855]: time="2025-10-28T13:07:23.009403972Z" level=info msg="Loading containers: done." 
Oct 28 13:07:23.045748 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck964001205-merged.mount: Deactivated successfully. Oct 28 13:07:23.047537 dockerd[1855]: time="2025-10-28T13:07:23.047486153Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 28 13:07:23.047620 dockerd[1855]: time="2025-10-28T13:07:23.047588196Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 28 13:07:23.047727 dockerd[1855]: time="2025-10-28T13:07:23.047710616Z" level=info msg="Initializing buildkit" Oct 28 13:07:23.076415 dockerd[1855]: time="2025-10-28T13:07:23.076370199Z" level=info msg="Completed buildkit initialization" Oct 28 13:07:23.082736 dockerd[1855]: time="2025-10-28T13:07:23.082695371Z" level=info msg="Daemon has completed initialization" Oct 28 13:07:23.082872 dockerd[1855]: time="2025-10-28T13:07:23.082793419Z" level=info msg="API listen on /run/docker.sock" Oct 28 13:07:23.083061 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 28 13:07:24.127164 containerd[1620]: time="2025-10-28T13:07:24.127107221Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Oct 28 13:07:24.793052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2535871065.mount: Deactivated successfully. Oct 28 13:07:27.200671 containerd[1620]: time="2025-10-28T13:07:27.200583246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:07:27.217808 containerd[1620]: time="2025-10-28T13:07:27.212479190Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=29420063" Oct 28 13:07:27.228529 containerd[1620]: time="2025-10-28T13:07:27.228487843Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:07:27.242674 containerd[1620]: time="2025-10-28T13:07:27.242625372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:07:27.247213 containerd[1620]: time="2025-10-28T13:07:27.247157840Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 3.120002041s" Oct 28 13:07:27.247276 containerd[1620]: time="2025-10-28T13:07:27.247216518Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Oct 28 13:07:27.248256 containerd[1620]: time="2025-10-28T13:07:27.248232358Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Oct 28 13:07:30.039068 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 28 13:07:30.040707 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
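The dockerd start-up recorded above completes with the overlay2 storage driver and the API listening on /run/docker.sock (socket-activated via docker.socket earlier in the log). A quick health check would look roughly like this (standard invocations, not taken from the log; the expected output values come from the daemon's own messages):

  systemctl is-active docker.socket docker.service
  docker info --format '{{.Driver}} {{.ServerVersion}}'   # expect: overlay2 28.0.4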
Oct 28 13:07:30.423700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 13:07:30.443205 (kubelet)[2146]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 28 13:07:30.578267 containerd[1620]: time="2025-10-28T13:07:30.578184211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:07:30.579558 containerd[1620]: time="2025-10-28T13:07:30.579500205Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26012689" Oct 28 13:07:30.581321 containerd[1620]: time="2025-10-28T13:07:30.581266578Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:07:30.583727 containerd[1620]: time="2025-10-28T13:07:30.583688573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:07:30.585476 containerd[1620]: time="2025-10-28T13:07:30.585405582Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 3.337141355s" Oct 28 13:07:30.585476 containerd[1620]: time="2025-10-28T13:07:30.585455505Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Oct 28 13:07:30.586494 containerd[1620]: time="2025-10-28T13:07:30.586451016Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Oct 28 13:07:30.593316 kubelet[2146]: E1028 13:07:30.593257 2146 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 28 13:07:30.600435 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 28 13:07:30.600664 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 28 13:07:30.601164 systemd[1]: kubelet.service: Consumed 321ms CPU time, 109.1M memory peak. 
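The kubelet failures here are the normal pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist yet, so each systemd-scheduled restart exits with status 1. That file is generated during cluster bootstrap by kubeadm, so the crash loop stops once something along the following lines has been run (the pod CIDR is illustrative, and on a worker node the corresponding step is kubeadm join rather than init):

  kubeadm init --pod-network-cidr=10.244.0.0/16   # writes /var/lib/kubelet/config.yaml and /etc/kubernetes/kubelet.conf
  ls /var/lib/kubelet/config.yaml                 # the file the kubelet was complaining about should now exist
  systemctl status kubelet                        # the next scheduled restart should stay up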
Oct 28 13:07:32.054728 containerd[1620]: time="2025-10-28T13:07:32.054664379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:07:32.063012 containerd[1620]: time="2025-10-28T13:07:32.062963352Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20147431" Oct 28 13:07:32.079427 containerd[1620]: time="2025-10-28T13:07:32.079403083Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:07:32.100323 containerd[1620]: time="2025-10-28T13:07:32.100252491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:07:32.103654 containerd[1620]: time="2025-10-28T13:07:32.103605249Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.517117648s" Oct 28 13:07:32.103654 containerd[1620]: time="2025-10-28T13:07:32.103646682Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Oct 28 13:07:32.104260 containerd[1620]: time="2025-10-28T13:07:32.104221869Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Oct 28 13:07:33.353879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2169141975.mount: Deactivated successfully. 
Oct 28 13:07:33.882141 containerd[1620]: time="2025-10-28T13:07:33.882044079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:07:33.901030 containerd[1620]: time="2025-10-28T13:07:33.900942679Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31925747" Oct 28 13:07:33.923067 containerd[1620]: time="2025-10-28T13:07:33.923001344Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:07:33.938134 containerd[1620]: time="2025-10-28T13:07:33.938063541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:07:33.938544 containerd[1620]: time="2025-10-28T13:07:33.938497025Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.834242707s" Oct 28 13:07:33.938544 containerd[1620]: time="2025-10-28T13:07:33.938541013Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Oct 28 13:07:33.939225 containerd[1620]: time="2025-10-28T13:07:33.939178493Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Oct 28 13:07:34.672218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1660424639.mount: Deactivated successfully. 
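The image pulls in this stretch of the log (kube-apiserver, kube-controller-manager, kube-scheduler and now kube-proxy) are issued through containerd's CRI image service. The same images can be pre-pulled by hand to warm a node, for example with crictl against the containerd socket or with kubeadm (illustrative commands; the image names and versions are the ones shown in the log):

  crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/kube-proxy:v1.33.5
  kubeadm config images pull --kubernetes-version v1.33.5   # pulls the full control-plane image set in one go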
Oct 28 13:07:35.326276 containerd[1620]: time="2025-10-28T13:07:35.326209245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:07:35.327089 containerd[1620]: time="2025-10-28T13:07:35.327068875Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128467" Oct 28 13:07:35.328332 containerd[1620]: time="2025-10-28T13:07:35.328305373Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:07:35.331054 containerd[1620]: time="2025-10-28T13:07:35.331004754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:07:35.332096 containerd[1620]: time="2025-10-28T13:07:35.332063045Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.392845734s" Oct 28 13:07:35.332129 containerd[1620]: time="2025-10-28T13:07:35.332095556Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Oct 28 13:07:35.332616 containerd[1620]: time="2025-10-28T13:07:35.332589331Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 28 13:07:35.818305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3971742445.mount: Deactivated successfully. 
Oct 28 13:07:35.823654 containerd[1620]: time="2025-10-28T13:07:35.823597202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 13:07:35.824630 containerd[1620]: time="2025-10-28T13:07:35.824597350Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 28 13:07:35.826130 containerd[1620]: time="2025-10-28T13:07:35.826078727Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 13:07:35.828070 containerd[1620]: time="2025-10-28T13:07:35.828042643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 13:07:35.828516 containerd[1620]: time="2025-10-28T13:07:35.828493001Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 495.876867ms" Oct 28 13:07:35.828587 containerd[1620]: time="2025-10-28T13:07:35.828521884Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 28 13:07:35.829427 containerd[1620]: time="2025-10-28T13:07:35.829393059Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Oct 28 13:07:36.429837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1603941764.mount: Deactivated successfully. 
Oct 28 13:07:39.118620 containerd[1620]: time="2025-10-28T13:07:39.118536446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:07:39.227472 containerd[1620]: time="2025-10-28T13:07:39.227394401Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=46127678" Oct 28 13:07:39.268727 containerd[1620]: time="2025-10-28T13:07:39.268678823Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:07:39.377177 containerd[1620]: time="2025-10-28T13:07:39.376982025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:07:39.378510 containerd[1620]: time="2025-10-28T13:07:39.378428852Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.548989788s" Oct 28 13:07:39.378510 containerd[1620]: time="2025-10-28T13:07:39.378498523Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Oct 28 13:07:40.789144 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 28 13:07:40.790974 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 13:07:41.024564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 13:07:41.041142 (kubelet)[2311]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 28 13:07:41.103409 kubelet[2311]: E1028 13:07:41.103314 2311 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 28 13:07:41.107464 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 28 13:07:41.107676 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 28 13:07:41.108066 systemd[1]: kubelet.service: Consumed 251ms CPU time, 110.6M memory peak. Oct 28 13:07:43.183456 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 13:07:43.183643 systemd[1]: kubelet.service: Consumed 251ms CPU time, 110.6M memory peak. Oct 28 13:07:43.185997 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 13:07:43.211649 systemd[1]: Reload requested from client PID 2326 ('systemctl') (unit session-7.scope)... Oct 28 13:07:43.211666 systemd[1]: Reloading... Oct 28 13:07:43.319825 zram_generator::config[2373]: No configuration found. Oct 28 13:07:43.818205 systemd[1]: Reloading finished in 606 ms. Oct 28 13:07:43.886957 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 28 13:07:43.887130 systemd[1]: kubelet.service: Failed with result 'signal'. 
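After the etcd image lands, a systemctl client (PID 2326, from the interactive session-7.scope) asks systemd to reload its configuration, and the still-failing kubelet start job is killed with SIGTERM so the service can be brought up again with whatever was just changed. The log does not show the exact command line, but the conventional sequence after editing a unit file or drop-in is:

  systemctl daemon-reload     # re-read unit files, matching the 'Reload requested' / 'Reloading finished' entries above
  systemctl restart kubelet   # stop the old instance and start it under the new configuration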
Oct 28 13:07:43.887612 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 13:07:43.887676 systemd[1]: kubelet.service: Consumed 148ms CPU time, 98.4M memory peak. Oct 28 13:07:43.891363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 13:07:44.172894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 13:07:44.190152 (kubelet)[2419]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 28 13:07:44.234413 kubelet[2419]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 28 13:07:44.234413 kubelet[2419]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 28 13:07:44.234413 kubelet[2419]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 28 13:07:44.234966 kubelet[2419]: I1028 13:07:44.234460 2419 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 28 13:07:44.627659 kubelet[2419]: I1028 13:07:44.627596 2419 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 28 13:07:44.627659 kubelet[2419]: I1028 13:07:44.627634 2419 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 28 13:07:44.627933 kubelet[2419]: I1028 13:07:44.627908 2419 server.go:956] "Client rotation is on, will bootstrap in background" Oct 28 13:07:44.659806 kubelet[2419]: I1028 13:07:44.659609 2419 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 28 13:07:44.660873 kubelet[2419]: E1028 13:07:44.660823 2419 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 28 13:07:44.668170 kubelet[2419]: I1028 13:07:44.668133 2419 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 28 13:07:44.674048 kubelet[2419]: I1028 13:07:44.674016 2419 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 28 13:07:44.674358 kubelet[2419]: I1028 13:07:44.674319 2419 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 28 13:07:44.674606 kubelet[2419]: I1028 13:07:44.674347 2419 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 28 13:07:44.674742 kubelet[2419]: I1028 13:07:44.674610 2419 topology_manager.go:138] "Creating topology manager with none policy" Oct 28 13:07:44.674742 kubelet[2419]: I1028 13:07:44.674622 2419 container_manager_linux.go:303] "Creating device plugin manager" Oct 28 13:07:44.674839 kubelet[2419]: I1028 13:07:44.674811 2419 state_mem.go:36] "Initialized new in-memory state store" Oct 28 13:07:44.678072 kubelet[2419]: I1028 13:07:44.678040 2419 kubelet.go:480] "Attempting to sync node with API server" Oct 28 13:07:44.678072 kubelet[2419]: I1028 13:07:44.678066 2419 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 28 13:07:44.678144 kubelet[2419]: I1028 13:07:44.678106 2419 kubelet.go:386] "Adding apiserver pod source" Oct 28 13:07:44.678144 kubelet[2419]: I1028 13:07:44.678129 2419 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 28 13:07:44.683216 kubelet[2419]: I1028 13:07:44.682483 2419 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Oct 28 13:07:44.683216 kubelet[2419]: E1028 13:07:44.682900 2419 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 28 13:07:44.683216 kubelet[2419]: I1028 13:07:44.682997 2419 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 28 
13:07:44.683216 kubelet[2419]: E1028 13:07:44.683031 2419 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 28 13:07:44.683600 kubelet[2419]: W1028 13:07:44.683565 2419 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 28 13:07:44.686250 kubelet[2419]: I1028 13:07:44.686220 2419 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 28 13:07:44.686402 kubelet[2419]: I1028 13:07:44.686276 2419 server.go:1289] "Started kubelet" Oct 28 13:07:44.687903 kubelet[2419]: I1028 13:07:44.687865 2419 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 28 13:07:44.688717 kubelet[2419]: I1028 13:07:44.688688 2419 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 28 13:07:44.689626 kubelet[2419]: I1028 13:07:44.689609 2419 server.go:317] "Adding debug handlers to kubelet server" Oct 28 13:07:44.691118 kubelet[2419]: I1028 13:07:44.691105 2419 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 28 13:07:44.691935 kubelet[2419]: I1028 13:07:44.691863 2419 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 28 13:07:44.692343 kubelet[2419]: I1028 13:07:44.692320 2419 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 28 13:07:44.692623 kubelet[2419]: I1028 13:07:44.692593 2419 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 28 13:07:44.693037 kubelet[2419]: E1028 13:07:44.693008 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 13:07:44.694888 kubelet[2419]: I1028 13:07:44.694866 2419 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 28 13:07:44.694888 kubelet[2419]: I1028 13:07:44.694881 2419 reconciler.go:26] "Reconciler: start to sync state" Oct 28 13:07:44.695360 kubelet[2419]: E1028 13:07:44.695253 2419 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="200ms" Oct 28 13:07:44.695360 kubelet[2419]: E1028 13:07:44.695288 2419 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 28 13:07:44.695524 kubelet[2419]: I1028 13:07:44.695501 2419 factory.go:223] Registration of the systemd container factory successfully Oct 28 13:07:44.695625 kubelet[2419]: I1028 13:07:44.695584 2419 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 28 13:07:44.697145 kubelet[2419]: E1028 13:07:44.697122 2419 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 28 13:07:44.701555 kubelet[2419]: I1028 13:07:44.701334 2419 factory.go:223] Registration of the containerd container factory successfully Oct 28 13:07:44.708896 kubelet[2419]: E1028 13:07:44.707494 2419 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.28:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.28:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1872a995b3c07e05 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-28 13:07:44.686243333 +0000 UTC m=+0.491508875,LastTimestamp:2025-10-28 13:07:44.686243333 +0000 UTC m=+0.491508875,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 28 13:07:44.716964 kubelet[2419]: I1028 13:07:44.716936 2419 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 28 13:07:44.716964 kubelet[2419]: I1028 13:07:44.716952 2419 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 28 13:07:44.717045 kubelet[2419]: I1028 13:07:44.716972 2419 state_mem.go:36] "Initialized new in-memory state store" Oct 28 13:07:44.718112 kubelet[2419]: I1028 13:07:44.718064 2419 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 28 13:07:44.719752 kubelet[2419]: I1028 13:07:44.719714 2419 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 28 13:07:44.719841 kubelet[2419]: I1028 13:07:44.719768 2419 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 28 13:07:44.719889 kubelet[2419]: I1028 13:07:44.719843 2419 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 28 13:07:44.719889 kubelet[2419]: I1028 13:07:44.719875 2419 kubelet.go:2436] "Starting kubelet main sync loop" Oct 28 13:07:44.719981 kubelet[2419]: E1028 13:07:44.719949 2419 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 28 13:07:44.721135 kubelet[2419]: E1028 13:07:44.720585 2419 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 28 13:07:44.793851 kubelet[2419]: E1028 13:07:44.793731 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 13:07:44.820119 kubelet[2419]: E1028 13:07:44.820040 2419 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 28 13:07:44.894312 kubelet[2419]: E1028 13:07:44.894210 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 13:07:44.895839 kubelet[2419]: E1028 13:07:44.895809 2419 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="400ms" Oct 28 13:07:44.902257 kubelet[2419]: I1028 13:07:44.902198 2419 policy_none.go:49] "None policy: Start" Oct 28 13:07:44.902318 kubelet[2419]: I1028 13:07:44.902273 2419 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 28 13:07:44.902318 kubelet[2419]: I1028 13:07:44.902301 2419 state_mem.go:35] "Initializing new in-memory state store" Oct 28 13:07:44.909819 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 28 13:07:44.932423 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 28 13:07:44.936094 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 28 13:07:44.950097 kubelet[2419]: E1028 13:07:44.950053 2419 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 28 13:07:44.950341 kubelet[2419]: I1028 13:07:44.950320 2419 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 28 13:07:44.950414 kubelet[2419]: I1028 13:07:44.950336 2419 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 28 13:07:44.950622 kubelet[2419]: I1028 13:07:44.950571 2419 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 28 13:07:44.951633 kubelet[2419]: E1028 13:07:44.951608 2419 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 28 13:07:44.951707 kubelet[2419]: E1028 13:07:44.951662 2419 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 28 13:07:45.031635 systemd[1]: Created slice kubepods-burstable-podf1bc21e293b43ab0b9f5f6cd1df52603.slice - libcontainer container kubepods-burstable-podf1bc21e293b43ab0b9f5f6cd1df52603.slice. 
Oct 28 13:07:45.051537 kubelet[2419]: E1028 13:07:45.051447 2419 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 13:07:45.051700 kubelet[2419]: I1028 13:07:45.051549 2419 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 13:07:45.052039 kubelet[2419]: E1028 13:07:45.051998 2419 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Oct 28 13:07:45.054364 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Oct 28 13:07:45.056258 kubelet[2419]: E1028 13:07:45.056230 2419 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 13:07:45.072031 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. Oct 28 13:07:45.074150 kubelet[2419]: E1028 13:07:45.074113 2419 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 13:07:45.097628 kubelet[2419]: I1028 13:07:45.097582 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1bc21e293b43ab0b9f5f6cd1df52603-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f1bc21e293b43ab0b9f5f6cd1df52603\") " pod="kube-system/kube-apiserver-localhost" Oct 28 13:07:45.097688 kubelet[2419]: I1028 13:07:45.097628 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1bc21e293b43ab0b9f5f6cd1df52603-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f1bc21e293b43ab0b9f5f6cd1df52603\") " pod="kube-system/kube-apiserver-localhost" Oct 28 13:07:45.097688 kubelet[2419]: I1028 13:07:45.097650 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:07:45.097739 kubelet[2419]: I1028 13:07:45.097697 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:07:45.097770 kubelet[2419]: I1028 13:07:45.097742 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:07:45.097770 kubelet[2419]: I1028 13:07:45.097762 2419 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:07:45.097839 kubelet[2419]: I1028 13:07:45.097779 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 28 13:07:45.097839 kubelet[2419]: I1028 13:07:45.097811 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1bc21e293b43ab0b9f5f6cd1df52603-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f1bc21e293b43ab0b9f5f6cd1df52603\") " pod="kube-system/kube-apiserver-localhost" Oct 28 13:07:45.097879 kubelet[2419]: I1028 13:07:45.097849 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:07:45.254111 kubelet[2419]: I1028 13:07:45.253985 2419 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 13:07:45.254494 kubelet[2419]: E1028 13:07:45.254450 2419 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Oct 28 13:07:45.296417 kubelet[2419]: E1028 13:07:45.296354 2419 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="800ms" Oct 28 13:07:45.352958 kubelet[2419]: E1028 13:07:45.352901 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:45.353589 containerd[1620]: time="2025-10-28T13:07:45.353538638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f1bc21e293b43ab0b9f5f6cd1df52603,Namespace:kube-system,Attempt:0,}" Oct 28 13:07:45.356947 kubelet[2419]: E1028 13:07:45.356914 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:45.357545 containerd[1620]: time="2025-10-28T13:07:45.357503105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Oct 28 13:07:45.375712 kubelet[2419]: E1028 13:07:45.374866 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:45.375771 containerd[1620]: time="2025-10-28T13:07:45.375489116Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Oct 28 13:07:45.403203 containerd[1620]: time="2025-10-28T13:07:45.403146217Z" level=info msg="connecting to shim 485330fd6a957602f565e4c597692ad587701e904f895820884e44cfeada81dc" address="unix:///run/containerd/s/6aa372b4f3bced22848887bea380a6047f5f53cbe826f77c25996e47d86b3511" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:07:45.406007 containerd[1620]: time="2025-10-28T13:07:45.405955906Z" level=info msg="connecting to shim 93515dc7b302d9f68c7e1846960abf6346b8ab4aec7124ddcbc7e2b64e8e4f0c" address="unix:///run/containerd/s/ea3a0a62cf1b0d1ee3bf7f7b1d03286da86246c27a58e7a0a7c354816c62fddd" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:07:45.439123 containerd[1620]: time="2025-10-28T13:07:45.439083151Z" level=info msg="connecting to shim 827dbf275a2fe637784865717147bb3402d00e365d765245b8dff09ed2300aa1" address="unix:///run/containerd/s/936b90fbd7d29c6a72fa71e49cc6025432a62f618dda7cde6d5696f2faf65e48" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:07:45.472950 systemd[1]: Started cri-containerd-93515dc7b302d9f68c7e1846960abf6346b8ab4aec7124ddcbc7e2b64e8e4f0c.scope - libcontainer container 93515dc7b302d9f68c7e1846960abf6346b8ab4aec7124ddcbc7e2b64e8e4f0c. Oct 28 13:07:45.477074 systemd[1]: Started cri-containerd-827dbf275a2fe637784865717147bb3402d00e365d765245b8dff09ed2300aa1.scope - libcontainer container 827dbf275a2fe637784865717147bb3402d00e365d765245b8dff09ed2300aa1. Oct 28 13:07:45.482472 systemd[1]: Started cri-containerd-485330fd6a957602f565e4c597692ad587701e904f895820884e44cfeada81dc.scope - libcontainer container 485330fd6a957602f565e4c597692ad587701e904f895820884e44cfeada81dc. Oct 28 13:07:45.593475 containerd[1620]: time="2025-10-28T13:07:45.593180416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"485330fd6a957602f565e4c597692ad587701e904f895820884e44cfeada81dc\"" Oct 28 13:07:45.594069 kubelet[2419]: E1028 13:07:45.594017 2419 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 28 13:07:45.595271 containerd[1620]: time="2025-10-28T13:07:45.595251391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f1bc21e293b43ab0b9f5f6cd1df52603,Namespace:kube-system,Attempt:0,} returns sandbox id \"93515dc7b302d9f68c7e1846960abf6346b8ab4aec7124ddcbc7e2b64e8e4f0c\"" Oct 28 13:07:45.595322 kubelet[2419]: E1028 13:07:45.595276 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:45.596284 kubelet[2419]: E1028 13:07:45.596262 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:45.601823 containerd[1620]: time="2025-10-28T13:07:45.601754285Z" level=info msg="CreateContainer within sandbox \"93515dc7b302d9f68c7e1846960abf6346b8ab4aec7124ddcbc7e2b64e8e4f0c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 28 13:07:45.604197 
containerd[1620]: time="2025-10-28T13:07:45.604172173Z" level=info msg="CreateContainer within sandbox \"485330fd6a957602f565e4c597692ad587701e904f895820884e44cfeada81dc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 28 13:07:45.604714 containerd[1620]: time="2025-10-28T13:07:45.604687375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"827dbf275a2fe637784865717147bb3402d00e365d765245b8dff09ed2300aa1\"" Oct 28 13:07:45.605295 kubelet[2419]: E1028 13:07:45.605262 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:45.611028 containerd[1620]: time="2025-10-28T13:07:45.611001441Z" level=info msg="CreateContainer within sandbox \"827dbf275a2fe637784865717147bb3402d00e365d765245b8dff09ed2300aa1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 28 13:07:45.621748 containerd[1620]: time="2025-10-28T13:07:45.621710878Z" level=info msg="Container 339ec58ad9f21dc9289c444e40a17dec4e570b67c1445dd5f11a2fefe541f37d: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:07:45.627497 containerd[1620]: time="2025-10-28T13:07:45.627460297Z" level=info msg="Container 80951318b91b647feb72d6ae70445d4b55e5bf42be2777b1c0728a13bb7f50dd: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:07:45.633272 containerd[1620]: time="2025-10-28T13:07:45.633218838Z" level=info msg="CreateContainer within sandbox \"93515dc7b302d9f68c7e1846960abf6346b8ab4aec7124ddcbc7e2b64e8e4f0c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"339ec58ad9f21dc9289c444e40a17dec4e570b67c1445dd5f11a2fefe541f37d\"" Oct 28 13:07:45.633773 containerd[1620]: time="2025-10-28T13:07:45.633739538Z" level=info msg="StartContainer for \"339ec58ad9f21dc9289c444e40a17dec4e570b67c1445dd5f11a2fefe541f37d\"" Oct 28 13:07:45.634961 containerd[1620]: time="2025-10-28T13:07:45.634925828Z" level=info msg="connecting to shim 339ec58ad9f21dc9289c444e40a17dec4e570b67c1445dd5f11a2fefe541f37d" address="unix:///run/containerd/s/ea3a0a62cf1b0d1ee3bf7f7b1d03286da86246c27a58e7a0a7c354816c62fddd" protocol=ttrpc version=3 Oct 28 13:07:45.636183 containerd[1620]: time="2025-10-28T13:07:45.636137119Z" level=info msg="CreateContainer within sandbox \"485330fd6a957602f565e4c597692ad587701e904f895820884e44cfeada81dc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"80951318b91b647feb72d6ae70445d4b55e5bf42be2777b1c0728a13bb7f50dd\"" Oct 28 13:07:45.636623 containerd[1620]: time="2025-10-28T13:07:45.636594928Z" level=info msg="StartContainer for \"80951318b91b647feb72d6ae70445d4b55e5bf42be2777b1c0728a13bb7f50dd\"" Oct 28 13:07:45.637759 containerd[1620]: time="2025-10-28T13:07:45.637729099Z" level=info msg="connecting to shim 80951318b91b647feb72d6ae70445d4b55e5bf42be2777b1c0728a13bb7f50dd" address="unix:///run/containerd/s/6aa372b4f3bced22848887bea380a6047f5f53cbe826f77c25996e47d86b3511" protocol=ttrpc version=3 Oct 28 13:07:45.643526 containerd[1620]: time="2025-10-28T13:07:45.643017385Z" level=info msg="Container ecaa754e4408a487503ffd37b18ed127f4f8ca6c08e226ce916d34b51b8c063c: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:07:45.655823 kubelet[2419]: I1028 13:07:45.655630 2419 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 13:07:45.656135 
kubelet[2419]: E1028 13:07:45.656090 2419 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Oct 28 13:07:45.656195 containerd[1620]: time="2025-10-28T13:07:45.656119166Z" level=info msg="CreateContainer within sandbox \"827dbf275a2fe637784865717147bb3402d00e365d765245b8dff09ed2300aa1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ecaa754e4408a487503ffd37b18ed127f4f8ca6c08e226ce916d34b51b8c063c\"" Oct 28 13:07:45.656652 containerd[1620]: time="2025-10-28T13:07:45.656620681Z" level=info msg="StartContainer for \"ecaa754e4408a487503ffd37b18ed127f4f8ca6c08e226ce916d34b51b8c063c\"" Oct 28 13:07:45.657739 containerd[1620]: time="2025-10-28T13:07:45.657715851Z" level=info msg="connecting to shim ecaa754e4408a487503ffd37b18ed127f4f8ca6c08e226ce916d34b51b8c063c" address="unix:///run/containerd/s/936b90fbd7d29c6a72fa71e49cc6025432a62f618dda7cde6d5696f2faf65e48" protocol=ttrpc version=3 Oct 28 13:07:45.657936 systemd[1]: Started cri-containerd-339ec58ad9f21dc9289c444e40a17dec4e570b67c1445dd5f11a2fefe541f37d.scope - libcontainer container 339ec58ad9f21dc9289c444e40a17dec4e570b67c1445dd5f11a2fefe541f37d. Oct 28 13:07:45.671274 systemd[1]: Started cri-containerd-80951318b91b647feb72d6ae70445d4b55e5bf42be2777b1c0728a13bb7f50dd.scope - libcontainer container 80951318b91b647feb72d6ae70445d4b55e5bf42be2777b1c0728a13bb7f50dd. Oct 28 13:07:45.675366 kubelet[2419]: E1028 13:07:45.675334 2419 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 28 13:07:45.681023 systemd[1]: Started cri-containerd-ecaa754e4408a487503ffd37b18ed127f4f8ca6c08e226ce916d34b51b8c063c.scope - libcontainer container ecaa754e4408a487503ffd37b18ed127f4f8ca6c08e226ce916d34b51b8c063c. 
Oct 28 13:07:45.742829 containerd[1620]: time="2025-10-28T13:07:45.742174587Z" level=info msg="StartContainer for \"339ec58ad9f21dc9289c444e40a17dec4e570b67c1445dd5f11a2fefe541f37d\" returns successfully" Oct 28 13:07:45.756030 containerd[1620]: time="2025-10-28T13:07:45.755991202Z" level=info msg="StartContainer for \"ecaa754e4408a487503ffd37b18ed127f4f8ca6c08e226ce916d34b51b8c063c\" returns successfully" Oct 28 13:07:45.761392 containerd[1620]: time="2025-10-28T13:07:45.761347988Z" level=info msg="StartContainer for \"80951318b91b647feb72d6ae70445d4b55e5bf42be2777b1c0728a13bb7f50dd\" returns successfully" Oct 28 13:07:46.458742 kubelet[2419]: I1028 13:07:46.458693 2419 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 13:07:46.758982 kubelet[2419]: E1028 13:07:46.758725 2419 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 13:07:46.758982 kubelet[2419]: E1028 13:07:46.758889 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:46.761798 kubelet[2419]: E1028 13:07:46.761440 2419 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 13:07:46.761798 kubelet[2419]: E1028 13:07:46.761557 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:46.763550 kubelet[2419]: E1028 13:07:46.763525 2419 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 13:07:46.763731 kubelet[2419]: E1028 13:07:46.763708 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:47.030576 kubelet[2419]: E1028 13:07:47.030078 2419 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 28 13:07:47.105629 kubelet[2419]: I1028 13:07:47.105547 2419 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 28 13:07:47.105629 kubelet[2419]: E1028 13:07:47.105613 2419 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 28 13:07:47.195273 kubelet[2419]: I1028 13:07:47.195229 2419 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 28 13:07:47.200219 kubelet[2419]: E1028 13:07:47.200176 2419 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 28 13:07:47.200219 kubelet[2419]: I1028 13:07:47.200212 2419 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 28 13:07:47.201605 kubelet[2419]: E1028 13:07:47.201573 2419 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 28 
13:07:47.201605 kubelet[2419]: I1028 13:07:47.201591 2419 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 28 13:07:47.202620 kubelet[2419]: E1028 13:07:47.202581 2419 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 28 13:07:47.679674 kubelet[2419]: I1028 13:07:47.679625 2419 apiserver.go:52] "Watching apiserver" Oct 28 13:07:47.695102 kubelet[2419]: I1028 13:07:47.695076 2419 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 28 13:07:47.764397 kubelet[2419]: I1028 13:07:47.764067 2419 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 28 13:07:47.764397 kubelet[2419]: I1028 13:07:47.764159 2419 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 28 13:07:47.764397 kubelet[2419]: I1028 13:07:47.764246 2419 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 28 13:07:47.766002 kubelet[2419]: E1028 13:07:47.765974 2419 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 28 13:07:47.766071 kubelet[2419]: E1028 13:07:47.765986 2419 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 28 13:07:47.766178 kubelet[2419]: E1028 13:07:47.766118 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:47.766178 kubelet[2419]: E1028 13:07:47.766159 2419 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 28 13:07:47.766347 kubelet[2419]: E1028 13:07:47.766258 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:47.766347 kubelet[2419]: E1028 13:07:47.766266 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:48.765132 kubelet[2419]: I1028 13:07:48.765078 2419 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 28 13:07:48.766211 kubelet[2419]: I1028 13:07:48.765230 2419 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 28 13:07:48.766211 kubelet[2419]: I1028 13:07:48.765415 2419 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 28 13:07:48.769169 kubelet[2419]: E1028 13:07:48.769129 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:48.771416 kubelet[2419]: E1028 13:07:48.771391 2419 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:48.771834 kubelet[2419]: E1028 13:07:48.771778 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:49.136286 systemd[1]: Reload requested from client PID 2700 ('systemctl') (unit session-7.scope)... Oct 28 13:07:49.136351 systemd[1]: Reloading... Oct 28 13:07:49.239817 zram_generator::config[2744]: No configuration found. Oct 28 13:07:49.461701 systemd[1]: Reloading finished in 324 ms. Oct 28 13:07:49.493801 kubelet[2419]: I1028 13:07:49.491232 2419 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 28 13:07:49.491372 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 13:07:49.507081 systemd[1]: kubelet.service: Deactivated successfully. Oct 28 13:07:49.507410 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 13:07:49.507476 systemd[1]: kubelet.service: Consumed 1.105s CPU time, 131.2M memory peak. Oct 28 13:07:49.509691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 13:07:49.733772 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 13:07:49.750267 (kubelet)[2789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 28 13:07:49.799718 kubelet[2789]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 28 13:07:49.799718 kubelet[2789]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 28 13:07:49.799718 kubelet[2789]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 28 13:07:49.800243 kubelet[2789]: I1028 13:07:49.799745 2789 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 28 13:07:49.807777 kubelet[2789]: I1028 13:07:49.807734 2789 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 28 13:07:49.807777 kubelet[2789]: I1028 13:07:49.807760 2789 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 28 13:07:49.808016 kubelet[2789]: I1028 13:07:49.807980 2789 server.go:956] "Client rotation is on, will bootstrap in background" Oct 28 13:07:49.809097 kubelet[2789]: I1028 13:07:49.809070 2789 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 28 13:07:49.811006 kubelet[2789]: I1028 13:07:49.810963 2789 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 28 13:07:49.814973 kubelet[2789]: I1028 13:07:49.814942 2789 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 28 13:07:49.820148 kubelet[2789]: I1028 13:07:49.820125 2789 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 28 13:07:49.820373 kubelet[2789]: I1028 13:07:49.820346 2789 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 28 13:07:49.820515 kubelet[2789]: I1028 13:07:49.820368 2789 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 28 13:07:49.820606 kubelet[2789]: I1028 13:07:49.820523 2789 topology_manager.go:138] "Creating topology manager with none policy" Oct 28 13:07:49.820606 kubelet[2789]: I1028 13:07:49.820532 2789 container_manager_linux.go:303] "Creating device plugin manager" Oct 28 13:07:49.820606 kubelet[2789]: I1028 13:07:49.820585 2789 state_mem.go:36] "Initialized new in-memory state store" Oct 28 13:07:49.820754 kubelet[2789]: I1028 
13:07:49.820735 2789 kubelet.go:480] "Attempting to sync node with API server" Oct 28 13:07:49.820754 kubelet[2789]: I1028 13:07:49.820749 2789 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 28 13:07:49.820824 kubelet[2789]: I1028 13:07:49.820770 2789 kubelet.go:386] "Adding apiserver pod source" Oct 28 13:07:49.822638 kubelet[2789]: I1028 13:07:49.822251 2789 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 28 13:07:49.825415 kubelet[2789]: I1028 13:07:49.825397 2789 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Oct 28 13:07:49.826212 kubelet[2789]: I1028 13:07:49.826197 2789 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 28 13:07:49.832404 kubelet[2789]: I1028 13:07:49.832383 2789 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 28 13:07:49.832570 kubelet[2789]: I1028 13:07:49.832559 2789 server.go:1289] "Started kubelet" Oct 28 13:07:49.832997 kubelet[2789]: I1028 13:07:49.832951 2789 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 28 13:07:49.833307 kubelet[2789]: I1028 13:07:49.832872 2789 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 28 13:07:49.833449 kubelet[2789]: I1028 13:07:49.833434 2789 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 28 13:07:49.834798 kubelet[2789]: I1028 13:07:49.834171 2789 server.go:317] "Adding debug handlers to kubelet server" Oct 28 13:07:49.836153 kubelet[2789]: I1028 13:07:49.835961 2789 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 28 13:07:49.836702 kubelet[2789]: I1028 13:07:49.836676 2789 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 28 13:07:49.838268 kubelet[2789]: I1028 13:07:49.838226 2789 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 28 13:07:49.838652 kubelet[2789]: I1028 13:07:49.838623 2789 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 28 13:07:49.838859 kubelet[2789]: I1028 13:07:49.838848 2789 reconciler.go:26] "Reconciler: start to sync state" Oct 28 13:07:49.839219 kubelet[2789]: E1028 13:07:49.839199 2789 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 28 13:07:49.839489 kubelet[2789]: I1028 13:07:49.839417 2789 factory.go:223] Registration of the systemd container factory successfully Oct 28 13:07:49.839660 kubelet[2789]: I1028 13:07:49.839643 2789 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 28 13:07:49.841497 kubelet[2789]: I1028 13:07:49.841473 2789 factory.go:223] Registration of the containerd container factory successfully Oct 28 13:07:49.845331 kubelet[2789]: I1028 13:07:49.845293 2789 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 28 13:07:49.854291 kubelet[2789]: I1028 13:07:49.854249 2789 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Oct 28 13:07:49.854291 kubelet[2789]: I1028 13:07:49.854271 2789 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 28 13:07:49.854291 kubelet[2789]: I1028 13:07:49.854289 2789 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 28 13:07:49.854291 kubelet[2789]: I1028 13:07:49.854297 2789 kubelet.go:2436] "Starting kubelet main sync loop" Oct 28 13:07:49.854480 kubelet[2789]: E1028 13:07:49.854336 2789 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 28 13:07:49.884011 kubelet[2789]: I1028 13:07:49.883978 2789 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 28 13:07:49.884011 kubelet[2789]: I1028 13:07:49.883994 2789 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 28 13:07:49.884011 kubelet[2789]: I1028 13:07:49.884014 2789 state_mem.go:36] "Initialized new in-memory state store" Oct 28 13:07:49.884198 kubelet[2789]: I1028 13:07:49.884140 2789 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 28 13:07:49.884198 kubelet[2789]: I1028 13:07:49.884152 2789 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 28 13:07:49.884198 kubelet[2789]: I1028 13:07:49.884167 2789 policy_none.go:49] "None policy: Start" Oct 28 13:07:49.884198 kubelet[2789]: I1028 13:07:49.884176 2789 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 28 13:07:49.884198 kubelet[2789]: I1028 13:07:49.884187 2789 state_mem.go:35] "Initializing new in-memory state store" Oct 28 13:07:49.884315 kubelet[2789]: I1028 13:07:49.884268 2789 state_mem.go:75] "Updated machine memory state" Oct 28 13:07:49.888393 kubelet[2789]: E1028 13:07:49.888362 2789 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 28 13:07:49.888581 kubelet[2789]: I1028 13:07:49.888568 2789 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 28 13:07:49.888624 kubelet[2789]: I1028 13:07:49.888583 2789 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 28 13:07:49.888905 kubelet[2789]: I1028 13:07:49.888879 2789 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 28 13:07:49.890837 kubelet[2789]: E1028 13:07:49.890811 2789 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 28 13:07:49.956202 kubelet[2789]: I1028 13:07:49.956147 2789 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 28 13:07:49.956376 kubelet[2789]: I1028 13:07:49.956337 2789 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 28 13:07:49.956608 kubelet[2789]: I1028 13:07:49.956344 2789 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 28 13:07:49.961347 kubelet[2789]: E1028 13:07:49.961320 2789 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 28 13:07:49.961623 kubelet[2789]: E1028 13:07:49.961593 2789 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 28 13:07:49.961673 kubelet[2789]: E1028 13:07:49.961595 2789 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 28 13:07:49.999052 kubelet[2789]: I1028 13:07:49.998930 2789 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 13:07:50.006470 kubelet[2789]: I1028 13:07:50.006434 2789 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 28 13:07:50.006596 kubelet[2789]: I1028 13:07:50.006514 2789 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 28 13:07:50.140183 kubelet[2789]: I1028 13:07:50.140137 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:07:50.140183 kubelet[2789]: I1028 13:07:50.140172 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 28 13:07:50.140183 kubelet[2789]: I1028 13:07:50.140195 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1bc21e293b43ab0b9f5f6cd1df52603-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f1bc21e293b43ab0b9f5f6cd1df52603\") " pod="kube-system/kube-apiserver-localhost" Oct 28 13:07:50.140390 kubelet[2789]: I1028 13:07:50.140210 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:07:50.140390 kubelet[2789]: I1028 13:07:50.140301 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:07:50.140390 kubelet[2789]: I1028 13:07:50.140357 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1bc21e293b43ab0b9f5f6cd1df52603-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f1bc21e293b43ab0b9f5f6cd1df52603\") " pod="kube-system/kube-apiserver-localhost" Oct 28 13:07:50.140644 kubelet[2789]: I1028 13:07:50.140397 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1bc21e293b43ab0b9f5f6cd1df52603-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f1bc21e293b43ab0b9f5f6cd1df52603\") " pod="kube-system/kube-apiserver-localhost" Oct 28 13:07:50.140698 kubelet[2789]: I1028 13:07:50.140660 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:07:50.140698 kubelet[2789]: I1028 13:07:50.140687 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:07:50.141138 sudo[2830]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 28 13:07:50.141474 sudo[2830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Oct 28 13:07:50.262348 kubelet[2789]: E1028 13:07:50.262160 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:50.262348 kubelet[2789]: E1028 13:07:50.262233 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:50.262348 kubelet[2789]: E1028 13:07:50.262345 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:50.439798 sudo[2830]: pam_unix(sudo:session): session closed for user root Oct 28 13:07:50.825358 kubelet[2789]: I1028 13:07:50.824904 2789 apiserver.go:52] "Watching apiserver" Oct 28 13:07:50.839162 kubelet[2789]: I1028 13:07:50.839119 2789 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 28 13:07:50.865625 kubelet[2789]: E1028 13:07:50.865593 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:50.866222 kubelet[2789]: I1028 13:07:50.866190 2789 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 28 13:07:50.866666 kubelet[2789]: I1028 13:07:50.866305 2789 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-localhost" Oct 28 13:07:51.022438 kubelet[2789]: E1028 13:07:51.022390 2789 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 28 13:07:51.022614 kubelet[2789]: E1028 13:07:51.022564 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:51.022977 kubelet[2789]: E1028 13:07:51.022390 2789 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 28 13:07:51.022977 kubelet[2789]: E1028 13:07:51.022832 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:51.029771 kubelet[2789]: I1028 13:07:51.029696 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.029678726 podStartE2EDuration="3.029678726s" podCreationTimestamp="2025-10-28 13:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 13:07:51.022635411 +0000 UTC m=+1.262581362" watchObservedRunningTime="2025-10-28 13:07:51.029678726 +0000 UTC m=+1.269624677" Oct 28 13:07:51.036197 kubelet[2789]: I1028 13:07:51.036135 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.036126286 podStartE2EDuration="3.036126286s" podCreationTimestamp="2025-10-28 13:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 13:07:51.029824335 +0000 UTC m=+1.269770286" watchObservedRunningTime="2025-10-28 13:07:51.036126286 +0000 UTC m=+1.276072227" Oct 28 13:07:51.042737 kubelet[2789]: I1028 13:07:51.042505 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.042481621 podStartE2EDuration="3.042481621s" podCreationTimestamp="2025-10-28 13:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 13:07:51.03621119 +0000 UTC m=+1.276157141" watchObservedRunningTime="2025-10-28 13:07:51.042481621 +0000 UTC m=+1.282427573" Oct 28 13:07:51.657886 sudo[1834]: pam_unix(sudo:session): session closed for user root Oct 28 13:07:51.660234 sshd[1833]: Connection closed by 10.0.0.1 port 55044 Oct 28 13:07:51.660894 sshd-session[1830]: pam_unix(sshd:session): session closed for user core Oct 28 13:07:51.665699 systemd[1]: sshd@6-10.0.0.28:22-10.0.0.1:55044.service: Deactivated successfully. Oct 28 13:07:51.667989 systemd[1]: session-7.scope: Deactivated successfully. Oct 28 13:07:51.668198 systemd[1]: session-7.scope: Consumed 5.950s CPU time, 257.1M memory peak. Oct 28 13:07:51.669523 systemd-logind[1598]: Session 7 logged out. Waiting for processes to exit. Oct 28 13:07:51.670804 systemd-logind[1598]: Removed session 7. 
Oct 28 13:07:51.867326 kubelet[2789]: E1028 13:07:51.867243 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:51.867326 kubelet[2789]: E1028 13:07:51.867265 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:51.867326 kubelet[2789]: E1028 13:07:51.867242 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:52.867999 kubelet[2789]: E1028 13:07:52.867946 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:54.808263 kubelet[2789]: I1028 13:07:54.808218 2789 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 28 13:07:54.808702 kubelet[2789]: I1028 13:07:54.808682 2789 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 28 13:07:54.808729 containerd[1620]: time="2025-10-28T13:07:54.808519976Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 28 13:07:55.444205 systemd[1]: Created slice kubepods-besteffort-podd3b8684e_30b8_49e3_8bc8_704bdefc1b2a.slice - libcontainer container kubepods-besteffort-podd3b8684e_30b8_49e3_8bc8_704bdefc1b2a.slice. Oct 28 13:07:55.465985 systemd[1]: Created slice kubepods-burstable-podd4897ced_5a6a_4744_b3de_17f0816f0e4a.slice - libcontainer container kubepods-burstable-podd4897ced_5a6a_4744_b3de_17f0816f0e4a.slice. 
Oct 28 13:07:55.571334 kubelet[2789]: I1028 13:07:55.571284 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4897ced-5a6a-4744-b3de-17f0816f0e4a-cilium-config-path\") pod \"cilium-w7p79\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " pod="kube-system/cilium-w7p79" Oct 28 13:07:55.571334 kubelet[2789]: I1028 13:07:55.571318 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-host-proc-sys-net\") pod \"cilium-w7p79\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " pod="kube-system/cilium-w7p79" Oct 28 13:07:55.571334 kubelet[2789]: I1028 13:07:55.571345 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-host-proc-sys-kernel\") pod \"cilium-w7p79\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " pod="kube-system/cilium-w7p79" Oct 28 13:07:55.571630 kubelet[2789]: I1028 13:07:55.571360 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d4897ced-5a6a-4744-b3de-17f0816f0e4a-hubble-tls\") pod \"cilium-w7p79\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " pod="kube-system/cilium-w7p79" Oct 28 13:07:55.571630 kubelet[2789]: I1028 13:07:55.571389 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3b8684e-30b8-49e3-8bc8-704bdefc1b2a-lib-modules\") pod \"kube-proxy-l6prb\" (UID: \"d3b8684e-30b8-49e3-8bc8-704bdefc1b2a\") " pod="kube-system/kube-proxy-l6prb" Oct 28 13:07:55.571630 kubelet[2789]: I1028 13:07:55.571406 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm5sf\" (UniqueName: \"kubernetes.io/projected/d3b8684e-30b8-49e3-8bc8-704bdefc1b2a-kube-api-access-jm5sf\") pod \"kube-proxy-l6prb\" (UID: \"d3b8684e-30b8-49e3-8bc8-704bdefc1b2a\") " pod="kube-system/kube-proxy-l6prb" Oct 28 13:07:55.571630 kubelet[2789]: I1028 13:07:55.571422 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppw6d\" (UniqueName: \"kubernetes.io/projected/d4897ced-5a6a-4744-b3de-17f0816f0e4a-kube-api-access-ppw6d\") pod \"cilium-w7p79\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " pod="kube-system/cilium-w7p79" Oct 28 13:07:55.571630 kubelet[2789]: I1028 13:07:55.571472 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-hostproc\") pod \"cilium-w7p79\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " pod="kube-system/cilium-w7p79" Oct 28 13:07:55.571630 kubelet[2789]: I1028 13:07:55.571499 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-cni-path\") pod \"cilium-w7p79\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " pod="kube-system/cilium-w7p79" Oct 28 13:07:55.571766 kubelet[2789]: I1028 13:07:55.571517 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-lib-modules\") pod \"cilium-w7p79\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " pod="kube-system/cilium-w7p79" Oct 28 13:07:55.571766 kubelet[2789]: I1028 13:07:55.571533 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d4897ced-5a6a-4744-b3de-17f0816f0e4a-clustermesh-secrets\") pod \"cilium-w7p79\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " pod="kube-system/cilium-w7p79" Oct 28 13:07:55.571766 kubelet[2789]: I1028 13:07:55.571551 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-cilium-run\") pod \"cilium-w7p79\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " pod="kube-system/cilium-w7p79" Oct 28 13:07:55.571766 kubelet[2789]: I1028 13:07:55.571576 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d3b8684e-30b8-49e3-8bc8-704bdefc1b2a-kube-proxy\") pod \"kube-proxy-l6prb\" (UID: \"d3b8684e-30b8-49e3-8bc8-704bdefc1b2a\") " pod="kube-system/kube-proxy-l6prb" Oct 28 13:07:55.571766 kubelet[2789]: I1028 13:07:55.571592 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-cilium-cgroup\") pod \"cilium-w7p79\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " pod="kube-system/cilium-w7p79" Oct 28 13:07:55.571766 kubelet[2789]: I1028 13:07:55.571610 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-etc-cni-netd\") pod \"cilium-w7p79\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " pod="kube-system/cilium-w7p79" Oct 28 13:07:55.571942 kubelet[2789]: I1028 13:07:55.571623 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-xtables-lock\") pod \"cilium-w7p79\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " pod="kube-system/cilium-w7p79" Oct 28 13:07:55.571942 kubelet[2789]: I1028 13:07:55.571639 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3b8684e-30b8-49e3-8bc8-704bdefc1b2a-xtables-lock\") pod \"kube-proxy-l6prb\" (UID: \"d3b8684e-30b8-49e3-8bc8-704bdefc1b2a\") " pod="kube-system/kube-proxy-l6prb" Oct 28 13:07:55.571942 kubelet[2789]: I1028 13:07:55.571652 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-bpf-maps\") pod \"cilium-w7p79\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " pod="kube-system/cilium-w7p79" Oct 28 13:07:55.764637 kubelet[2789]: E1028 13:07:55.764467 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:55.765118 containerd[1620]: time="2025-10-28T13:07:55.765074444Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-proxy-l6prb,Uid:d3b8684e-30b8-49e3-8bc8-704bdefc1b2a,Namespace:kube-system,Attempt:0,}" Oct 28 13:07:55.769150 kubelet[2789]: E1028 13:07:55.769096 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:55.769967 containerd[1620]: time="2025-10-28T13:07:55.769895181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w7p79,Uid:d4897ced-5a6a-4744-b3de-17f0816f0e4a,Namespace:kube-system,Attempt:0,}" Oct 28 13:07:55.823658 containerd[1620]: time="2025-10-28T13:07:55.823552089Z" level=info msg="connecting to shim 66d71e02011bf445a8017dc5d7b4ea1ed87d035b0eae8f9d332f64300fc09a3e" address="unix:///run/containerd/s/b633e45f5bc627c68c492ab8f2a16f24cc3402accbdf146ba532ce31bfeab985" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:07:55.826242 containerd[1620]: time="2025-10-28T13:07:55.826082001Z" level=info msg="connecting to shim 6d5279b39555c0f404db5f9b9ced54359978475e5197717fe09e5e36909ae0c6" address="unix:///run/containerd/s/83dde9cdd060dfe45ab4e133a9a01c1bd879b5362494c079fcb09a0edde2e239" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:07:55.884954 systemd[1]: Started cri-containerd-66d71e02011bf445a8017dc5d7b4ea1ed87d035b0eae8f9d332f64300fc09a3e.scope - libcontainer container 66d71e02011bf445a8017dc5d7b4ea1ed87d035b0eae8f9d332f64300fc09a3e. Oct 28 13:07:55.886864 systemd[1]: Started cri-containerd-6d5279b39555c0f404db5f9b9ced54359978475e5197717fe09e5e36909ae0c6.scope - libcontainer container 6d5279b39555c0f404db5f9b9ced54359978475e5197717fe09e5e36909ae0c6. Oct 28 13:07:55.920146 containerd[1620]: time="2025-10-28T13:07:55.920078729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l6prb,Uid:d3b8684e-30b8-49e3-8bc8-704bdefc1b2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"66d71e02011bf445a8017dc5d7b4ea1ed87d035b0eae8f9d332f64300fc09a3e\"" Oct 28 13:07:55.921282 kubelet[2789]: E1028 13:07:55.921246 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:55.927095 containerd[1620]: time="2025-10-28T13:07:55.927047219Z" level=info msg="CreateContainer within sandbox \"66d71e02011bf445a8017dc5d7b4ea1ed87d035b0eae8f9d332f64300fc09a3e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 28 13:07:55.934035 containerd[1620]: time="2025-10-28T13:07:55.933973238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w7p79,Uid:d4897ced-5a6a-4744-b3de-17f0816f0e4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d5279b39555c0f404db5f9b9ced54359978475e5197717fe09e5e36909ae0c6\"" Oct 28 13:07:55.934652 kubelet[2789]: E1028 13:07:55.934619 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:55.935848 containerd[1620]: time="2025-10-28T13:07:55.935817120Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 28 13:07:55.942721 containerd[1620]: time="2025-10-28T13:07:55.942693565Z" level=info msg="Container 310ca12fc4e2ef6ce383c616effeae4a6ef9362c0b666c627ea058a7532162da: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:07:55.951038 containerd[1620]: time="2025-10-28T13:07:55.950987697Z" level=info 
msg="CreateContainer within sandbox \"66d71e02011bf445a8017dc5d7b4ea1ed87d035b0eae8f9d332f64300fc09a3e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"310ca12fc4e2ef6ce383c616effeae4a6ef9362c0b666c627ea058a7532162da\"" Oct 28 13:07:55.951465 containerd[1620]: time="2025-10-28T13:07:55.951437175Z" level=info msg="StartContainer for \"310ca12fc4e2ef6ce383c616effeae4a6ef9362c0b666c627ea058a7532162da\"" Oct 28 13:07:55.953100 containerd[1620]: time="2025-10-28T13:07:55.953063903Z" level=info msg="connecting to shim 310ca12fc4e2ef6ce383c616effeae4a6ef9362c0b666c627ea058a7532162da" address="unix:///run/containerd/s/b633e45f5bc627c68c492ab8f2a16f24cc3402accbdf146ba532ce31bfeab985" protocol=ttrpc version=3 Oct 28 13:07:55.973072 systemd[1]: Started cri-containerd-310ca12fc4e2ef6ce383c616effeae4a6ef9362c0b666c627ea058a7532162da.scope - libcontainer container 310ca12fc4e2ef6ce383c616effeae4a6ef9362c0b666c627ea058a7532162da. Oct 28 13:07:56.000416 systemd[1]: Created slice kubepods-besteffort-pod55d41802_eff1_4994_a239_a9e0d26953df.slice - libcontainer container kubepods-besteffort-pod55d41802_eff1_4994_a239_a9e0d26953df.slice. Oct 28 13:07:56.042204 containerd[1620]: time="2025-10-28T13:07:56.042172307Z" level=info msg="StartContainer for \"310ca12fc4e2ef6ce383c616effeae4a6ef9362c0b666c627ea058a7532162da\" returns successfully" Oct 28 13:07:56.078226 kubelet[2789]: I1028 13:07:56.078158 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/55d41802-eff1-4994-a239-a9e0d26953df-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-hft6h\" (UID: \"55d41802-eff1-4994-a239-a9e0d26953df\") " pod="kube-system/cilium-operator-6c4d7847fc-hft6h" Oct 28 13:07:56.078226 kubelet[2789]: I1028 13:07:56.078225 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qddgf\" (UniqueName: \"kubernetes.io/projected/55d41802-eff1-4994-a239-a9e0d26953df-kube-api-access-qddgf\") pod \"cilium-operator-6c4d7847fc-hft6h\" (UID: \"55d41802-eff1-4994-a239-a9e0d26953df\") " pod="kube-system/cilium-operator-6c4d7847fc-hft6h" Oct 28 13:07:56.304199 kubelet[2789]: E1028 13:07:56.304045 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:56.304518 containerd[1620]: time="2025-10-28T13:07:56.304472530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hft6h,Uid:55d41802-eff1-4994-a239-a9e0d26953df,Namespace:kube-system,Attempt:0,}" Oct 28 13:07:56.325229 containerd[1620]: time="2025-10-28T13:07:56.325180115Z" level=info msg="connecting to shim 280b77e1babea74bbac916ebf1fef342ded3bfb352f38f5095a6c271211baaad" address="unix:///run/containerd/s/68742f63e0016dd164a10627ab1d66aee0f2fe1ec551c8ddbe15222d6d3b495e" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:07:56.349985 systemd[1]: Started cri-containerd-280b77e1babea74bbac916ebf1fef342ded3bfb352f38f5095a6c271211baaad.scope - libcontainer container 280b77e1babea74bbac916ebf1fef342ded3bfb352f38f5095a6c271211baaad. 
Oct 28 13:07:56.400602 containerd[1620]: time="2025-10-28T13:07:56.400559894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hft6h,Uid:55d41802-eff1-4994-a239-a9e0d26953df,Namespace:kube-system,Attempt:0,} returns sandbox id \"280b77e1babea74bbac916ebf1fef342ded3bfb352f38f5095a6c271211baaad\"" Oct 28 13:07:56.402059 kubelet[2789]: E1028 13:07:56.402023 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:56.883003 kubelet[2789]: E1028 13:07:56.882964 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:56.892324 kubelet[2789]: I1028 13:07:56.892261 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l6prb" podStartSLOduration=1.892244136 podStartE2EDuration="1.892244136s" podCreationTimestamp="2025-10-28 13:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 13:07:56.892137573 +0000 UTC m=+7.132083524" watchObservedRunningTime="2025-10-28 13:07:56.892244136 +0000 UTC m=+7.132190087" Oct 28 13:07:58.805085 kubelet[2789]: E1028 13:07:58.804975 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:07:58.888551 kubelet[2789]: E1028 13:07:58.888491 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:00.255856 kubelet[2789]: E1028 13:08:00.255544 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:00.892640 kubelet[2789]: E1028 13:08:00.892524 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:01.557633 kubelet[2789]: E1028 13:08:01.557519 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:01.893689 kubelet[2789]: E1028 13:08:01.893559 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:01.894108 kubelet[2789]: E1028 13:08:01.894074 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:02.257002 update_engine[1599]: I20251028 13:08:02.256731 1599 update_attempter.cc:509] Updating boot flags... Oct 28 13:08:04.410639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2081831163.mount: Deactivated successfully. 
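The kubelet entries above use klog's header layout (severity letter, MMDD, wall-clock time, PID, source file:line, then "]"); update_engine's "I20251028 …" header differs only in carrying a full year. A small parser for the kubelet form, as a sketch:

    import re

    # Parse the klog-style header seen in the kubelet entries above, e.g.
    # 'E1028 13:07:55.764467 2789 dns.go:153] ...'
    KLOG_HEADER = re.compile(
        r"(?P<sev>[IWEF])(?P<mmdd>\d{4}) "
        r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) +(?P<pid>\d+) "
        r"(?P<source>\S+:\d+)\]")

    m = KLOG_HEADER.search('E1028 13:07:55.764467 2789 dns.go:153] "Nameserver limits exceeded"')
    print(m["sev"], m["mmdd"], m["time"], m["pid"], m["source"])
    # E 1028 13:07:55.764467 2789 dns.go:153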
Oct 28 13:08:06.381215 containerd[1620]: time="2025-10-28T13:08:06.381132737Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:08:06.381986 containerd[1620]: time="2025-10-28T13:08:06.381937452Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=155041231" Oct 28 13:08:06.383125 containerd[1620]: time="2025-10-28T13:08:06.383089835Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:08:06.385229 containerd[1620]: time="2025-10-28T13:08:06.385192841Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.449337448s" Oct 28 13:08:06.385266 containerd[1620]: time="2025-10-28T13:08:06.385228307Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Oct 28 13:08:06.386280 containerd[1620]: time="2025-10-28T13:08:06.386252900Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 28 13:08:06.389075 containerd[1620]: time="2025-10-28T13:08:06.389030843Z" level=info msg="CreateContainer within sandbox \"6d5279b39555c0f404db5f9b9ced54359978475e5197717fe09e5e36909ae0c6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 28 13:08:06.397951 containerd[1620]: time="2025-10-28T13:08:06.397894500Z" level=info msg="Container 6d83f8566aa2658167241a6121205e4941cd32470805d1e228192f5df1a45110: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:08:06.404173 containerd[1620]: time="2025-10-28T13:08:06.404139615Z" level=info msg="CreateContainer within sandbox \"6d5279b39555c0f404db5f9b9ced54359978475e5197717fe09e5e36909ae0c6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6d83f8566aa2658167241a6121205e4941cd32470805d1e228192f5df1a45110\"" Oct 28 13:08:06.404649 containerd[1620]: time="2025-10-28T13:08:06.404576833Z" level=info msg="StartContainer for \"6d83f8566aa2658167241a6121205e4941cd32470805d1e228192f5df1a45110\"" Oct 28 13:08:06.405628 containerd[1620]: time="2025-10-28T13:08:06.405602487Z" level=info msg="connecting to shim 6d83f8566aa2658167241a6121205e4941cd32470805d1e228192f5df1a45110" address="unix:///run/containerd/s/83dde9cdd060dfe45ab4e133a9a01c1bd879b5362494c079fcb09a0edde2e239" protocol=ttrpc version=3 Oct 28 13:08:06.428013 systemd[1]: Started cri-containerd-6d83f8566aa2658167241a6121205e4941cd32470805d1e228192f5df1a45110.scope - libcontainer container 6d83f8566aa2658167241a6121205e4941cd32470805d1e228192f5df1a45110. 
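The 10.449337448s that containerd reports for the cilium image pull is consistent with the surrounding timestamps: the PullImage request was logged at 13:07:55.935817120Z earlier in this section and the "Pulled image" entry is stamped 13:08:06.385192841Z. A quick check (timestamps truncated to microseconds, since strptime's %f takes at most six digits):

    from datetime import datetime

    FMT = "%Y-%m-%dT%H:%M:%S.%fZ"  # %f accepts at most 6 fractional digits

    start = datetime.strptime("2025-10-28T13:07:55.935817Z", FMT)  # PullImage logged
    end = datetime.strptime("2025-10-28T13:08:06.385192Z", FMT)    # "Pulled image ..." logged
    print((end - start).total_seconds())
    # ~10.449375 s; containerd's own 10.449337448s is measured just before the
    # log line is emitted, hence the few tens of microseconds of difference.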
Oct 28 13:08:06.460327 containerd[1620]: time="2025-10-28T13:08:06.460285972Z" level=info msg="StartContainer for \"6d83f8566aa2658167241a6121205e4941cd32470805d1e228192f5df1a45110\" returns successfully" Oct 28 13:08:06.473117 systemd[1]: cri-containerd-6d83f8566aa2658167241a6121205e4941cd32470805d1e228192f5df1a45110.scope: Deactivated successfully. Oct 28 13:08:06.476202 containerd[1620]: time="2025-10-28T13:08:06.476156999Z" level=info msg="received exit event container_id:\"6d83f8566aa2658167241a6121205e4941cd32470805d1e228192f5df1a45110\" id:\"6d83f8566aa2658167241a6121205e4941cd32470805d1e228192f5df1a45110\" pid:3236 exited_at:{seconds:1761656886 nanos:475540170}" Oct 28 13:08:06.497139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d83f8566aa2658167241a6121205e4941cd32470805d1e228192f5df1a45110-rootfs.mount: Deactivated successfully. Oct 28 13:08:06.904594 kubelet[2789]: E1028 13:08:06.904543 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:06.914218 containerd[1620]: time="2025-10-28T13:08:06.914160581Z" level=info msg="CreateContainer within sandbox \"6d5279b39555c0f404db5f9b9ced54359978475e5197717fe09e5e36909ae0c6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 28 13:08:06.926777 containerd[1620]: time="2025-10-28T13:08:06.926718047Z" level=info msg="Container 3df9870274f34f73fd5b3703df62a4377d9e02c2acb8752a0807443ff45fcb23: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:08:06.932323 containerd[1620]: time="2025-10-28T13:08:06.932287121Z" level=info msg="CreateContainer within sandbox \"6d5279b39555c0f404db5f9b9ced54359978475e5197717fe09e5e36909ae0c6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3df9870274f34f73fd5b3703df62a4377d9e02c2acb8752a0807443ff45fcb23\"" Oct 28 13:08:06.932872 containerd[1620]: time="2025-10-28T13:08:06.932812105Z" level=info msg="StartContainer for \"3df9870274f34f73fd5b3703df62a4377d9e02c2acb8752a0807443ff45fcb23\"" Oct 28 13:08:06.934994 containerd[1620]: time="2025-10-28T13:08:06.934924298Z" level=info msg="connecting to shim 3df9870274f34f73fd5b3703df62a4377d9e02c2acb8752a0807443ff45fcb23" address="unix:///run/containerd/s/83dde9cdd060dfe45ab4e133a9a01c1bd879b5362494c079fcb09a0edde2e239" protocol=ttrpc version=3 Oct 28 13:08:06.955930 systemd[1]: Started cri-containerd-3df9870274f34f73fd5b3703df62a4377d9e02c2acb8752a0807443ff45fcb23.scope - libcontainer container 3df9870274f34f73fd5b3703df62a4377d9e02c2acb8752a0807443ff45fcb23. Oct 28 13:08:06.987822 containerd[1620]: time="2025-10-28T13:08:06.987744263Z" level=info msg="StartContainer for \"3df9870274f34f73fd5b3703df62a4377d9e02c2acb8752a0807443ff45fcb23\" returns successfully" Oct 28 13:08:07.005520 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 28 13:08:07.005983 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 28 13:08:07.006107 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Oct 28 13:08:07.008673 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 28 13:08:07.010923 systemd[1]: cri-containerd-3df9870274f34f73fd5b3703df62a4377d9e02c2acb8752a0807443ff45fcb23.scope: Deactivated successfully. 
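The exit event for the mount-cgroup container above carries a protobuf-style timestamp (exited_at seconds/nanos); converting it shows the container exited at 13:08:06.475540 UTC, a few hundred microseconds before containerd logged the "received exit event" entry at 13:08:06.476. A small conversion sketch:

    from datetime import datetime, timedelta, timezone

    # exited_at from the "received exit event" entry above.
    exited_at = {"seconds": 1761656886, "nanos": 475540170}
    ts = datetime.fromtimestamp(exited_at["seconds"], tz=timezone.utc) \
         + timedelta(microseconds=exited_at["nanos"] // 1000)
    print(ts.isoformat())  # 2025-10-28T13:08:06.475540+00:00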
Oct 28 13:08:07.011552 containerd[1620]: time="2025-10-28T13:08:07.011533403Z" level=info msg="received exit event container_id:\"3df9870274f34f73fd5b3703df62a4377d9e02c2acb8752a0807443ff45fcb23\" id:\"3df9870274f34f73fd5b3703df62a4377d9e02c2acb8752a0807443ff45fcb23\" pid:3283 exited_at:{seconds:1761656887 nanos:11343934}" Oct 28 13:08:07.039845 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 28 13:08:07.653433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1264624381.mount: Deactivated successfully. Oct 28 13:08:07.908600 kubelet[2789]: E1028 13:08:07.908435 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:07.912996 containerd[1620]: time="2025-10-28T13:08:07.912952823Z" level=info msg="CreateContainer within sandbox \"6d5279b39555c0f404db5f9b9ced54359978475e5197717fe09e5e36909ae0c6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 28 13:08:07.995151 containerd[1620]: time="2025-10-28T13:08:07.993534303Z" level=info msg="Container e0ce7ab7c0f38213c858a93536bdcb0729c99faf4c711d1e74ab02ff82798b13: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:08:08.050871 containerd[1620]: time="2025-10-28T13:08:08.050822570Z" level=info msg="CreateContainer within sandbox \"6d5279b39555c0f404db5f9b9ced54359978475e5197717fe09e5e36909ae0c6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e0ce7ab7c0f38213c858a93536bdcb0729c99faf4c711d1e74ab02ff82798b13\"" Oct 28 13:08:08.051387 containerd[1620]: time="2025-10-28T13:08:08.051336333Z" level=info msg="StartContainer for \"e0ce7ab7c0f38213c858a93536bdcb0729c99faf4c711d1e74ab02ff82798b13\"" Oct 28 13:08:08.052654 containerd[1620]: time="2025-10-28T13:08:08.052628810Z" level=info msg="connecting to shim e0ce7ab7c0f38213c858a93536bdcb0729c99faf4c711d1e74ab02ff82798b13" address="unix:///run/containerd/s/83dde9cdd060dfe45ab4e133a9a01c1bd879b5362494c079fcb09a0edde2e239" protocol=ttrpc version=3 Oct 28 13:08:08.068342 containerd[1620]: time="2025-10-28T13:08:08.068139375Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:08:08.069741 containerd[1620]: time="2025-10-28T13:08:08.069698988Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=0" Oct 28 13:08:08.071799 containerd[1620]: time="2025-10-28T13:08:08.071753187Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:08:08.073249 containerd[1620]: time="2025-10-28T13:08:08.073210466Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.686925306s" Oct 28 13:08:08.073319 containerd[1620]: time="2025-10-28T13:08:08.073251193Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Oct 28 13:08:08.077006 containerd[1620]: time="2025-10-28T13:08:08.076976707Z" level=info msg="CreateContainer within sandbox \"280b77e1babea74bbac916ebf1fef342ded3bfb352f38f5095a6c271211baaad\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 28 13:08:08.079958 systemd[1]: Started cri-containerd-e0ce7ab7c0f38213c858a93536bdcb0729c99faf4c711d1e74ab02ff82798b13.scope - libcontainer container e0ce7ab7c0f38213c858a93536bdcb0729c99faf4c711d1e74ab02ff82798b13. Oct 28 13:08:08.084590 containerd[1620]: time="2025-10-28T13:08:08.084549794Z" level=info msg="Container 5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:08:08.092490 containerd[1620]: time="2025-10-28T13:08:08.092441385Z" level=info msg="CreateContainer within sandbox \"280b77e1babea74bbac916ebf1fef342ded3bfb352f38f5095a6c271211baaad\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4\"" Oct 28 13:08:08.094520 containerd[1620]: time="2025-10-28T13:08:08.094431784Z" level=info msg="StartContainer for \"5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4\"" Oct 28 13:08:08.095424 containerd[1620]: time="2025-10-28T13:08:08.095402050Z" level=info msg="connecting to shim 5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4" address="unix:///run/containerd/s/68742f63e0016dd164a10627ab1d66aee0f2fe1ec551c8ddbe15222d6d3b495e" protocol=ttrpc version=3 Oct 28 13:08:08.131300 systemd[1]: Started cri-containerd-5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4.scope - libcontainer container 5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4. Oct 28 13:08:08.139402 systemd[1]: cri-containerd-e0ce7ab7c0f38213c858a93536bdcb0729c99faf4c711d1e74ab02ff82798b13.scope: Deactivated successfully. 
Oct 28 13:08:08.141353 containerd[1620]: time="2025-10-28T13:08:08.141155743Z" level=info msg="StartContainer for \"e0ce7ab7c0f38213c858a93536bdcb0729c99faf4c711d1e74ab02ff82798b13\" returns successfully" Oct 28 13:08:08.141353 containerd[1620]: time="2025-10-28T13:08:08.141280760Z" level=info msg="received exit event container_id:\"e0ce7ab7c0f38213c858a93536bdcb0729c99faf4c711d1e74ab02ff82798b13\" id:\"e0ce7ab7c0f38213c858a93536bdcb0729c99faf4c711d1e74ab02ff82798b13\" pid:3347 exited_at:{seconds:1761656888 nanos:140892496}" Oct 28 13:08:08.184432 containerd[1620]: time="2025-10-28T13:08:08.183991873Z" level=info msg="StartContainer for \"5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4\" returns successfully" Oct 28 13:08:08.945530 kubelet[2789]: E1028 13:08:08.944417 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:08.948176 kubelet[2789]: E1028 13:08:08.948155 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:08.952027 containerd[1620]: time="2025-10-28T13:08:08.951987381Z" level=info msg="CreateContainer within sandbox \"6d5279b39555c0f404db5f9b9ced54359978475e5197717fe09e5e36909ae0c6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 28 13:08:08.954227 kubelet[2789]: I1028 13:08:08.954161 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-hft6h" podStartSLOduration=2.283262899 podStartE2EDuration="13.954126991s" podCreationTimestamp="2025-10-28 13:07:55 +0000 UTC" firstStartedPulling="2025-10-28 13:07:56.403128264 +0000 UTC m=+6.643074215" lastFinishedPulling="2025-10-28 13:08:08.073992356 +0000 UTC m=+18.313938307" observedRunningTime="2025-10-28 13:08:08.953593541 +0000 UTC m=+19.193539492" watchObservedRunningTime="2025-10-28 13:08:08.954126991 +0000 UTC m=+19.194072942" Oct 28 13:08:08.975214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1510562516.mount: Deactivated successfully. Oct 28 13:08:08.987864 containerd[1620]: time="2025-10-28T13:08:08.974875574Z" level=info msg="Container 80ea87898e44450877759e8e8ee2b01e3d53af7e5e2169a1a82402abb5afb444: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:08:08.995229 containerd[1620]: time="2025-10-28T13:08:08.995179875Z" level=info msg="CreateContainer within sandbox \"6d5279b39555c0f404db5f9b9ced54359978475e5197717fe09e5e36909ae0c6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"80ea87898e44450877759e8e8ee2b01e3d53af7e5e2169a1a82402abb5afb444\"" Oct 28 13:08:08.996136 containerd[1620]: time="2025-10-28T13:08:08.996103173Z" level=info msg="StartContainer for \"80ea87898e44450877759e8e8ee2b01e3d53af7e5e2169a1a82402abb5afb444\"" Oct 28 13:08:09.017660 containerd[1620]: time="2025-10-28T13:08:09.017597764Z" level=info msg="connecting to shim 80ea87898e44450877759e8e8ee2b01e3d53af7e5e2169a1a82402abb5afb444" address="unix:///run/containerd/s/83dde9cdd060dfe45ab4e133a9a01c1bd879b5362494c079fcb09a0edde2e239" protocol=ttrpc version=3 Oct 28 13:08:09.037942 systemd[1]: Started cri-containerd-80ea87898e44450877759e8e8ee2b01e3d53af7e5e2169a1a82402abb5afb444.scope - libcontainer container 80ea87898e44450877759e8e8ee2b01e3d53af7e5e2169a1a82402abb5afb444. 
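The pod_startup_latency_tracker entry for cilium-operator-6c4d7847fc-hft6h above can be reproduced from its own fields: the E2E figure is the watch-observed running time minus the pod creation timestamp, and the SLO figure additionally excludes the image-pull window (firstStartedPulling to lastFinishedPulling). Whether the tracker computes it exactly this way internally is an assumption, but the arithmetic matches to the microsecond:

    from datetime import datetime

    def t(s: str) -> datetime:
        return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f")

    created   = t("2025-10-28 13:07:55.000000")   # podCreationTimestamp
    observed  = t("2025-10-28 13:08:08.954126")   # watchObservedRunningTime (µs precision)
    pull_from = t("2025-10-28 13:07:56.403128")   # firstStartedPulling
    pull_to   = t("2025-10-28 13:08:08.073992")   # lastFinishedPulling

    e2e = (observed - created).total_seconds()
    slo = e2e - (pull_to - pull_from).total_seconds()
    print(e2e, slo)  # ~13.954126 and ~2.283262, matching the logged durations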
Oct 28 13:08:09.065759 systemd[1]: cri-containerd-80ea87898e44450877759e8e8ee2b01e3d53af7e5e2169a1a82402abb5afb444.scope: Deactivated successfully. Oct 28 13:08:09.068135 containerd[1620]: time="2025-10-28T13:08:09.068092112Z" level=info msg="received exit event container_id:\"80ea87898e44450877759e8e8ee2b01e3d53af7e5e2169a1a82402abb5afb444\" id:\"80ea87898e44450877759e8e8ee2b01e3d53af7e5e2169a1a82402abb5afb444\" pid:3422 exited_at:{seconds:1761656889 nanos:65913690}" Oct 28 13:08:09.078526 containerd[1620]: time="2025-10-28T13:08:09.078476093Z" level=info msg="StartContainer for \"80ea87898e44450877759e8e8ee2b01e3d53af7e5e2169a1a82402abb5afb444\" returns successfully" Oct 28 13:08:09.399537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80ea87898e44450877759e8e8ee2b01e3d53af7e5e2169a1a82402abb5afb444-rootfs.mount: Deactivated successfully. Oct 28 13:08:09.953286 kubelet[2789]: E1028 13:08:09.953211 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:09.954049 kubelet[2789]: E1028 13:08:09.953311 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:09.957322 containerd[1620]: time="2025-10-28T13:08:09.957252284Z" level=info msg="CreateContainer within sandbox \"6d5279b39555c0f404db5f9b9ced54359978475e5197717fe09e5e36909ae0c6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 28 13:08:09.973560 containerd[1620]: time="2025-10-28T13:08:09.973514409Z" level=info msg="Container 654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:08:09.976687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1472979132.mount: Deactivated successfully. Oct 28 13:08:09.982907 containerd[1620]: time="2025-10-28T13:08:09.982838004Z" level=info msg="CreateContainer within sandbox \"6d5279b39555c0f404db5f9b9ced54359978475e5197717fe09e5e36909ae0c6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00\"" Oct 28 13:08:09.983613 containerd[1620]: time="2025-10-28T13:08:09.983558457Z" level=info msg="StartContainer for \"654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00\"" Oct 28 13:08:09.985066 containerd[1620]: time="2025-10-28T13:08:09.985031124Z" level=info msg="connecting to shim 654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00" address="unix:///run/containerd/s/83dde9cdd060dfe45ab4e133a9a01c1bd879b5362494c079fcb09a0edde2e239" protocol=ttrpc version=3 Oct 28 13:08:10.013021 systemd[1]: Started cri-containerd-654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00.scope - libcontainer container 654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00. Oct 28 13:08:10.057101 containerd[1620]: time="2025-10-28T13:08:10.056975724Z" level=info msg="StartContainer for \"654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00\" returns successfully" Oct 28 13:08:10.274979 kubelet[2789]: I1028 13:08:10.274848 2789 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 28 13:08:10.330246 systemd[1]: Created slice kubepods-burstable-pod278f9f77_0c5a_47bc_b5f0_9d421e88c978.slice - libcontainer container kubepods-burstable-pod278f9f77_0c5a_47bc_b5f0_9d421e88c978.slice. 
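The slice names systemd creates above encode the pod's QoS class and UID: "kubepods-<qos>-pod<uid>.slice", with the dashes of the UID turned into underscores (compare the coredns UIDs in the volume entries that follow). A helper mirroring that pattern as it appears in this log; anything beyond the slice name is a cgroup-driver detail not shown here:

    # Derive the systemd slice name kubelet uses for a pod, as seen in the
    # "Created slice kubepods-burstable-pod278f9f77_..." entries above.
    def kubepods_slice(pod_uid: str, qos: str = "burstable") -> str:
        return f"kubepods-{qos}-pod{pod_uid.replace('-', '_')}.slice"

    print(kubepods_slice("278f9f77-0c5a-47bc-b5f0-9d421e88c978"))
    # kubepods-burstable-pod278f9f77_0c5a_47bc_b5f0_9d421e88c978.slice
    print(kubepods_slice("d3b8684e-30b8-49e3-8bc8-704bdefc1b2a", qos="besteffort"))
    # kubepods-besteffort-podd3b8684e_30b8_49e3_8bc8_704bdefc1b2a.slice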
Oct 28 13:08:10.336588 systemd[1]: Created slice kubepods-burstable-podf4bf70e4_c23c_456b_997b_0fc2fc73e78c.slice - libcontainer container kubepods-burstable-podf4bf70e4_c23c_456b_997b_0fc2fc73e78c.slice. Oct 28 13:08:10.375996 kubelet[2789]: I1028 13:08:10.375932 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j5ll\" (UniqueName: \"kubernetes.io/projected/278f9f77-0c5a-47bc-b5f0-9d421e88c978-kube-api-access-8j5ll\") pod \"coredns-674b8bbfcf-wgxnk\" (UID: \"278f9f77-0c5a-47bc-b5f0-9d421e88c978\") " pod="kube-system/coredns-674b8bbfcf-wgxnk" Oct 28 13:08:10.375996 kubelet[2789]: I1028 13:08:10.375989 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4bf70e4-c23c-456b-997b-0fc2fc73e78c-config-volume\") pod \"coredns-674b8bbfcf-vl478\" (UID: \"f4bf70e4-c23c-456b-997b-0fc2fc73e78c\") " pod="kube-system/coredns-674b8bbfcf-vl478" Oct 28 13:08:10.375996 kubelet[2789]: I1028 13:08:10.376011 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/278f9f77-0c5a-47bc-b5f0-9d421e88c978-config-volume\") pod \"coredns-674b8bbfcf-wgxnk\" (UID: \"278f9f77-0c5a-47bc-b5f0-9d421e88c978\") " pod="kube-system/coredns-674b8bbfcf-wgxnk" Oct 28 13:08:10.376228 kubelet[2789]: I1028 13:08:10.376025 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gtg7\" (UniqueName: \"kubernetes.io/projected/f4bf70e4-c23c-456b-997b-0fc2fc73e78c-kube-api-access-2gtg7\") pod \"coredns-674b8bbfcf-vl478\" (UID: \"f4bf70e4-c23c-456b-997b-0fc2fc73e78c\") " pod="kube-system/coredns-674b8bbfcf-vl478" Oct 28 13:08:10.633668 kubelet[2789]: E1028 13:08:10.633603 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:10.634618 containerd[1620]: time="2025-10-28T13:08:10.634573997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wgxnk,Uid:278f9f77-0c5a-47bc-b5f0-9d421e88c978,Namespace:kube-system,Attempt:0,}" Oct 28 13:08:10.639799 kubelet[2789]: E1028 13:08:10.639743 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:10.643469 containerd[1620]: time="2025-10-28T13:08:10.643430030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vl478,Uid:f4bf70e4-c23c-456b-997b-0fc2fc73e78c,Namespace:kube-system,Attempt:0,}" Oct 28 13:08:10.960373 kubelet[2789]: E1028 13:08:10.960243 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:10.976485 kubelet[2789]: I1028 13:08:10.976397 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-w7p79" podStartSLOduration=5.5257812600000005 podStartE2EDuration="15.976351028s" podCreationTimestamp="2025-10-28 13:07:55 +0000 UTC" firstStartedPulling="2025-10-28 13:07:55.935505184 +0000 UTC m=+6.175451135" lastFinishedPulling="2025-10-28 13:08:06.386074952 +0000 UTC m=+16.626020903" observedRunningTime="2025-10-28 13:08:10.975834269 +0000 UTC m=+21.215780221" 
watchObservedRunningTime="2025-10-28 13:08:10.976351028 +0000 UTC m=+21.216296979" Oct 28 13:08:11.962505 kubelet[2789]: E1028 13:08:11.962456 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:12.291870 systemd-networkd[1522]: cilium_host: Link UP Oct 28 13:08:12.292114 systemd-networkd[1522]: cilium_net: Link UP Oct 28 13:08:12.292394 systemd-networkd[1522]: cilium_host: Gained carrier Oct 28 13:08:12.292652 systemd-networkd[1522]: cilium_net: Gained carrier Oct 28 13:08:12.350928 systemd-networkd[1522]: cilium_host: Gained IPv6LL Oct 28 13:08:12.396999 systemd-networkd[1522]: cilium_vxlan: Link UP Oct 28 13:08:12.397010 systemd-networkd[1522]: cilium_vxlan: Gained carrier Oct 28 13:08:12.606815 kernel: NET: Registered PF_ALG protocol family Oct 28 13:08:12.964844 kubelet[2789]: E1028 13:08:12.964743 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:13.001037 systemd-networkd[1522]: cilium_net: Gained IPv6LL Oct 28 13:08:13.255207 systemd-networkd[1522]: lxc_health: Link UP Oct 28 13:08:13.255516 systemd-networkd[1522]: lxc_health: Gained carrier Oct 28 13:08:13.642464 systemd-networkd[1522]: cilium_vxlan: Gained IPv6LL Oct 28 13:08:13.684904 systemd-networkd[1522]: lxcb741faf205d5: Link UP Oct 28 13:08:13.686831 kernel: eth0: renamed from tmp08079 Oct 28 13:08:13.690203 systemd-networkd[1522]: lxcb741faf205d5: Gained carrier Oct 28 13:08:13.690897 systemd-networkd[1522]: lxc2af874b0b5a9: Link UP Oct 28 13:08:13.705820 kernel: eth0: renamed from tmp3ca38 Oct 28 13:08:13.709559 systemd-networkd[1522]: lxc2af874b0b5a9: Gained carrier Oct 28 13:08:13.966365 kubelet[2789]: E1028 13:08:13.966221 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:14.986010 systemd-networkd[1522]: lxc2af874b0b5a9: Gained IPv6LL Oct 28 13:08:15.048997 systemd-networkd[1522]: lxc_health: Gained IPv6LL Oct 28 13:08:15.241039 systemd-networkd[1522]: lxcb741faf205d5: Gained IPv6LL Oct 28 13:08:16.413539 systemd[1]: Started sshd@7-10.0.0.28:22-10.0.0.1:48446.service - OpenSSH per-connection server daemon (10.0.0.1:48446). Oct 28 13:08:16.487054 sshd[3964]: Accepted publickey for core from 10.0.0.1 port 48446 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:08:16.488618 sshd-session[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:08:16.494265 systemd-logind[1598]: New session 8 of user core. Oct 28 13:08:16.507930 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 28 13:08:16.716386 sshd[3967]: Connection closed by 10.0.0.1 port 48446 Oct 28 13:08:16.716989 sshd-session[3964]: pam_unix(sshd:session): session closed for user core Oct 28 13:08:16.721606 systemd[1]: sshd@7-10.0.0.28:22-10.0.0.1:48446.service: Deactivated successfully. Oct 28 13:08:16.723483 systemd[1]: session-8.scope: Deactivated successfully. Oct 28 13:08:16.724251 systemd-logind[1598]: Session 8 logged out. Waiting for processes to exit. Oct 28 13:08:16.725590 systemd-logind[1598]: Removed session 8. 
Oct 28 13:08:17.190496 containerd[1620]: time="2025-10-28T13:08:17.190439610Z" level=info msg="connecting to shim 3ca38aca997a600a62b31a66993db6e7bcf31370be2eb7e3365e5e60dc5afcbf" address="unix:///run/containerd/s/8c602b3d4c81242fd668aa16cb43ee6eab3a67c8eea9f4f37403b1d721fe29f2" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:08:17.191655 containerd[1620]: time="2025-10-28T13:08:17.191627131Z" level=info msg="connecting to shim 080791c5dce0026a033f2bd22c1651769f8989b42b4296ddf0b63ad42f5c7a04" address="unix:///run/containerd/s/0189f4ffc15c4e63ad63f737df9dfef6d5c720e81632741667ca4359b5b9b069" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:08:17.220996 systemd[1]: Started cri-containerd-080791c5dce0026a033f2bd22c1651769f8989b42b4296ddf0b63ad42f5c7a04.scope - libcontainer container 080791c5dce0026a033f2bd22c1651769f8989b42b4296ddf0b63ad42f5c7a04. Oct 28 13:08:17.225240 systemd[1]: Started cri-containerd-3ca38aca997a600a62b31a66993db6e7bcf31370be2eb7e3365e5e60dc5afcbf.scope - libcontainer container 3ca38aca997a600a62b31a66993db6e7bcf31370be2eb7e3365e5e60dc5afcbf. Oct 28 13:08:17.235311 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 13:08:17.239476 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 13:08:17.267395 containerd[1620]: time="2025-10-28T13:08:17.267346991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wgxnk,Uid:278f9f77-0c5a-47bc-b5f0-9d421e88c978,Namespace:kube-system,Attempt:0,} returns sandbox id \"080791c5dce0026a033f2bd22c1651769f8989b42b4296ddf0b63ad42f5c7a04\"" Oct 28 13:08:17.272211 kubelet[2789]: E1028 13:08:17.272183 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:17.277914 containerd[1620]: time="2025-10-28T13:08:17.277871117Z" level=info msg="CreateContainer within sandbox \"080791c5dce0026a033f2bd22c1651769f8989b42b4296ddf0b63ad42f5c7a04\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 28 13:08:17.278625 containerd[1620]: time="2025-10-28T13:08:17.278596446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vl478,Uid:f4bf70e4-c23c-456b-997b-0fc2fc73e78c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ca38aca997a600a62b31a66993db6e7bcf31370be2eb7e3365e5e60dc5afcbf\"" Oct 28 13:08:17.279333 kubelet[2789]: E1028 13:08:17.279307 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:17.283417 containerd[1620]: time="2025-10-28T13:08:17.283375708Z" level=info msg="CreateContainer within sandbox \"3ca38aca997a600a62b31a66993db6e7bcf31370be2eb7e3365e5e60dc5afcbf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 28 13:08:17.292033 containerd[1620]: time="2025-10-28T13:08:17.291993795Z" level=info msg="Container 49738a926bbb72555d51d166c5302d7f76bdc9949f7d3b13b03f231dbfe585c1: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:08:17.293892 containerd[1620]: time="2025-10-28T13:08:17.293828528Z" level=info msg="Container 90064bf53f4f439221e8432abbca59c4854504c1f0f4f6142848b64f7cbaece2: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:08:17.303124 containerd[1620]: time="2025-10-28T13:08:17.303016582Z" level=info msg="CreateContainer 
within sandbox \"080791c5dce0026a033f2bd22c1651769f8989b42b4296ddf0b63ad42f5c7a04\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"49738a926bbb72555d51d166c5302d7f76bdc9949f7d3b13b03f231dbfe585c1\"" Oct 28 13:08:17.303508 containerd[1620]: time="2025-10-28T13:08:17.303479336Z" level=info msg="StartContainer for \"49738a926bbb72555d51d166c5302d7f76bdc9949f7d3b13b03f231dbfe585c1\"" Oct 28 13:08:17.304345 containerd[1620]: time="2025-10-28T13:08:17.304310514Z" level=info msg="connecting to shim 49738a926bbb72555d51d166c5302d7f76bdc9949f7d3b13b03f231dbfe585c1" address="unix:///run/containerd/s/0189f4ffc15c4e63ad63f737df9dfef6d5c720e81632741667ca4359b5b9b069" protocol=ttrpc version=3 Oct 28 13:08:17.309411 containerd[1620]: time="2025-10-28T13:08:17.309374994Z" level=info msg="CreateContainer within sandbox \"3ca38aca997a600a62b31a66993db6e7bcf31370be2eb7e3365e5e60dc5afcbf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"90064bf53f4f439221e8432abbca59c4854504c1f0f4f6142848b64f7cbaece2\"" Oct 28 13:08:17.310744 containerd[1620]: time="2025-10-28T13:08:17.309981439Z" level=info msg="StartContainer for \"90064bf53f4f439221e8432abbca59c4854504c1f0f4f6142848b64f7cbaece2\"" Oct 28 13:08:17.311034 containerd[1620]: time="2025-10-28T13:08:17.310971197Z" level=info msg="connecting to shim 90064bf53f4f439221e8432abbca59c4854504c1f0f4f6142848b64f7cbaece2" address="unix:///run/containerd/s/8c602b3d4c81242fd668aa16cb43ee6eab3a67c8eea9f4f37403b1d721fe29f2" protocol=ttrpc version=3 Oct 28 13:08:17.327923 systemd[1]: Started cri-containerd-49738a926bbb72555d51d166c5302d7f76bdc9949f7d3b13b03f231dbfe585c1.scope - libcontainer container 49738a926bbb72555d51d166c5302d7f76bdc9949f7d3b13b03f231dbfe585c1. Oct 28 13:08:17.331995 systemd[1]: Started cri-containerd-90064bf53f4f439221e8432abbca59c4854504c1f0f4f6142848b64f7cbaece2.scope - libcontainer container 90064bf53f4f439221e8432abbca59c4854504c1f0f4f6142848b64f7cbaece2. 
Oct 28 13:08:17.367175 containerd[1620]: time="2025-10-28T13:08:17.367133370Z" level=info msg="StartContainer for \"49738a926bbb72555d51d166c5302d7f76bdc9949f7d3b13b03f231dbfe585c1\" returns successfully" Oct 28 13:08:17.372070 containerd[1620]: time="2025-10-28T13:08:17.372024222Z" level=info msg="StartContainer for \"90064bf53f4f439221e8432abbca59c4854504c1f0f4f6142848b64f7cbaece2\" returns successfully" Oct 28 13:08:17.979974 kubelet[2789]: E1028 13:08:17.979684 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:17.981935 kubelet[2789]: E1028 13:08:17.981909 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:17.990754 kubelet[2789]: I1028 13:08:17.990589 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wgxnk" podStartSLOduration=22.990571362 podStartE2EDuration="22.990571362s" podCreationTimestamp="2025-10-28 13:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 13:08:17.988655456 +0000 UTC m=+28.228601407" watchObservedRunningTime="2025-10-28 13:08:17.990571362 +0000 UTC m=+28.230517303" Oct 28 13:08:18.011359 kubelet[2789]: I1028 13:08:18.011269 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vl478" podStartSLOduration=23.011247637 podStartE2EDuration="23.011247637s" podCreationTimestamp="2025-10-28 13:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 13:08:18.01025849 +0000 UTC m=+28.250204441" watchObservedRunningTime="2025-10-28 13:08:18.011247637 +0000 UTC m=+28.251193588" Oct 28 13:08:18.184753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3720644716.mount: Deactivated successfully. Oct 28 13:08:18.985990 kubelet[2789]: E1028 13:08:18.985521 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:19.988132 kubelet[2789]: E1028 13:08:19.988092 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:21.731686 systemd[1]: Started sshd@8-10.0.0.28:22-10.0.0.1:48460.service - OpenSSH per-connection server daemon (10.0.0.1:48460). Oct 28 13:08:21.775477 sshd[4158]: Accepted publickey for core from 10.0.0.1 port 48460 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:08:21.776680 sshd-session[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:08:21.781020 systemd-logind[1598]: New session 9 of user core. Oct 28 13:08:21.798907 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 28 13:08:21.930693 sshd[4161]: Connection closed by 10.0.0.1 port 48460 Oct 28 13:08:21.931008 sshd-session[4158]: pam_unix(sshd:session): session closed for user core Oct 28 13:08:21.934983 systemd[1]: sshd@8-10.0.0.28:22-10.0.0.1:48460.service: Deactivated successfully. Oct 28 13:08:21.936852 systemd[1]: session-9.scope: Deactivated successfully. 
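In the coredns startup-latency entries above, firstStartedPulling and lastFinishedPulling are "0001-01-01 00:00:00 +0000 UTC", Go's zero time: no image pull was needed, which is why podStartSLOduration and podStartE2EDuration are identical for both coredns pods. A tiny sentinel check:

    from datetime import datetime, timezone

    GO_ZERO_TIME = datetime(1, 1, 1, tzinfo=timezone.utc)  # "0001-01-01 00:00:00 +0000 UTC"

    def pull_window(first_started: datetime, last_finished: datetime):
        """Return the image-pull duration in seconds, or None if no pull happened."""
        if first_started == GO_ZERO_TIME:
            return None  # SLO duration == E2E duration in this case
        return (last_finished - first_started).total_seconds()

    print(pull_window(GO_ZERO_TIME, GO_ZERO_TIME))  # None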
Oct 28 13:08:21.937654 systemd-logind[1598]: Session 9 logged out. Waiting for processes to exit. Oct 28 13:08:21.938703 systemd-logind[1598]: Removed session 9. Oct 28 13:08:23.598309 kubelet[2789]: I1028 13:08:23.598208 2789 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 28 13:08:23.598932 kubelet[2789]: E1028 13:08:23.598908 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:23.996546 kubelet[2789]: E1028 13:08:23.996391 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:26.944524 systemd[1]: Started sshd@9-10.0.0.28:22-10.0.0.1:56244.service - OpenSSH per-connection server daemon (10.0.0.1:56244). Oct 28 13:08:27.012428 sshd[4178]: Accepted publickey for core from 10.0.0.1 port 56244 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:08:27.014167 sshd-session[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:08:27.018972 systemd-logind[1598]: New session 10 of user core. Oct 28 13:08:27.030967 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 28 13:08:27.151863 sshd[4181]: Connection closed by 10.0.0.1 port 56244 Oct 28 13:08:27.153052 sshd-session[4178]: pam_unix(sshd:session): session closed for user core Oct 28 13:08:27.158058 systemd[1]: sshd@9-10.0.0.28:22-10.0.0.1:56244.service: Deactivated successfully. Oct 28 13:08:27.159990 systemd[1]: session-10.scope: Deactivated successfully. Oct 28 13:08:27.160894 systemd-logind[1598]: Session 10 logged out. Waiting for processes to exit. Oct 28 13:08:27.162198 systemd-logind[1598]: Removed session 10. Oct 28 13:08:27.983186 kubelet[2789]: E1028 13:08:27.983083 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:28.013769 kubelet[2789]: E1028 13:08:28.013731 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:08:32.168607 systemd[1]: Started sshd@10-10.0.0.28:22-10.0.0.1:56250.service - OpenSSH per-connection server daemon (10.0.0.1:56250). Oct 28 13:08:32.231403 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 56250 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:08:32.232582 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:08:32.236767 systemd-logind[1598]: New session 11 of user core. Oct 28 13:08:32.251902 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 28 13:08:32.370919 sshd[4202]: Connection closed by 10.0.0.1 port 56250 Oct 28 13:08:32.371243 sshd-session[4199]: pam_unix(sshd:session): session closed for user core Oct 28 13:08:32.375350 systemd[1]: sshd@10-10.0.0.28:22-10.0.0.1:56250.service: Deactivated successfully. Oct 28 13:08:32.377236 systemd[1]: session-11.scope: Deactivated successfully. Oct 28 13:08:32.378043 systemd-logind[1598]: Session 11 logged out. Waiting for processes to exit. Oct 28 13:08:32.379222 systemd-logind[1598]: Removed session 11. 
Oct 28 13:08:37.390903 systemd[1]: Started sshd@11-10.0.0.28:22-10.0.0.1:46396.service - OpenSSH per-connection server daemon (10.0.0.1:46396). Oct 28 13:08:37.461114 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 46396 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:08:37.462880 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:08:37.468455 systemd-logind[1598]: New session 12 of user core. Oct 28 13:08:37.483104 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 28 13:08:37.610268 sshd[4220]: Connection closed by 10.0.0.1 port 46396 Oct 28 13:08:37.610761 sshd-session[4217]: pam_unix(sshd:session): session closed for user core Oct 28 13:08:37.620771 systemd[1]: sshd@11-10.0.0.28:22-10.0.0.1:46396.service: Deactivated successfully. Oct 28 13:08:37.622725 systemd[1]: session-12.scope: Deactivated successfully. Oct 28 13:08:37.623930 systemd-logind[1598]: Session 12 logged out. Waiting for processes to exit. Oct 28 13:08:37.626615 systemd[1]: Started sshd@12-10.0.0.28:22-10.0.0.1:46398.service - OpenSSH per-connection server daemon (10.0.0.1:46398). Oct 28 13:08:37.627385 systemd-logind[1598]: Removed session 12. Oct 28 13:08:37.691815 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 46398 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:08:37.693691 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:08:37.699411 systemd-logind[1598]: New session 13 of user core. Oct 28 13:08:37.707109 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 28 13:08:37.869086 sshd[4237]: Connection closed by 10.0.0.1 port 46398 Oct 28 13:08:37.869744 sshd-session[4234]: pam_unix(sshd:session): session closed for user core Oct 28 13:08:37.883162 systemd[1]: sshd@12-10.0.0.28:22-10.0.0.1:46398.service: Deactivated successfully. Oct 28 13:08:37.885761 systemd[1]: session-13.scope: Deactivated successfully. Oct 28 13:08:37.888324 systemd-logind[1598]: Session 13 logged out. Waiting for processes to exit. Oct 28 13:08:37.892646 systemd[1]: Started sshd@13-10.0.0.28:22-10.0.0.1:46402.service - OpenSSH per-connection server daemon (10.0.0.1:46402). Oct 28 13:08:37.893857 systemd-logind[1598]: Removed session 13. Oct 28 13:08:37.942088 sshd[4249]: Accepted publickey for core from 10.0.0.1 port 46402 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:08:37.943715 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:08:37.948524 systemd-logind[1598]: New session 14 of user core. Oct 28 13:08:37.962966 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 28 13:08:38.086649 sshd[4252]: Connection closed by 10.0.0.1 port 46402 Oct 28 13:08:38.086959 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Oct 28 13:08:38.092848 systemd[1]: sshd@13-10.0.0.28:22-10.0.0.1:46402.service: Deactivated successfully. Oct 28 13:08:38.094944 systemd[1]: session-14.scope: Deactivated successfully. Oct 28 13:08:38.095698 systemd-logind[1598]: Session 14 logged out. Waiting for processes to exit. Oct 28 13:08:38.096859 systemd-logind[1598]: Removed session 14. Oct 28 13:08:43.104016 systemd[1]: Started sshd@14-10.0.0.28:22-10.0.0.1:46416.service - OpenSSH per-connection server daemon (10.0.0.1:46416). 
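The remainder of the section is a run of short core-user SSH sessions (sessions 8 through 23), each following the same accept → pam_unix open → session scope → close pattern. Below is a sketch that pairs logind's "New session N" / "Removed session N" entries to get per-session durations; the journal timestamps here carry no year, so strptime defaults to 1900 and only the differences are meaningful.

    import re
    from datetime import datetime

    TS = r"(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+)"
    NEW = re.compile(TS + r" systemd-logind\[\d+\]: New session (?P<id>\d+) of user")
    REMOVED = re.compile(TS + r" systemd-logind\[\d+\]: Removed session (?P<id>\d+)\.")

    def _t(stamp: str) -> datetime:
        return datetime.strptime(stamp, "%b %d %H:%M:%S.%f")

    def session_durations(journal_text: str) -> dict:
        opened = {m["id"]: _t(m["ts"]) for m in NEW.finditer(journal_text)}
        return {m["id"]: (_t(m["ts"]) - opened[m["id"]]).total_seconds()
                for m in REMOVED.finditer(journal_text) if m["id"] in opened}

    sample = ("Oct 28 13:08:37.468455 systemd-logind[1598]: New session 12 of user core. "
              "Oct 28 13:08:37.627385 systemd-logind[1598]: Removed session 12.")
    print(session_durations(sample))  # {'12': 0.15893}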
Oct 28 13:08:43.171767 sshd[4266]: Accepted publickey for core from 10.0.0.1 port 46416 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:08:43.173857 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:08:43.179836 systemd-logind[1598]: New session 15 of user core. Oct 28 13:08:43.196078 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 28 13:08:43.323185 sshd[4269]: Connection closed by 10.0.0.1 port 46416 Oct 28 13:08:43.323529 sshd-session[4266]: pam_unix(sshd:session): session closed for user core Oct 28 13:08:43.327933 systemd[1]: sshd@14-10.0.0.28:22-10.0.0.1:46416.service: Deactivated successfully. Oct 28 13:08:43.330146 systemd[1]: session-15.scope: Deactivated successfully. Oct 28 13:08:43.331034 systemd-logind[1598]: Session 15 logged out. Waiting for processes to exit. Oct 28 13:08:43.332657 systemd-logind[1598]: Removed session 15. Oct 28 13:08:48.342656 systemd[1]: Started sshd@15-10.0.0.28:22-10.0.0.1:43578.service - OpenSSH per-connection server daemon (10.0.0.1:43578). Oct 28 13:08:48.399584 sshd[4283]: Accepted publickey for core from 10.0.0.1 port 43578 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:08:48.401444 sshd-session[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:08:48.406421 systemd-logind[1598]: New session 16 of user core. Oct 28 13:08:48.416954 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 28 13:08:48.546950 sshd[4286]: Connection closed by 10.0.0.1 port 43578 Oct 28 13:08:48.547558 sshd-session[4283]: pam_unix(sshd:session): session closed for user core Oct 28 13:08:48.563347 systemd[1]: sshd@15-10.0.0.28:22-10.0.0.1:43578.service: Deactivated successfully. Oct 28 13:08:48.566207 systemd[1]: session-16.scope: Deactivated successfully. Oct 28 13:08:48.567278 systemd-logind[1598]: Session 16 logged out. Waiting for processes to exit. Oct 28 13:08:48.572196 systemd[1]: Started sshd@16-10.0.0.28:22-10.0.0.1:43582.service - OpenSSH per-connection server daemon (10.0.0.1:43582). Oct 28 13:08:48.573026 systemd-logind[1598]: Removed session 16. Oct 28 13:08:48.626157 sshd[4299]: Accepted publickey for core from 10.0.0.1 port 43582 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:08:48.627626 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:08:48.634280 systemd-logind[1598]: New session 17 of user core. Oct 28 13:08:48.647920 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 28 13:08:49.034144 sshd[4302]: Connection closed by 10.0.0.1 port 43582 Oct 28 13:08:49.034597 sshd-session[4299]: pam_unix(sshd:session): session closed for user core Oct 28 13:08:49.047519 systemd[1]: sshd@16-10.0.0.28:22-10.0.0.1:43582.service: Deactivated successfully. Oct 28 13:08:49.049407 systemd[1]: session-17.scope: Deactivated successfully. Oct 28 13:08:49.050250 systemd-logind[1598]: Session 17 logged out. Waiting for processes to exit. Oct 28 13:08:49.052885 systemd[1]: Started sshd@17-10.0.0.28:22-10.0.0.1:43590.service - OpenSSH per-connection server daemon (10.0.0.1:43590). Oct 28 13:08:49.053560 systemd-logind[1598]: Removed session 17. 
Oct 28 13:08:49.114582 sshd[4314]: Accepted publickey for core from 10.0.0.1 port 43590 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:08:49.115828 sshd-session[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:08:49.120554 systemd-logind[1598]: New session 18 of user core. Oct 28 13:08:49.126933 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 28 13:08:50.255093 sshd[4317]: Connection closed by 10.0.0.1 port 43590 Oct 28 13:08:50.255454 sshd-session[4314]: pam_unix(sshd:session): session closed for user core Oct 28 13:08:50.269539 systemd[1]: sshd@17-10.0.0.28:22-10.0.0.1:43590.service: Deactivated successfully. Oct 28 13:08:50.272702 systemd[1]: session-18.scope: Deactivated successfully. Oct 28 13:08:50.274721 systemd-logind[1598]: Session 18 logged out. Waiting for processes to exit. Oct 28 13:08:50.277873 systemd[1]: Started sshd@18-10.0.0.28:22-10.0.0.1:43604.service - OpenSSH per-connection server daemon (10.0.0.1:43604). Oct 28 13:08:50.278555 systemd-logind[1598]: Removed session 18. Oct 28 13:08:50.337262 sshd[4338]: Accepted publickey for core from 10.0.0.1 port 43604 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:08:50.338810 sshd-session[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:08:50.343611 systemd-logind[1598]: New session 19 of user core. Oct 28 13:08:50.361921 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 28 13:08:50.668285 sshd[4341]: Connection closed by 10.0.0.1 port 43604 Oct 28 13:08:50.668826 sshd-session[4338]: pam_unix(sshd:session): session closed for user core Oct 28 13:08:50.682186 systemd[1]: sshd@18-10.0.0.28:22-10.0.0.1:43604.service: Deactivated successfully. Oct 28 13:08:50.686797 systemd[1]: session-19.scope: Deactivated successfully. Oct 28 13:08:50.687998 systemd-logind[1598]: Session 19 logged out. Waiting for processes to exit. Oct 28 13:08:50.691494 systemd[1]: Started sshd@19-10.0.0.28:22-10.0.0.1:43618.service - OpenSSH per-connection server daemon (10.0.0.1:43618). Oct 28 13:08:50.692466 systemd-logind[1598]: Removed session 19. Oct 28 13:08:50.752829 sshd[4353]: Accepted publickey for core from 10.0.0.1 port 43618 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:08:50.754536 sshd-session[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:08:50.759579 systemd-logind[1598]: New session 20 of user core. Oct 28 13:08:50.772077 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 28 13:08:50.887760 sshd[4356]: Connection closed by 10.0.0.1 port 43618 Oct 28 13:08:50.888125 sshd-session[4353]: pam_unix(sshd:session): session closed for user core Oct 28 13:08:50.892348 systemd[1]: sshd@19-10.0.0.28:22-10.0.0.1:43618.service: Deactivated successfully. Oct 28 13:08:50.894841 systemd[1]: session-20.scope: Deactivated successfully. Oct 28 13:08:50.898564 systemd-logind[1598]: Session 20 logged out. Waiting for processes to exit. Oct 28 13:08:50.899996 systemd-logind[1598]: Removed session 20. Oct 28 13:08:55.903605 systemd[1]: Started sshd@20-10.0.0.28:22-10.0.0.1:43630.service - OpenSSH per-connection server daemon (10.0.0.1:43630). 
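The journal above shows the per-connection SSH pattern repeating: systemd starts an `sshd@N-...service` unit for each inbound connection, `systemd-logind` opens `session-N.scope`, and both are deactivated seconds later when the client disconnects. A minimal sketch for summarizing that churn from journal text in the format shown above (the only assumption is the `New session N of user ...` / `Removed session N.` phrasing, and the year 2025 is assumed only so durations can be computed):

```python
import re
import sys
from datetime import datetime

# Phrasing taken from the systemd-logind entries above, e.g.
#   "Oct 28 13:08:37.468455 systemd-logind[1598]: New session 12 of user core."
NEW_RE = re.compile(r"(\w{3} +\d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) of user (\S+)\.")
END_RE = re.compile(r"(\w{3} +\d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.")

def parse_ts(stamp: str) -> datetime:
    # The journal omits the year; 2025 is assumed here purely for arithmetic.
    return datetime.strptime(f"2025 {stamp}", "%Y %b %d %H:%M:%S.%f")

def summarize(text: str) -> None:
    opened = {m.group(2): (parse_ts(m.group(1)), m.group(3)) for m in NEW_RE.finditer(text)}
    for m in END_RE.finditer(text):
        if m.group(2) in opened:
            start, user = opened[m.group(2)]
            duration = (parse_ts(m.group(1)) - start).total_seconds()
            print(f"session {m.group(2)} ({user}): {duration:.2f}s")

if __name__ == "__main__":
    summarize(sys.stdin.read())
```

Fed with the lines above, this would report sub-second lifetimes for most of sessions 12 through 20, which is consistent with short scripted commands rather than interactive logins.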
Oct 28 13:08:55.957616 sshd[4369]: Accepted publickey for core from 10.0.0.1 port 43630 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:08:55.959255 sshd-session[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:08:55.964168 systemd-logind[1598]: New session 21 of user core. Oct 28 13:08:55.973948 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 28 13:08:56.081614 sshd[4372]: Connection closed by 10.0.0.1 port 43630 Oct 28 13:08:56.081957 sshd-session[4369]: pam_unix(sshd:session): session closed for user core Oct 28 13:08:56.086947 systemd[1]: sshd@20-10.0.0.28:22-10.0.0.1:43630.service: Deactivated successfully. Oct 28 13:08:56.088812 systemd[1]: session-21.scope: Deactivated successfully. Oct 28 13:08:56.089601 systemd-logind[1598]: Session 21 logged out. Waiting for processes to exit. Oct 28 13:08:56.090695 systemd-logind[1598]: Removed session 21. Oct 28 13:09:01.105249 systemd[1]: Started sshd@21-10.0.0.28:22-10.0.0.1:46694.service - OpenSSH per-connection server daemon (10.0.0.1:46694). Oct 28 13:09:01.165996 sshd[4390]: Accepted publickey for core from 10.0.0.1 port 46694 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:09:01.167636 sshd-session[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:09:01.172090 systemd-logind[1598]: New session 22 of user core. Oct 28 13:09:01.176927 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 28 13:09:01.284338 sshd[4393]: Connection closed by 10.0.0.1 port 46694 Oct 28 13:09:01.284686 sshd-session[4390]: pam_unix(sshd:session): session closed for user core Oct 28 13:09:01.288744 systemd[1]: sshd@21-10.0.0.28:22-10.0.0.1:46694.service: Deactivated successfully. Oct 28 13:09:01.290744 systemd[1]: session-22.scope: Deactivated successfully. Oct 28 13:09:01.291648 systemd-logind[1598]: Session 22 logged out. Waiting for processes to exit. Oct 28 13:09:01.292953 systemd-logind[1598]: Removed session 22. Oct 28 13:09:04.855408 kubelet[2789]: E1028 13:09:04.855352 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:09:06.296185 systemd[1]: Started sshd@22-10.0.0.28:22-10.0.0.1:45654.service - OpenSSH per-connection server daemon (10.0.0.1:45654). Oct 28 13:09:06.348874 sshd[4406]: Accepted publickey for core from 10.0.0.1 port 45654 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:09:06.350487 sshd-session[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:09:06.354978 systemd-logind[1598]: New session 23 of user core. Oct 28 13:09:06.365909 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 28 13:09:06.535462 sshd[4409]: Connection closed by 10.0.0.1 port 45654 Oct 28 13:09:06.535922 sshd-session[4406]: pam_unix(sshd:session): session closed for user core Oct 28 13:09:06.547263 systemd[1]: sshd@22-10.0.0.28:22-10.0.0.1:45654.service: Deactivated successfully. Oct 28 13:09:06.548966 systemd[1]: session-23.scope: Deactivated successfully. Oct 28 13:09:06.549677 systemd-logind[1598]: Session 23 logged out. Waiting for processes to exit. Oct 28 13:09:06.552143 systemd[1]: Started sshd@23-10.0.0.28:22-10.0.0.1:45656.service - OpenSSH per-connection server daemon (10.0.0.1:45656). Oct 28 13:09:06.553117 systemd-logind[1598]: Removed session 23. 
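The recurring kubelet warning `Nameserver limits were exceeded` means the node's resolv.conf lists more nameservers than the kubelet will propagate to pods; only the first entries are applied (here 1.1.1.1, 1.0.0.1 and 8.8.8.8). A small local check, assuming only a standard `/etc/resolv.conf` layout, can flag the condition before pods inherit a truncated list:

```python
from pathlib import Path

# Three is the limit implied by the kubelet warning above, which applied
# exactly three nameservers and omitted the rest.
MAX_NAMESERVERS = 3

def check_resolv_conf(path: str = "/etc/resolv.conf") -> None:
    servers = [
        parts[1]
        for line in Path(path).read_text().splitlines()
        if line.strip().startswith("nameserver") and len(parts := line.split()) > 1
    ]
    if len(servers) > MAX_NAMESERVERS:
        kept, dropped = servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]
        print(f"warning: {len(servers)} nameservers configured; kept {kept}, dropped {dropped}")
    else:
        print(f"ok: {servers}")

if __name__ == "__main__":
    check_resolv_conf()
```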
Oct 28 13:09:06.607744 sshd[4422]: Accepted publickey for core from 10.0.0.1 port 45656 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:09:06.609765 sshd-session[4422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:09:06.614129 systemd-logind[1598]: New session 24 of user core. Oct 28 13:09:06.622908 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 28 13:09:08.221075 containerd[1620]: time="2025-10-28T13:09:08.220999798Z" level=info msg="StopContainer for \"5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4\" with timeout 30 (s)" Oct 28 13:09:08.221572 containerd[1620]: time="2025-10-28T13:09:08.221547579Z" level=info msg="Stop container \"5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4\" with signal terminated" Oct 28 13:09:08.240121 systemd[1]: cri-containerd-5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4.scope: Deactivated successfully. Oct 28 13:09:08.243359 containerd[1620]: time="2025-10-28T13:09:08.243308619Z" level=info msg="received exit event container_id:\"5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4\" id:\"5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4\" pid:3375 exited_at:{seconds:1761656948 nanos:241080385}" Oct 28 13:09:08.255817 containerd[1620]: time="2025-10-28T13:09:08.255707060Z" level=info msg="StopContainer for \"654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00\" with timeout 2 (s)" Oct 28 13:09:08.256285 containerd[1620]: time="2025-10-28T13:09:08.256248099Z" level=info msg="Stop container \"654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00\" with signal terminated" Oct 28 13:09:08.257804 containerd[1620]: time="2025-10-28T13:09:08.257674665Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 28 13:09:08.265148 systemd-networkd[1522]: lxc_health: Link DOWN Oct 28 13:09:08.265160 systemd-networkd[1522]: lxc_health: Lost carrier Oct 28 13:09:08.281692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4-rootfs.mount: Deactivated successfully. Oct 28 13:09:08.291454 systemd[1]: cri-containerd-654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00.scope: Deactivated successfully. Oct 28 13:09:08.292125 systemd[1]: cri-containerd-654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00.scope: Consumed 6.601s CPU time, 124.4M memory peak, 360K read from disk, 13.3M written to disk. 
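The containerd entries above show the standard graceful-stop flow: `StopContainer ... with timeout 30 (s)`, a SIGTERM (`signal terminated`), and finally systemd deactivating the `cri-containerd-*.scope` with its accumulated CPU and memory accounting. The same request can be issued by hand through the CRI; the sketch below uses `crictl`, assuming it is installed and pointed at the containerd socket, and the truncated container ID is a placeholder:

```python
import subprocess

def stop_container(container_id: str, grace_seconds: int = 30) -> None:
    """Ask the CRI runtime to stop a container, mirroring the
    SIGTERM-then-escalate flow recorded in the log above."""
    subprocess.run(
        ["crictl", "stop", "--timeout", str(grace_seconds), container_id],
        check=True,
    )

if __name__ == "__main__":
    # Placeholder / truncated ID; substitute a real one from `crictl ps`.
    stop_container("654b34d45d06", grace_seconds=30)
```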
Oct 28 13:09:08.292318 containerd[1620]: time="2025-10-28T13:09:08.292195719Z" level=info msg="received exit event container_id:\"654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00\" id:\"654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00\" pid:3459 exited_at:{seconds:1761656948 nanos:291893209}" Oct 28 13:09:08.298425 containerd[1620]: time="2025-10-28T13:09:08.298228912Z" level=info msg="StopContainer for \"5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4\" returns successfully" Oct 28 13:09:08.299104 containerd[1620]: time="2025-10-28T13:09:08.299075125Z" level=info msg="StopPodSandbox for \"280b77e1babea74bbac916ebf1fef342ded3bfb352f38f5095a6c271211baaad\"" Oct 28 13:09:08.299184 containerd[1620]: time="2025-10-28T13:09:08.299142474Z" level=info msg="Container to stop \"5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 28 13:09:08.308576 systemd[1]: cri-containerd-280b77e1babea74bbac916ebf1fef342ded3bfb352f38f5095a6c271211baaad.scope: Deactivated successfully. Oct 28 13:09:08.323063 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00-rootfs.mount: Deactivated successfully. Oct 28 13:09:08.344551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-280b77e1babea74bbac916ebf1fef342ded3bfb352f38f5095a6c271211baaad-rootfs.mount: Deactivated successfully. Oct 28 13:09:08.355898 containerd[1620]: time="2025-10-28T13:09:08.355858361Z" level=info msg="shim disconnected" id=280b77e1babea74bbac916ebf1fef342ded3bfb352f38f5095a6c271211baaad namespace=k8s.io Oct 28 13:09:08.356127 containerd[1620]: time="2025-10-28T13:09:08.356091038Z" level=info msg="cleaning up after shim disconnected" id=280b77e1babea74bbac916ebf1fef342ded3bfb352f38f5095a6c271211baaad namespace=k8s.io Oct 28 13:09:08.366406 containerd[1620]: time="2025-10-28T13:09:08.356110746Z" level=info msg="cleaning up dead shim" id=280b77e1babea74bbac916ebf1fef342ded3bfb352f38f5095a6c271211baaad namespace=k8s.io Oct 28 13:09:08.366480 containerd[1620]: time="2025-10-28T13:09:08.357410419Z" level=info msg="StopContainer for \"654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00\" returns successfully" Oct 28 13:09:08.368102 containerd[1620]: time="2025-10-28T13:09:08.368052301Z" level=info msg="StopPodSandbox for \"6d5279b39555c0f404db5f9b9ced54359978475e5197717fe09e5e36909ae0c6\"" Oct 28 13:09:08.368164 containerd[1620]: time="2025-10-28T13:09:08.368148285Z" level=info msg="Container to stop \"80ea87898e44450877759e8e8ee2b01e3d53af7e5e2169a1a82402abb5afb444\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 28 13:09:08.368195 containerd[1620]: time="2025-10-28T13:09:08.368166901Z" level=info msg="Container to stop \"654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 28 13:09:08.368195 containerd[1620]: time="2025-10-28T13:09:08.368178463Z" level=info msg="Container to stop \"6d83f8566aa2658167241a6121205e4941cd32470805d1e228192f5df1a45110\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 28 13:09:08.368195 containerd[1620]: time="2025-10-28T13:09:08.368187600Z" level=info msg="Container to stop \"3df9870274f34f73fd5b3703df62a4377d9e02c2acb8752a0807443ff45fcb23\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 28 13:09:08.368269 
containerd[1620]: time="2025-10-28T13:09:08.368195847Z" level=info msg="Container to stop \"e0ce7ab7c0f38213c858a93536bdcb0729c99faf4c711d1e74ab02ff82798b13\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 28 13:09:08.376621 systemd[1]: cri-containerd-6d5279b39555c0f404db5f9b9ced54359978475e5197717fe09e5e36909ae0c6.scope: Deactivated successfully. Oct 28 13:09:08.395378 containerd[1620]: time="2025-10-28T13:09:08.395325705Z" level=info msg="TearDown network for sandbox \"280b77e1babea74bbac916ebf1fef342ded3bfb352f38f5095a6c271211baaad\" successfully" Oct 28 13:09:08.395516 containerd[1620]: time="2025-10-28T13:09:08.395387994Z" level=info msg="StopPodSandbox for \"280b77e1babea74bbac916ebf1fef342ded3bfb352f38f5095a6c271211baaad\" returns successfully" Oct 28 13:09:08.396850 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-280b77e1babea74bbac916ebf1fef342ded3bfb352f38f5095a6c271211baaad-shm.mount: Deactivated successfully. Oct 28 13:09:08.400623 containerd[1620]: time="2025-10-28T13:09:08.399904126Z" level=info msg="received exit event sandbox_id:\"280b77e1babea74bbac916ebf1fef342ded3bfb352f38f5095a6c271211baaad\" exit_status:137 exited_at:{seconds:1761656948 nanos:310300066}" Oct 28 13:09:08.415625 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d5279b39555c0f404db5f9b9ced54359978475e5197717fe09e5e36909ae0c6-rootfs.mount: Deactivated successfully. Oct 28 13:09:08.417584 containerd[1620]: time="2025-10-28T13:09:08.417545685Z" level=info msg="shim disconnected" id=6d5279b39555c0f404db5f9b9ced54359978475e5197717fe09e5e36909ae0c6 namespace=k8s.io Oct 28 13:09:08.417584 containerd[1620]: time="2025-10-28T13:09:08.417582846Z" level=info msg="cleaning up after shim disconnected" id=6d5279b39555c0f404db5f9b9ced54359978475e5197717fe09e5e36909ae0c6 namespace=k8s.io Oct 28 13:09:08.417738 containerd[1620]: time="2025-10-28T13:09:08.417596102Z" level=info msg="cleaning up dead shim" id=6d5279b39555c0f404db5f9b9ced54359978475e5197717fe09e5e36909ae0c6 namespace=k8s.io Oct 28 13:09:08.431271 containerd[1620]: time="2025-10-28T13:09:08.431207179Z" level=info msg="received exit event sandbox_id:\"6d5279b39555c0f404db5f9b9ced54359978475e5197717fe09e5e36909ae0c6\" exit_status:137 exited_at:{seconds:1761656948 nanos:383324445}" Oct 28 13:09:08.431570 containerd[1620]: time="2025-10-28T13:09:08.431540879Z" level=info msg="TearDown network for sandbox \"6d5279b39555c0f404db5f9b9ced54359978475e5197717fe09e5e36909ae0c6\" successfully" Oct 28 13:09:08.431601 containerd[1620]: time="2025-10-28T13:09:08.431581838Z" level=info msg="StopPodSandbox for \"6d5279b39555c0f404db5f9b9ced54359978475e5197717fe09e5e36909ae0c6\" returns successfully" Oct 28 13:09:08.440998 kubelet[2789]: I1028 13:09:08.440961 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/55d41802-eff1-4994-a239-a9e0d26953df-cilium-config-path\") pod \"55d41802-eff1-4994-a239-a9e0d26953df\" (UID: \"55d41802-eff1-4994-a239-a9e0d26953df\") " Oct 28 13:09:08.441801 kubelet[2789]: I1028 13:09:08.441012 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qddgf\" (UniqueName: \"kubernetes.io/projected/55d41802-eff1-4994-a239-a9e0d26953df-kube-api-access-qddgf\") pod \"55d41802-eff1-4994-a239-a9e0d26953df\" (UID: \"55d41802-eff1-4994-a239-a9e0d26953df\") " Oct 28 13:09:08.445498 kubelet[2789]: I1028 13:09:08.445076 2789 operation_generator.go:781] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55d41802-eff1-4994-a239-a9e0d26953df-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "55d41802-eff1-4994-a239-a9e0d26953df" (UID: "55d41802-eff1-4994-a239-a9e0d26953df"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 28 13:09:08.447942 kubelet[2789]: I1028 13:09:08.447904 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55d41802-eff1-4994-a239-a9e0d26953df-kube-api-access-qddgf" (OuterVolumeSpecName: "kube-api-access-qddgf") pod "55d41802-eff1-4994-a239-a9e0d26953df" (UID: "55d41802-eff1-4994-a239-a9e0d26953df"). InnerVolumeSpecName "kube-api-access-qddgf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 28 13:09:08.542199 kubelet[2789]: I1028 13:09:08.542153 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppw6d\" (UniqueName: \"kubernetes.io/projected/d4897ced-5a6a-4744-b3de-17f0816f0e4a-kube-api-access-ppw6d\") pod \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " Oct 28 13:09:08.542199 kubelet[2789]: I1028 13:09:08.542201 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-lib-modules\") pod \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " Oct 28 13:09:08.542371 kubelet[2789]: I1028 13:09:08.542221 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-host-proc-sys-kernel\") pod \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " Oct 28 13:09:08.542371 kubelet[2789]: I1028 13:09:08.542246 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d4897ced-5a6a-4744-b3de-17f0816f0e4a-clustermesh-secrets\") pod \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " Oct 28 13:09:08.542371 kubelet[2789]: I1028 13:09:08.542268 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4897ced-5a6a-4744-b3de-17f0816f0e4a-cilium-config-path\") pod \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " Oct 28 13:09:08.542371 kubelet[2789]: I1028 13:09:08.542304 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d4897ced-5a6a-4744-b3de-17f0816f0e4a" (UID: "d4897ced-5a6a-4744-b3de-17f0816f0e4a"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 28 13:09:08.542371 kubelet[2789]: I1028 13:09:08.542286 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-xtables-lock\") pod \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " Oct 28 13:09:08.542371 kubelet[2789]: I1028 13:09:08.542353 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-cilium-cgroup\") pod \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " Oct 28 13:09:08.542568 kubelet[2789]: I1028 13:09:08.542383 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d4897ced-5a6a-4744-b3de-17f0816f0e4a" (UID: "d4897ced-5a6a-4744-b3de-17f0816f0e4a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 28 13:09:08.542568 kubelet[2789]: I1028 13:09:08.542409 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d4897ced-5a6a-4744-b3de-17f0816f0e4a" (UID: "d4897ced-5a6a-4744-b3de-17f0816f0e4a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 28 13:09:08.543585 kubelet[2789]: I1028 13:09:08.542959 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d4897ced-5a6a-4744-b3de-17f0816f0e4a" (UID: "d4897ced-5a6a-4744-b3de-17f0816f0e4a"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 28 13:09:08.543585 kubelet[2789]: I1028 13:09:08.543024 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-etc-cni-netd\") pod \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " Oct 28 13:09:08.543585 kubelet[2789]: I1028 13:09:08.543050 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-bpf-maps\") pod \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " Oct 28 13:09:08.543585 kubelet[2789]: I1028 13:09:08.543071 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-hostproc\") pod \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " Oct 28 13:09:08.543585 kubelet[2789]: I1028 13:09:08.543090 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-cilium-run\") pod \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " Oct 28 13:09:08.543585 kubelet[2789]: I1028 13:09:08.543106 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-host-proc-sys-net\") pod \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " Oct 28 13:09:08.543853 kubelet[2789]: I1028 13:09:08.543130 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d4897ced-5a6a-4744-b3de-17f0816f0e4a-hubble-tls\") pod \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " Oct 28 13:09:08.543853 kubelet[2789]: I1028 13:09:08.543149 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-cni-path\") pod \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\" (UID: \"d4897ced-5a6a-4744-b3de-17f0816f0e4a\") " Oct 28 13:09:08.543853 kubelet[2789]: I1028 13:09:08.543191 2789 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 28 13:09:08.543853 kubelet[2789]: I1028 13:09:08.543220 2789 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Oct 28 13:09:08.543853 kubelet[2789]: I1028 13:09:08.543232 2789 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qddgf\" (UniqueName: \"kubernetes.io/projected/55d41802-eff1-4994-a239-a9e0d26953df-kube-api-access-qddgf\") on node \"localhost\" DevicePath \"\"" Oct 28 13:09:08.543853 kubelet[2789]: I1028 13:09:08.543243 2789 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/55d41802-eff1-4994-a239-a9e0d26953df-cilium-config-path\") on node \"localhost\" 
DevicePath \"\"" Oct 28 13:09:08.543853 kubelet[2789]: I1028 13:09:08.543250 2789 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 28 13:09:08.543853 kubelet[2789]: I1028 13:09:08.543258 2789 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Oct 28 13:09:08.544088 kubelet[2789]: I1028 13:09:08.543277 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-cni-path" (OuterVolumeSpecName: "cni-path") pod "d4897ced-5a6a-4744-b3de-17f0816f0e4a" (UID: "d4897ced-5a6a-4744-b3de-17f0816f0e4a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 28 13:09:08.544088 kubelet[2789]: I1028 13:09:08.543292 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d4897ced-5a6a-4744-b3de-17f0816f0e4a" (UID: "d4897ced-5a6a-4744-b3de-17f0816f0e4a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 28 13:09:08.544088 kubelet[2789]: I1028 13:09:08.543306 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d4897ced-5a6a-4744-b3de-17f0816f0e4a" (UID: "d4897ced-5a6a-4744-b3de-17f0816f0e4a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 28 13:09:08.544088 kubelet[2789]: I1028 13:09:08.543322 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-hostproc" (OuterVolumeSpecName: "hostproc") pod "d4897ced-5a6a-4744-b3de-17f0816f0e4a" (UID: "d4897ced-5a6a-4744-b3de-17f0816f0e4a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 28 13:09:08.544088 kubelet[2789]: I1028 13:09:08.543337 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d4897ced-5a6a-4744-b3de-17f0816f0e4a" (UID: "d4897ced-5a6a-4744-b3de-17f0816f0e4a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 28 13:09:08.544245 kubelet[2789]: I1028 13:09:08.543350 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d4897ced-5a6a-4744-b3de-17f0816f0e4a" (UID: "d4897ced-5a6a-4744-b3de-17f0816f0e4a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 28 13:09:08.545483 kubelet[2789]: I1028 13:09:08.545456 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4897ced-5a6a-4744-b3de-17f0816f0e4a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d4897ced-5a6a-4744-b3de-17f0816f0e4a" (UID: "d4897ced-5a6a-4744-b3de-17f0816f0e4a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 28 13:09:08.547038 kubelet[2789]: I1028 13:09:08.547011 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4897ced-5a6a-4744-b3de-17f0816f0e4a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d4897ced-5a6a-4744-b3de-17f0816f0e4a" (UID: "d4897ced-5a6a-4744-b3de-17f0816f0e4a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 28 13:09:08.547165 kubelet[2789]: I1028 13:09:08.547085 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4897ced-5a6a-4744-b3de-17f0816f0e4a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d4897ced-5a6a-4744-b3de-17f0816f0e4a" (UID: "d4897ced-5a6a-4744-b3de-17f0816f0e4a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 28 13:09:08.547206 kubelet[2789]: I1028 13:09:08.547152 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4897ced-5a6a-4744-b3de-17f0816f0e4a-kube-api-access-ppw6d" (OuterVolumeSpecName: "kube-api-access-ppw6d") pod "d4897ced-5a6a-4744-b3de-17f0816f0e4a" (UID: "d4897ced-5a6a-4744-b3de-17f0816f0e4a"). InnerVolumeSpecName "kube-api-access-ppw6d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 28 13:09:08.644278 kubelet[2789]: I1028 13:09:08.644226 2789 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-cni-path\") on node \"localhost\" DevicePath \"\"" Oct 28 13:09:08.644278 kubelet[2789]: I1028 13:09:08.644271 2789 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ppw6d\" (UniqueName: \"kubernetes.io/projected/d4897ced-5a6a-4744-b3de-17f0816f0e4a-kube-api-access-ppw6d\") on node \"localhost\" DevicePath \"\"" Oct 28 13:09:08.644278 kubelet[2789]: I1028 13:09:08.644287 2789 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d4897ced-5a6a-4744-b3de-17f0816f0e4a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 28 13:09:08.644500 kubelet[2789]: I1028 13:09:08.644297 2789 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4897ced-5a6a-4744-b3de-17f0816f0e4a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 28 13:09:08.644500 kubelet[2789]: I1028 13:09:08.644310 2789 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Oct 28 13:09:08.644500 kubelet[2789]: I1028 13:09:08.644323 2789 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 28 13:09:08.644500 kubelet[2789]: I1028 13:09:08.644332 2789 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-hostproc\") on node \"localhost\" DevicePath \"\"" Oct 28 13:09:08.644500 kubelet[2789]: I1028 13:09:08.644342 2789 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 28 
13:09:08.644500 kubelet[2789]: I1028 13:09:08.644352 2789 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d4897ced-5a6a-4744-b3de-17f0816f0e4a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Oct 28 13:09:08.644500 kubelet[2789]: I1028 13:09:08.644361 2789 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d4897ced-5a6a-4744-b3de-17f0816f0e4a-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 28 13:09:09.106773 kubelet[2789]: I1028 13:09:09.106741 2789 scope.go:117] "RemoveContainer" containerID="5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4" Oct 28 13:09:09.109002 containerd[1620]: time="2025-10-28T13:09:09.108968895Z" level=info msg="RemoveContainer for \"5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4\"" Oct 28 13:09:09.114250 systemd[1]: Removed slice kubepods-besteffort-pod55d41802_eff1_4994_a239_a9e0d26953df.slice - libcontainer container kubepods-besteffort-pod55d41802_eff1_4994_a239_a9e0d26953df.slice. Oct 28 13:09:09.116299 containerd[1620]: time="2025-10-28T13:09:09.116251026Z" level=info msg="RemoveContainer for \"5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4\" returns successfully" Oct 28 13:09:09.116934 kubelet[2789]: I1028 13:09:09.116910 2789 scope.go:117] "RemoveContainer" containerID="5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4" Oct 28 13:09:09.117388 containerd[1620]: time="2025-10-28T13:09:09.117320867Z" level=error msg="ContainerStatus for \"5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4\": not found" Oct 28 13:09:09.119835 systemd[1]: Removed slice kubepods-burstable-podd4897ced_5a6a_4744_b3de_17f0816f0e4a.slice - libcontainer container kubepods-burstable-podd4897ced_5a6a_4744_b3de_17f0816f0e4a.slice. Oct 28 13:09:09.120320 kubelet[2789]: E1028 13:09:09.120290 2789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4\": not found" containerID="5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4" Oct 28 13:09:09.120383 kubelet[2789]: I1028 13:09:09.120337 2789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4"} err="failed to get container status \"5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4\": rpc error: code = NotFound desc = an error occurred when try to find container \"5abf3b37b7386ef64bae0b04461fca1d7723faa6e3f65722b3f1ddfea1478ae4\": not found" Oct 28 13:09:09.120383 kubelet[2789]: I1028 13:09:09.120373 2789 scope.go:117] "RemoveContainer" containerID="654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00" Oct 28 13:09:09.120546 systemd[1]: kubepods-burstable-podd4897ced_5a6a_4744_b3de_17f0816f0e4a.slice: Consumed 6.716s CPU time, 124.7M memory peak, 368K read from disk, 13.3M written to disk. 
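The reconciler entries above come in matched sets for each volume of the two deleted pods: `UnmountVolume started`, then `UnmountVolume.TearDown succeeded`, then a `Volume detached ... DevicePath ""` record. A quick way to confirm that every started unmount was eventually detached, assuming only the quoted kubelet log phrasing:

```python
import re
import sys

# The kubelet lines quote volume names, sometimes with escaped quotes
# (\"cilium-config-path\"), so the backslashes are optional in the patterns.
STARTED = re.compile(r'UnmountVolume started for volume \\?"([^"\\]+)\\?"')
DETACHED = re.compile(r'Volume detached for volume \\?"([^"\\]+)\\?"')

def audit(text: str) -> None:
    started = set(STARTED.findall(text))
    detached = set(DETACHED.findall(text))
    print(f"{len(started)} unmounts started, {len(detached)} volumes detached")
    missing = started - detached
    if missing:
        print("still attached:", ", ".join(sorted(missing)))

if __name__ == "__main__":
    audit(sys.stdin.read())
```

Run against the excerpt above it should report no leftovers: every Cilium host-path, secret, configmap and projected volume that began teardown also shows a matching detach.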
Oct 28 13:09:09.122804 containerd[1620]: time="2025-10-28T13:09:09.122288881Z" level=info msg="RemoveContainer for \"654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00\"" Oct 28 13:09:09.126997 containerd[1620]: time="2025-10-28T13:09:09.126964415Z" level=info msg="RemoveContainer for \"654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00\" returns successfully" Oct 28 13:09:09.127163 kubelet[2789]: I1028 13:09:09.127136 2789 scope.go:117] "RemoveContainer" containerID="80ea87898e44450877759e8e8ee2b01e3d53af7e5e2169a1a82402abb5afb444" Oct 28 13:09:09.128396 containerd[1620]: time="2025-10-28T13:09:09.128366744Z" level=info msg="RemoveContainer for \"80ea87898e44450877759e8e8ee2b01e3d53af7e5e2169a1a82402abb5afb444\"" Oct 28 13:09:09.133187 containerd[1620]: time="2025-10-28T13:09:09.133145906Z" level=info msg="RemoveContainer for \"80ea87898e44450877759e8e8ee2b01e3d53af7e5e2169a1a82402abb5afb444\" returns successfully" Oct 28 13:09:09.133363 kubelet[2789]: I1028 13:09:09.133315 2789 scope.go:117] "RemoveContainer" containerID="e0ce7ab7c0f38213c858a93536bdcb0729c99faf4c711d1e74ab02ff82798b13" Oct 28 13:09:09.135858 containerd[1620]: time="2025-10-28T13:09:09.135824160Z" level=info msg="RemoveContainer for \"e0ce7ab7c0f38213c858a93536bdcb0729c99faf4c711d1e74ab02ff82798b13\"" Oct 28 13:09:09.141235 containerd[1620]: time="2025-10-28T13:09:09.141196640Z" level=info msg="RemoveContainer for \"e0ce7ab7c0f38213c858a93536bdcb0729c99faf4c711d1e74ab02ff82798b13\" returns successfully" Oct 28 13:09:09.141424 kubelet[2789]: I1028 13:09:09.141399 2789 scope.go:117] "RemoveContainer" containerID="3df9870274f34f73fd5b3703df62a4377d9e02c2acb8752a0807443ff45fcb23" Oct 28 13:09:09.144129 containerd[1620]: time="2025-10-28T13:09:09.144089847Z" level=info msg="RemoveContainer for \"3df9870274f34f73fd5b3703df62a4377d9e02c2acb8752a0807443ff45fcb23\"" Oct 28 13:09:09.149219 containerd[1620]: time="2025-10-28T13:09:09.149177661Z" level=info msg="RemoveContainer for \"3df9870274f34f73fd5b3703df62a4377d9e02c2acb8752a0807443ff45fcb23\" returns successfully" Oct 28 13:09:09.149400 kubelet[2789]: I1028 13:09:09.149369 2789 scope.go:117] "RemoveContainer" containerID="6d83f8566aa2658167241a6121205e4941cd32470805d1e228192f5df1a45110" Oct 28 13:09:09.150803 containerd[1620]: time="2025-10-28T13:09:09.150481089Z" level=info msg="RemoveContainer for \"6d83f8566aa2658167241a6121205e4941cd32470805d1e228192f5df1a45110\"" Oct 28 13:09:09.154274 containerd[1620]: time="2025-10-28T13:09:09.154242210Z" level=info msg="RemoveContainer for \"6d83f8566aa2658167241a6121205e4941cd32470805d1e228192f5df1a45110\" returns successfully" Oct 28 13:09:09.154388 kubelet[2789]: I1028 13:09:09.154364 2789 scope.go:117] "RemoveContainer" containerID="654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00" Oct 28 13:09:09.154575 containerd[1620]: time="2025-10-28T13:09:09.154540341Z" level=error msg="ContainerStatus for \"654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00\": not found" Oct 28 13:09:09.154704 kubelet[2789]: E1028 13:09:09.154679 2789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00\": not found" 
containerID="654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00" Oct 28 13:09:09.154762 kubelet[2789]: I1028 13:09:09.154706 2789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00"} err="failed to get container status \"654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00\": rpc error: code = NotFound desc = an error occurred when try to find container \"654b34d45d06d94ec4acab61e9f2a8924fd124e423f63176cd05cad87a909a00\": not found" Oct 28 13:09:09.154762 kubelet[2789]: I1028 13:09:09.154724 2789 scope.go:117] "RemoveContainer" containerID="80ea87898e44450877759e8e8ee2b01e3d53af7e5e2169a1a82402abb5afb444" Oct 28 13:09:09.154933 containerd[1620]: time="2025-10-28T13:09:09.154888599Z" level=error msg="ContainerStatus for \"80ea87898e44450877759e8e8ee2b01e3d53af7e5e2169a1a82402abb5afb444\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"80ea87898e44450877759e8e8ee2b01e3d53af7e5e2169a1a82402abb5afb444\": not found" Oct 28 13:09:09.155027 kubelet[2789]: E1028 13:09:09.155006 2789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"80ea87898e44450877759e8e8ee2b01e3d53af7e5e2169a1a82402abb5afb444\": not found" containerID="80ea87898e44450877759e8e8ee2b01e3d53af7e5e2169a1a82402abb5afb444" Oct 28 13:09:09.155065 kubelet[2789]: I1028 13:09:09.155033 2789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"80ea87898e44450877759e8e8ee2b01e3d53af7e5e2169a1a82402abb5afb444"} err="failed to get container status \"80ea87898e44450877759e8e8ee2b01e3d53af7e5e2169a1a82402abb5afb444\": rpc error: code = NotFound desc = an error occurred when try to find container \"80ea87898e44450877759e8e8ee2b01e3d53af7e5e2169a1a82402abb5afb444\": not found" Oct 28 13:09:09.155065 kubelet[2789]: I1028 13:09:09.155052 2789 scope.go:117] "RemoveContainer" containerID="e0ce7ab7c0f38213c858a93536bdcb0729c99faf4c711d1e74ab02ff82798b13" Oct 28 13:09:09.155213 containerd[1620]: time="2025-10-28T13:09:09.155186771Z" level=error msg="ContainerStatus for \"e0ce7ab7c0f38213c858a93536bdcb0729c99faf4c711d1e74ab02ff82798b13\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e0ce7ab7c0f38213c858a93536bdcb0729c99faf4c711d1e74ab02ff82798b13\": not found" Oct 28 13:09:09.155316 kubelet[2789]: E1028 13:09:09.155298 2789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e0ce7ab7c0f38213c858a93536bdcb0729c99faf4c711d1e74ab02ff82798b13\": not found" containerID="e0ce7ab7c0f38213c858a93536bdcb0729c99faf4c711d1e74ab02ff82798b13" Oct 28 13:09:09.155370 kubelet[2789]: I1028 13:09:09.155318 2789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e0ce7ab7c0f38213c858a93536bdcb0729c99faf4c711d1e74ab02ff82798b13"} err="failed to get container status \"e0ce7ab7c0f38213c858a93536bdcb0729c99faf4c711d1e74ab02ff82798b13\": rpc error: code = NotFound desc = an error occurred when try to find container \"e0ce7ab7c0f38213c858a93536bdcb0729c99faf4c711d1e74ab02ff82798b13\": not found" Oct 28 13:09:09.155370 kubelet[2789]: I1028 13:09:09.155333 2789 scope.go:117] "RemoveContainer" 
containerID="3df9870274f34f73fd5b3703df62a4377d9e02c2acb8752a0807443ff45fcb23" Oct 28 13:09:09.155511 containerd[1620]: time="2025-10-28T13:09:09.155479311Z" level=error msg="ContainerStatus for \"3df9870274f34f73fd5b3703df62a4377d9e02c2acb8752a0807443ff45fcb23\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3df9870274f34f73fd5b3703df62a4377d9e02c2acb8752a0807443ff45fcb23\": not found" Oct 28 13:09:09.155591 kubelet[2789]: E1028 13:09:09.155575 2789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3df9870274f34f73fd5b3703df62a4377d9e02c2acb8752a0807443ff45fcb23\": not found" containerID="3df9870274f34f73fd5b3703df62a4377d9e02c2acb8752a0807443ff45fcb23" Oct 28 13:09:09.155629 kubelet[2789]: I1028 13:09:09.155591 2789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3df9870274f34f73fd5b3703df62a4377d9e02c2acb8752a0807443ff45fcb23"} err="failed to get container status \"3df9870274f34f73fd5b3703df62a4377d9e02c2acb8752a0807443ff45fcb23\": rpc error: code = NotFound desc = an error occurred when try to find container \"3df9870274f34f73fd5b3703df62a4377d9e02c2acb8752a0807443ff45fcb23\": not found" Oct 28 13:09:09.155629 kubelet[2789]: I1028 13:09:09.155601 2789 scope.go:117] "RemoveContainer" containerID="6d83f8566aa2658167241a6121205e4941cd32470805d1e228192f5df1a45110" Oct 28 13:09:09.155749 containerd[1620]: time="2025-10-28T13:09:09.155725674Z" level=error msg="ContainerStatus for \"6d83f8566aa2658167241a6121205e4941cd32470805d1e228192f5df1a45110\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6d83f8566aa2658167241a6121205e4941cd32470805d1e228192f5df1a45110\": not found" Oct 28 13:09:09.155863 kubelet[2789]: E1028 13:09:09.155842 2789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d83f8566aa2658167241a6121205e4941cd32470805d1e228192f5df1a45110\": not found" containerID="6d83f8566aa2658167241a6121205e4941cd32470805d1e228192f5df1a45110" Oct 28 13:09:09.155901 kubelet[2789]: I1028 13:09:09.155866 2789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6d83f8566aa2658167241a6121205e4941cd32470805d1e228192f5df1a45110"} err="failed to get container status \"6d83f8566aa2658167241a6121205e4941cd32470805d1e228192f5df1a45110\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d83f8566aa2658167241a6121205e4941cd32470805d1e228192f5df1a45110\": not found" Oct 28 13:09:09.280722 systemd[1]: var-lib-kubelet-pods-55d41802\x2deff1\x2d4994\x2da239\x2da9e0d26953df-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqddgf.mount: Deactivated successfully. Oct 28 13:09:09.280877 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d5279b39555c0f404db5f9b9ced54359978475e5197717fe09e5e36909ae0c6-shm.mount: Deactivated successfully. Oct 28 13:09:09.280960 systemd[1]: var-lib-kubelet-pods-d4897ced\x2d5a6a\x2d4744\x2db3de\x2d17f0816f0e4a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dppw6d.mount: Deactivated successfully. Oct 28 13:09:09.281034 systemd[1]: var-lib-kubelet-pods-d4897ced\x2d5a6a\x2d4744\x2db3de\x2d17f0816f0e4a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Oct 28 13:09:09.281107 systemd[1]: var-lib-kubelet-pods-d4897ced\x2d5a6a\x2d4744\x2db3de\x2d17f0816f0e4a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 28 13:09:09.857320 kubelet[2789]: I1028 13:09:09.857260 2789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55d41802-eff1-4994-a239-a9e0d26953df" path="/var/lib/kubelet/pods/55d41802-eff1-4994-a239-a9e0d26953df/volumes" Oct 28 13:09:09.857876 kubelet[2789]: I1028 13:09:09.857858 2789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4897ced-5a6a-4744-b3de-17f0816f0e4a" path="/var/lib/kubelet/pods/d4897ced-5a6a-4744-b3de-17f0816f0e4a/volumes" Oct 28 13:09:09.907360 kubelet[2789]: E1028 13:09:09.907306 2789 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 28 13:09:10.179250 sshd[4425]: Connection closed by 10.0.0.1 port 45656 Oct 28 13:09:10.179633 sshd-session[4422]: pam_unix(sshd:session): session closed for user core Oct 28 13:09:10.189487 systemd[1]: sshd@23-10.0.0.28:22-10.0.0.1:45656.service: Deactivated successfully. Oct 28 13:09:10.191279 systemd[1]: session-24.scope: Deactivated successfully. Oct 28 13:09:10.192125 systemd-logind[1598]: Session 24 logged out. Waiting for processes to exit. Oct 28 13:09:10.194689 systemd[1]: Started sshd@24-10.0.0.28:22-10.0.0.1:45662.service - OpenSSH per-connection server daemon (10.0.0.1:45662). Oct 28 13:09:10.195424 systemd-logind[1598]: Removed session 24. Oct 28 13:09:10.255936 sshd[4586]: Accepted publickey for core from 10.0.0.1 port 45662 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:09:10.257279 sshd-session[4586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:09:10.262150 systemd-logind[1598]: New session 25 of user core. Oct 28 13:09:10.270903 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 28 13:09:10.867613 sshd[4589]: Connection closed by 10.0.0.1 port 45662 Oct 28 13:09:10.868138 sshd-session[4586]: pam_unix(sshd:session): session closed for user core Oct 28 13:09:10.881754 systemd[1]: sshd@24-10.0.0.28:22-10.0.0.1:45662.service: Deactivated successfully. Oct 28 13:09:10.888449 systemd[1]: session-25.scope: Deactivated successfully. Oct 28 13:09:10.891939 systemd-logind[1598]: Session 25 logged out. Waiting for processes to exit. Oct 28 13:09:10.896116 systemd[1]: Started sshd@25-10.0.0.28:22-10.0.0.1:45678.service - OpenSSH per-connection server daemon (10.0.0.1:45678). Oct 28 13:09:10.899191 systemd-logind[1598]: Removed session 25. Oct 28 13:09:10.910358 systemd[1]: Created slice kubepods-burstable-pod515c8a1c_9fc9_48ac_913e_0a83c9620a8f.slice - libcontainer container kubepods-burstable-pod515c8a1c_9fc9_48ac_913e_0a83c9620a8f.slice. 
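The mount units deactivated above carry systemd-escaped names: `/` becomes `-` and other characters such as `-` and `~` are rewritten as `\xNN`, which is why the kubelet volume paths appear as `var-lib-kubelet-pods-55d41802\x2deff1-...`. A simplified re-implementation of that escaping, covering only the substitutions visible here (the authoritative rules live in `systemd-escape --path`):

```python
def systemd_escape_path(path: str) -> str:
    """Simplified sketch of `systemd-escape --path` for the cases seen in
    the mount-unit names above; not a complete implementation."""
    out = []
    for i, ch in enumerate(path.strip("/")):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch == "_" or (ch == "." and i != 0):
            out.append(ch)
        else:
            out.append("".join(f"\\x{b:02x}" for b in ch.encode()))
    return "".join(out)

print(systemd_escape_path("/var/lib/kubelet/pods/55d41802-eff1-4994-a239-a9e0d26953df/volumes"))
# -> var-lib-kubelet-pods-55d41802\x2deff1\x2d4994\x2da239\x2da9e0d26953df-volumes
```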
Oct 28 13:09:10.951382 sshd[4601]: Accepted publickey for core from 10.0.0.1 port 45678 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:09:10.952902 sshd-session[4601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:09:10.955771 kubelet[2789]: I1028 13:09:10.955739 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/515c8a1c-9fc9-48ac-913e-0a83c9620a8f-cilium-cgroup\") pod \"cilium-qsks9\" (UID: \"515c8a1c-9fc9-48ac-913e-0a83c9620a8f\") " pod="kube-system/cilium-qsks9" Oct 28 13:09:10.956210 kubelet[2789]: I1028 13:09:10.956180 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/515c8a1c-9fc9-48ac-913e-0a83c9620a8f-xtables-lock\") pod \"cilium-qsks9\" (UID: \"515c8a1c-9fc9-48ac-913e-0a83c9620a8f\") " pod="kube-system/cilium-qsks9" Oct 28 13:09:10.956270 kubelet[2789]: I1028 13:09:10.956212 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8hjl\" (UniqueName: \"kubernetes.io/projected/515c8a1c-9fc9-48ac-913e-0a83c9620a8f-kube-api-access-s8hjl\") pod \"cilium-qsks9\" (UID: \"515c8a1c-9fc9-48ac-913e-0a83c9620a8f\") " pod="kube-system/cilium-qsks9" Oct 28 13:09:10.956270 kubelet[2789]: I1028 13:09:10.956231 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/515c8a1c-9fc9-48ac-913e-0a83c9620a8f-hostproc\") pod \"cilium-qsks9\" (UID: \"515c8a1c-9fc9-48ac-913e-0a83c9620a8f\") " pod="kube-system/cilium-qsks9" Oct 28 13:09:10.956270 kubelet[2789]: I1028 13:09:10.956249 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/515c8a1c-9fc9-48ac-913e-0a83c9620a8f-host-proc-sys-kernel\") pod \"cilium-qsks9\" (UID: \"515c8a1c-9fc9-48ac-913e-0a83c9620a8f\") " pod="kube-system/cilium-qsks9" Oct 28 13:09:10.956270 kubelet[2789]: I1028 13:09:10.956264 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/515c8a1c-9fc9-48ac-913e-0a83c9620a8f-bpf-maps\") pod \"cilium-qsks9\" (UID: \"515c8a1c-9fc9-48ac-913e-0a83c9620a8f\") " pod="kube-system/cilium-qsks9" Oct 28 13:09:10.956353 kubelet[2789]: I1028 13:09:10.956278 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/515c8a1c-9fc9-48ac-913e-0a83c9620a8f-hubble-tls\") pod \"cilium-qsks9\" (UID: \"515c8a1c-9fc9-48ac-913e-0a83c9620a8f\") " pod="kube-system/cilium-qsks9" Oct 28 13:09:10.956353 kubelet[2789]: I1028 13:09:10.956293 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/515c8a1c-9fc9-48ac-913e-0a83c9620a8f-cni-path\") pod \"cilium-qsks9\" (UID: \"515c8a1c-9fc9-48ac-913e-0a83c9620a8f\") " pod="kube-system/cilium-qsks9" Oct 28 13:09:10.956353 kubelet[2789]: I1028 13:09:10.956307 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/515c8a1c-9fc9-48ac-913e-0a83c9620a8f-etc-cni-netd\") pod \"cilium-qsks9\" (UID: 
\"515c8a1c-9fc9-48ac-913e-0a83c9620a8f\") " pod="kube-system/cilium-qsks9" Oct 28 13:09:10.956353 kubelet[2789]: I1028 13:09:10.956323 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/515c8a1c-9fc9-48ac-913e-0a83c9620a8f-cilium-ipsec-secrets\") pod \"cilium-qsks9\" (UID: \"515c8a1c-9fc9-48ac-913e-0a83c9620a8f\") " pod="kube-system/cilium-qsks9" Oct 28 13:09:10.956353 kubelet[2789]: I1028 13:09:10.956338 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/515c8a1c-9fc9-48ac-913e-0a83c9620a8f-host-proc-sys-net\") pod \"cilium-qsks9\" (UID: \"515c8a1c-9fc9-48ac-913e-0a83c9620a8f\") " pod="kube-system/cilium-qsks9" Oct 28 13:09:10.956454 kubelet[2789]: I1028 13:09:10.956412 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/515c8a1c-9fc9-48ac-913e-0a83c9620a8f-cilium-run\") pod \"cilium-qsks9\" (UID: \"515c8a1c-9fc9-48ac-913e-0a83c9620a8f\") " pod="kube-system/cilium-qsks9" Oct 28 13:09:10.956481 kubelet[2789]: I1028 13:09:10.956464 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/515c8a1c-9fc9-48ac-913e-0a83c9620a8f-lib-modules\") pod \"cilium-qsks9\" (UID: \"515c8a1c-9fc9-48ac-913e-0a83c9620a8f\") " pod="kube-system/cilium-qsks9" Oct 28 13:09:10.956509 kubelet[2789]: I1028 13:09:10.956490 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/515c8a1c-9fc9-48ac-913e-0a83c9620a8f-clustermesh-secrets\") pod \"cilium-qsks9\" (UID: \"515c8a1c-9fc9-48ac-913e-0a83c9620a8f\") " pod="kube-system/cilium-qsks9" Oct 28 13:09:10.956534 kubelet[2789]: I1028 13:09:10.956518 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/515c8a1c-9fc9-48ac-913e-0a83c9620a8f-cilium-config-path\") pod \"cilium-qsks9\" (UID: \"515c8a1c-9fc9-48ac-913e-0a83c9620a8f\") " pod="kube-system/cilium-qsks9" Oct 28 13:09:10.957545 systemd-logind[1598]: New session 26 of user core. Oct 28 13:09:10.963927 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 28 13:09:11.014881 sshd[4604]: Connection closed by 10.0.0.1 port 45678 Oct 28 13:09:11.015237 sshd-session[4601]: pam_unix(sshd:session): session closed for user core Oct 28 13:09:11.026627 systemd[1]: sshd@25-10.0.0.28:22-10.0.0.1:45678.service: Deactivated successfully. Oct 28 13:09:11.028418 systemd[1]: session-26.scope: Deactivated successfully. Oct 28 13:09:11.029153 systemd-logind[1598]: Session 26 logged out. Waiting for processes to exit. Oct 28 13:09:11.031709 systemd[1]: Started sshd@26-10.0.0.28:22-10.0.0.1:45680.service - OpenSSH per-connection server daemon (10.0.0.1:45680). Oct 28 13:09:11.032381 systemd-logind[1598]: Removed session 26. Oct 28 13:09:11.088035 sshd[4611]: Accepted publickey for core from 10.0.0.1 port 45680 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:09:11.089532 sshd-session[4611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:09:11.093670 systemd-logind[1598]: New session 27 of user core. 
Oct 28 13:09:11.106908 systemd[1]: Started session-27.scope - Session 27 of User core. Oct 28 13:09:11.231649 kubelet[2789]: E1028 13:09:11.231509 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:09:11.232160 containerd[1620]: time="2025-10-28T13:09:11.232109208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qsks9,Uid:515c8a1c-9fc9-48ac-913e-0a83c9620a8f,Namespace:kube-system,Attempt:0,}" Oct 28 13:09:11.249610 containerd[1620]: time="2025-10-28T13:09:11.249208829Z" level=info msg="connecting to shim 540b31586a3a1124d471c7d1e4df011e01befc8c6064a93400e868f8b796ce97" address="unix:///run/containerd/s/0bba7d6d12e96043cf7548fab53e30fbd87f8396c945b80f560f9a39b80fc76d" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:09:11.276936 systemd[1]: Started cri-containerd-540b31586a3a1124d471c7d1e4df011e01befc8c6064a93400e868f8b796ce97.scope - libcontainer container 540b31586a3a1124d471c7d1e4df011e01befc8c6064a93400e868f8b796ce97. Oct 28 13:09:11.306529 containerd[1620]: time="2025-10-28T13:09:11.306469336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qsks9,Uid:515c8a1c-9fc9-48ac-913e-0a83c9620a8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"540b31586a3a1124d471c7d1e4df011e01befc8c6064a93400e868f8b796ce97\"" Oct 28 13:09:11.308457 kubelet[2789]: E1028 13:09:11.308415 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:09:11.313531 containerd[1620]: time="2025-10-28T13:09:11.313482859Z" level=info msg="CreateContainer within sandbox \"540b31586a3a1124d471c7d1e4df011e01befc8c6064a93400e868f8b796ce97\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 28 13:09:11.321689 containerd[1620]: time="2025-10-28T13:09:11.321635583Z" level=info msg="Container b0bd7fe08a17b1fe163f65c7261fe91a23489a2c3a9b98d404fc7c807115e3e7: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:09:11.328621 containerd[1620]: time="2025-10-28T13:09:11.328570225Z" level=info msg="CreateContainer within sandbox \"540b31586a3a1124d471c7d1e4df011e01befc8c6064a93400e868f8b796ce97\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b0bd7fe08a17b1fe163f65c7261fe91a23489a2c3a9b98d404fc7c807115e3e7\"" Oct 28 13:09:11.329163 containerd[1620]: time="2025-10-28T13:09:11.329138073Z" level=info msg="StartContainer for \"b0bd7fe08a17b1fe163f65c7261fe91a23489a2c3a9b98d404fc7c807115e3e7\"" Oct 28 13:09:11.330446 containerd[1620]: time="2025-10-28T13:09:11.330321950Z" level=info msg="connecting to shim b0bd7fe08a17b1fe163f65c7261fe91a23489a2c3a9b98d404fc7c807115e3e7" address="unix:///run/containerd/s/0bba7d6d12e96043cf7548fab53e30fbd87f8396c945b80f560f9a39b80fc76d" protocol=ttrpc version=3 Oct 28 13:09:11.352963 systemd[1]: Started cri-containerd-b0bd7fe08a17b1fe163f65c7261fe91a23489a2c3a9b98d404fc7c807115e3e7.scope - libcontainer container b0bd7fe08a17b1fe163f65c7261fe91a23489a2c3a9b98d404fc7c807115e3e7. Oct 28 13:09:11.384607 containerd[1620]: time="2025-10-28T13:09:11.384563977Z" level=info msg="StartContainer for \"b0bd7fe08a17b1fe163f65c7261fe91a23489a2c3a9b98d404fc7c807115e3e7\" returns successfully" Oct 28 13:09:11.395447 systemd[1]: cri-containerd-b0bd7fe08a17b1fe163f65c7261fe91a23489a2c3a9b98d404fc7c807115e3e7.scope: Deactivated successfully. 
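The sequence above is the sandbox bring-up for the new pod: `RunPodSandbox` returns a sandbox ID, containerd connects to its shim over a ttrpc socket, the first init container (`mount-cgroup`) is created and started inside it, and its scope is deactivated milliseconds later because init containers run to completion. The same sandbox can be located afterwards with `crictl`; this sketch assumes `crictl` is configured for the containerd socket, and the JSON field names follow the CRI pod-sandbox listing, so treat them as an assumption rather than a guaranteed schema:

```python
import json
import subprocess

def sandbox_for(pod_name: str) -> dict:
    """Look up the CRI sandbox reported in the log above, via crictl."""
    raw = subprocess.run(
        ["crictl", "pods", "--name", pod_name, "-o", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    items = json.loads(raw).get("items", [])
    if not items:
        raise SystemExit(f"no sandbox found for pod {pod_name!r}")
    sandbox = items[0]
    # Field names here mirror the CRI PodSandbox listing and may differ by version.
    print(sandbox.get("id"), sandbox.get("metadata", {}).get("namespace"), sandbox.get("state"))
    return sandbox

if __name__ == "__main__":
    sandbox_for("cilium-qsks9")
```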
Oct 28 13:09:11.396813 containerd[1620]: time="2025-10-28T13:09:11.396733245Z" level=info msg="received exit event container_id:\"b0bd7fe08a17b1fe163f65c7261fe91a23489a2c3a9b98d404fc7c807115e3e7\" id:\"b0bd7fe08a17b1fe163f65c7261fe91a23489a2c3a9b98d404fc7c807115e3e7\" pid:4684 exited_at:{seconds:1761656951 nanos:396341985}" Oct 28 13:09:11.905663 kubelet[2789]: I1028 13:09:11.905598 2789 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-28T13:09:11Z","lastTransitionTime":"2025-10-28T13:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Oct 28 13:09:12.122325 kubelet[2789]: E1028 13:09:12.122268 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:09:12.126804 containerd[1620]: time="2025-10-28T13:09:12.126754125Z" level=info msg="CreateContainer within sandbox \"540b31586a3a1124d471c7d1e4df011e01befc8c6064a93400e868f8b796ce97\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 28 13:09:12.134410 containerd[1620]: time="2025-10-28T13:09:12.134347250Z" level=info msg="Container 7355e1af457cba86492acd718018c40d54fa1bc764379ee1968b3e51747de932: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:09:12.142385 containerd[1620]: time="2025-10-28T13:09:12.142325724Z" level=info msg="CreateContainer within sandbox \"540b31586a3a1124d471c7d1e4df011e01befc8c6064a93400e868f8b796ce97\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7355e1af457cba86492acd718018c40d54fa1bc764379ee1968b3e51747de932\"" Oct 28 13:09:12.143110 containerd[1620]: time="2025-10-28T13:09:12.143066041Z" level=info msg="StartContainer for \"7355e1af457cba86492acd718018c40d54fa1bc764379ee1968b3e51747de932\"" Oct 28 13:09:12.144193 containerd[1620]: time="2025-10-28T13:09:12.144150708Z" level=info msg="connecting to shim 7355e1af457cba86492acd718018c40d54fa1bc764379ee1968b3e51747de932" address="unix:///run/containerd/s/0bba7d6d12e96043cf7548fab53e30fbd87f8396c945b80f560f9a39b80fc76d" protocol=ttrpc version=3 Oct 28 13:09:12.161945 systemd[1]: Started cri-containerd-7355e1af457cba86492acd718018c40d54fa1bc764379ee1968b3e51747de932.scope - libcontainer container 7355e1af457cba86492acd718018c40d54fa1bc764379ee1968b3e51747de932. Oct 28 13:09:12.195072 containerd[1620]: time="2025-10-28T13:09:12.195016794Z" level=info msg="StartContainer for \"7355e1af457cba86492acd718018c40d54fa1bc764379ee1968b3e51747de932\" returns successfully" Oct 28 13:09:12.201796 systemd[1]: cri-containerd-7355e1af457cba86492acd718018c40d54fa1bc764379ee1968b3e51747de932.scope: Deactivated successfully. Oct 28 13:09:12.202130 containerd[1620]: time="2025-10-28T13:09:12.202094032Z" level=info msg="received exit event container_id:\"7355e1af457cba86492acd718018c40d54fa1bc764379ee1968b3e51747de932\" id:\"7355e1af457cba86492acd718018c40d54fa1bc764379ee1968b3e51747de932\" pid:4730 exited_at:{seconds:1761656952 nanos:201884851}" Oct 28 13:09:12.222520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7355e1af457cba86492acd718018c40d54fa1bc764379ee1968b3e51747de932-rootfs.mount: Deactivated successfully. 
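The setters.go:618 entry embeds the full Ready condition that kubelet flipped to False because no CNI plugin is initialised yet; it flips back once the Cilium agent these init containers are preparing comes up. A small sketch that parses the condition object copied verbatim from that log line, just to show its shape:

```python
#!/usr/bin/env python3
"""Inspect the node Ready condition kubelet logged at setters.go:618."""
import json

# Condition object copied from the kubelet log entry above.
condition = json.loads("""
{"type": "Ready",
 "status": "False",
 "lastHeartbeatTime": "2025-10-28T13:09:11Z",
 "lastTransitionTime": "2025-10-28T13:09:11Z",
 "reason": "KubeletNotReady",
 "message": "container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
""")

ready = condition["type"] == "Ready" and condition["status"] == "True"
print("node ready:", ready)                 # False until the CNI is up
print("reason:    ", condition["reason"])
print("message:   ", condition["message"])
```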
Oct 28 13:09:13.126151 kubelet[2789]: E1028 13:09:13.126112 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:09:13.146681 containerd[1620]: time="2025-10-28T13:09:13.146616797Z" level=info msg="CreateContainer within sandbox \"540b31586a3a1124d471c7d1e4df011e01befc8c6064a93400e868f8b796ce97\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 28 13:09:13.175231 containerd[1620]: time="2025-10-28T13:09:13.175177648Z" level=info msg="Container f05f4a71206cf3f345b9de761ca96493ee955a633cfa467b6548c4d5b9e4abdb: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:09:13.177284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3054690786.mount: Deactivated successfully. Oct 28 13:09:13.185132 containerd[1620]: time="2025-10-28T13:09:13.185056942Z" level=info msg="CreateContainer within sandbox \"540b31586a3a1124d471c7d1e4df011e01befc8c6064a93400e868f8b796ce97\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f05f4a71206cf3f345b9de761ca96493ee955a633cfa467b6548c4d5b9e4abdb\"" Oct 28 13:09:13.185744 containerd[1620]: time="2025-10-28T13:09:13.185691145Z" level=info msg="StartContainer for \"f05f4a71206cf3f345b9de761ca96493ee955a633cfa467b6548c4d5b9e4abdb\"" Oct 28 13:09:13.187065 containerd[1620]: time="2025-10-28T13:09:13.187030899Z" level=info msg="connecting to shim f05f4a71206cf3f345b9de761ca96493ee955a633cfa467b6548c4d5b9e4abdb" address="unix:///run/containerd/s/0bba7d6d12e96043cf7548fab53e30fbd87f8396c945b80f560f9a39b80fc76d" protocol=ttrpc version=3 Oct 28 13:09:13.219955 systemd[1]: Started cri-containerd-f05f4a71206cf3f345b9de761ca96493ee955a633cfa467b6548c4d5b9e4abdb.scope - libcontainer container f05f4a71206cf3f345b9de761ca96493ee955a633cfa467b6548c4d5b9e4abdb. Oct 28 13:09:13.260841 containerd[1620]: time="2025-10-28T13:09:13.260795877Z" level=info msg="StartContainer for \"f05f4a71206cf3f345b9de761ca96493ee955a633cfa467b6548c4d5b9e4abdb\" returns successfully" Oct 28 13:09:13.260839 systemd[1]: cri-containerd-f05f4a71206cf3f345b9de761ca96493ee955a633cfa467b6548c4d5b9e4abdb.scope: Deactivated successfully. Oct 28 13:09:13.262234 containerd[1620]: time="2025-10-28T13:09:13.262195105Z" level=info msg="received exit event container_id:\"f05f4a71206cf3f345b9de761ca96493ee955a633cfa467b6548c4d5b9e4abdb\" id:\"f05f4a71206cf3f345b9de761ca96493ee955a633cfa467b6548c4d5b9e4abdb\" pid:4774 exited_at:{seconds:1761656953 nanos:261902374}" Oct 28 13:09:13.285881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f05f4a71206cf3f345b9de761ca96493ee955a633cfa467b6548c4d5b9e4abdb-rootfs.mount: Deactivated successfully. 
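mount-bpf-fs is the Cilium init step that ensures the BPF filesystem is mounted before the agent starts; the conventional /sys/fs/bpf mount point used below is an assumption, not something the log states. A minimal sketch of checking that from the node via /proc/mounts:

```python
#!/usr/bin/env python3
"""Check whether a bpf filesystem is mounted, as the mount-bpf-fs step ensures.

Sketch only: /sys/fs/bpf is the conventional mount point, assumed here.
"""
BPF_MOUNT = "/sys/fs/bpf"

def bpffs_mounted(mounts_file="/proc/mounts"):
    with open(mounts_file) as fh:
        for line in fh:
            device, mountpoint, fstype, *_ = line.split()
            if fstype == "bpf" and mountpoint == BPF_MOUNT:
                return True
    return False

if __name__ == "__main__":
    print("bpffs mounted at", BPF_MOUNT, ":", bpffs_mounted())
```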
Oct 28 13:09:14.131367 kubelet[2789]: E1028 13:09:14.131326 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:09:14.173386 containerd[1620]: time="2025-10-28T13:09:14.173327340Z" level=info msg="CreateContainer within sandbox \"540b31586a3a1124d471c7d1e4df011e01befc8c6064a93400e868f8b796ce97\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 28 13:09:14.265068 containerd[1620]: time="2025-10-28T13:09:14.265020257Z" level=info msg="Container 78cb1552b498d2f64b1bd9a92b7466ae02972358445a5bc55f77b1fe87435c0a: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:09:14.271765 containerd[1620]: time="2025-10-28T13:09:14.271699172Z" level=info msg="CreateContainer within sandbox \"540b31586a3a1124d471c7d1e4df011e01befc8c6064a93400e868f8b796ce97\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"78cb1552b498d2f64b1bd9a92b7466ae02972358445a5bc55f77b1fe87435c0a\"" Oct 28 13:09:14.272310 containerd[1620]: time="2025-10-28T13:09:14.272284621Z" level=info msg="StartContainer for \"78cb1552b498d2f64b1bd9a92b7466ae02972358445a5bc55f77b1fe87435c0a\"" Oct 28 13:09:14.273103 containerd[1620]: time="2025-10-28T13:09:14.273083379Z" level=info msg="connecting to shim 78cb1552b498d2f64b1bd9a92b7466ae02972358445a5bc55f77b1fe87435c0a" address="unix:///run/containerd/s/0bba7d6d12e96043cf7548fab53e30fbd87f8396c945b80f560f9a39b80fc76d" protocol=ttrpc version=3 Oct 28 13:09:14.288089 systemd[1]: Started cri-containerd-78cb1552b498d2f64b1bd9a92b7466ae02972358445a5bc55f77b1fe87435c0a.scope - libcontainer container 78cb1552b498d2f64b1bd9a92b7466ae02972358445a5bc55f77b1fe87435c0a. Oct 28 13:09:14.315632 systemd[1]: cri-containerd-78cb1552b498d2f64b1bd9a92b7466ae02972358445a5bc55f77b1fe87435c0a.scope: Deactivated successfully. Oct 28 13:09:14.316954 containerd[1620]: time="2025-10-28T13:09:14.316906317Z" level=info msg="received exit event container_id:\"78cb1552b498d2f64b1bd9a92b7466ae02972358445a5bc55f77b1fe87435c0a\" id:\"78cb1552b498d2f64b1bd9a92b7466ae02972358445a5bc55f77b1fe87435c0a\" pid:4813 exited_at:{seconds:1761656954 nanos:315765976}" Oct 28 13:09:14.324975 containerd[1620]: time="2025-10-28T13:09:14.324935623Z" level=info msg="StartContainer for \"78cb1552b498d2f64b1bd9a92b7466ae02972358445a5bc55f77b1fe87435c0a\" returns successfully" Oct 28 13:09:14.337225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78cb1552b498d2f64b1bd9a92b7466ae02972358445a5bc55f77b1fe87435c0a-rootfs.mount: Deactivated successfully. 
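Each init container above leaves the same containerd trace: CreateContainer returns an id, StartContainer returns successfully, the cri-containerd scope is deactivated, and a "received exit event" entry records the pid and exit time. A sketch that reconstructs those exit events from a saved copy of this journal (the node.log path is hypothetical, and the pattern assumes the escaped quotes are preserved exactly as printed above):

```python
#!/usr/bin/env python3
"""Reconstruct container exit events from a saved copy of this journal.

Sketch only: LOG_PATH is an assumed local file holding the log text above.
"""
import re

LOG_PATH = "node.log"   # hypothetical saved journal

# Matches containerd's
#   received exit event container_id:\"...\" ... pid:NNN exited_at:{seconds:S nanos:N}
EXIT_RE = re.compile(
    r'received exit event container_id:\\?"(?P<cid>[0-9a-f]+)\\?"'
    r'.*?pid:(?P<pid>\d+) exited_at:\{seconds:(?P<sec>\d+) nanos:(?P<nanos>\d+)\}'
)

def exit_events(path):
    with open(path) as fh:
        for line in fh:
            for m in EXIT_RE.finditer(line):
                yield m.group("cid")[:12], int(m.group("pid")), int(m.group("sec"))

if __name__ == "__main__":
    for cid, pid, sec in exit_events(LOG_PATH):
        print(f"container {cid}... (pid {pid}) exited at unix time {sec}")
```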
Oct 28 13:09:14.908843 kubelet[2789]: E1028 13:09:14.908777 2789 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 28 13:09:15.137289 kubelet[2789]: E1028 13:09:15.137251 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:09:15.141335 containerd[1620]: time="2025-10-28T13:09:15.141287844Z" level=info msg="CreateContainer within sandbox \"540b31586a3a1124d471c7d1e4df011e01befc8c6064a93400e868f8b796ce97\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 28 13:09:15.156322 containerd[1620]: time="2025-10-28T13:09:15.156262043Z" level=info msg="Container 11bf5b16c11721b97ae46c5da65f9c7bd704f6f5b5e45d042ef1d03742179095: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:09:15.163971 containerd[1620]: time="2025-10-28T13:09:15.163885998Z" level=info msg="CreateContainer within sandbox \"540b31586a3a1124d471c7d1e4df011e01befc8c6064a93400e868f8b796ce97\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"11bf5b16c11721b97ae46c5da65f9c7bd704f6f5b5e45d042ef1d03742179095\"" Oct 28 13:09:15.164478 containerd[1620]: time="2025-10-28T13:09:15.164448484Z" level=info msg="StartContainer for \"11bf5b16c11721b97ae46c5da65f9c7bd704f6f5b5e45d042ef1d03742179095\"" Oct 28 13:09:15.165849 containerd[1620]: time="2025-10-28T13:09:15.165631385Z" level=info msg="connecting to shim 11bf5b16c11721b97ae46c5da65f9c7bd704f6f5b5e45d042ef1d03742179095" address="unix:///run/containerd/s/0bba7d6d12e96043cf7548fab53e30fbd87f8396c945b80f560f9a39b80fc76d" protocol=ttrpc version=3 Oct 28 13:09:15.191944 systemd[1]: Started cri-containerd-11bf5b16c11721b97ae46c5da65f9c7bd704f6f5b5e45d042ef1d03742179095.scope - libcontainer container 11bf5b16c11721b97ae46c5da65f9c7bd704f6f5b5e45d042ef1d03742179095. 
Oct 28 13:09:15.230246 containerd[1620]: time="2025-10-28T13:09:15.230205541Z" level=info msg="StartContainer for \"11bf5b16c11721b97ae46c5da65f9c7bd704f6f5b5e45d042ef1d03742179095\" returns successfully" Oct 28 13:09:15.643808 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Oct 28 13:09:16.143027 kubelet[2789]: E1028 13:09:16.142977 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:09:16.158333 kubelet[2789]: I1028 13:09:16.158251 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qsks9" podStartSLOduration=6.158236256 podStartE2EDuration="6.158236256s" podCreationTimestamp="2025-10-28 13:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 13:09:16.157105686 +0000 UTC m=+86.397051627" watchObservedRunningTime="2025-10-28 13:09:16.158236256 +0000 UTC m=+86.398182207" Oct 28 13:09:16.855274 kubelet[2789]: E1028 13:09:16.855205 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:09:17.232830 kubelet[2789]: E1028 13:09:17.232673 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:09:18.707641 systemd-networkd[1522]: lxc_health: Link UP Oct 28 13:09:18.708102 systemd-networkd[1522]: lxc_health: Gained carrier Oct 28 13:09:19.233713 kubelet[2789]: E1028 13:09:19.233651 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:09:19.753040 systemd-networkd[1522]: lxc_health: Gained IPv6LL Oct 28 13:09:20.152195 kubelet[2789]: E1028 13:09:20.152134 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:09:21.153532 kubelet[2789]: E1028 13:09:21.153483 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:09:23.876003 sshd[4618]: Connection closed by 10.0.0.1 port 45680 Oct 28 13:09:23.876340 sshd-session[4611]: pam_unix(sshd:session): session closed for user core Oct 28 13:09:23.881736 systemd[1]: sshd@26-10.0.0.28:22-10.0.0.1:45680.service: Deactivated successfully. Oct 28 13:09:23.883650 systemd[1]: session-27.scope: Deactivated successfully. Oct 28 13:09:23.884437 systemd-logind[1598]: Session 27 logged out. Waiting for processes to exit. Oct 28 13:09:23.885552 systemd-logind[1598]: Removed session 27.
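The pod_startup_latency_tracker entry reports podStartSLOduration=6.158236256s for cilium-qsks9; with no image pull involved (both pulling timestamps are the zero value), that figure is simply watchObservedRunningTime minus podCreationTimestamp. A quick check of the arithmetic using the timestamps from the log, trimmed to microseconds since Python's datetime does not carry nanoseconds:

```python
#!/usr/bin/env python3
"""Recompute the startup duration reported by pod_startup_latency_tracker."""
from datetime import datetime, timezone

# Timestamps copied from the kubelet log entry above.
created = datetime(2025, 10, 28, 13, 9, 10, tzinfo=timezone.utc)          # podCreationTimestamp
running = datetime(2025, 10, 28, 13, 9, 16, 158236, tzinfo=timezone.utc)  # watchObservedRunningTime

print((running - created).total_seconds())   # 6.158236 ~= podStartSLOduration
```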