Sep 8 23:47:21.947948 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:08:00 -00 2025
Sep 8 23:47:21.947973 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=614c4ef85422d1b24559f161a4ad89cb626bb862dd1c761ed2d77c8a0665a1ae
Sep 8 23:47:21.947985 kernel: BIOS-provided physical RAM map:
Sep 8 23:47:21.947992 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 8 23:47:21.947999 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 8 23:47:21.948005 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 8 23:47:21.948013 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 8 23:47:21.948020 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 8 23:47:21.948026 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 8 23:47:21.948036 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 8 23:47:21.948042 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 8 23:47:21.948049 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 8 23:47:21.948059 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 8 23:47:21.948066 kernel: NX (Execute Disable) protection: active
Sep 8 23:47:21.948074 kernel: APIC: Static calls initialized
Sep 8 23:47:21.948086 kernel: SMBIOS 2.8 present.
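A quick cross-check of the e820 map above, assuming only that the two ranges marked `usable` are the RAM the kernel can manage: summing them in Python lands within a few pages of the 2571752K total the kernel reports later ("Memory: 2432544K/2571752K available"). A minimal sketch, with the two usable entries transcribed from the log:

```python
import re

# Abridged transcription of the two "usable" e820 entries above.
E820 = """\
BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
"""

usable = 0
for start, end in re.findall(r"\[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] usable", E820):
    usable += int(end, 16) - int(start, 16) + 1  # e820 ranges are inclusive

print(f"usable RAM: {usable / 2**20:.1f} MiB")
# -> 2511.5 MiB, essentially the 2571752K (~2511.5 MiB) total the kernel
#    later reports in "Memory: 2432544K/2571752K available".
```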
Sep 8 23:47:21.948093 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 8 23:47:21.948101 kernel: Hypervisor detected: KVM
Sep 8 23:47:21.948108 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 8 23:47:21.948115 kernel: kvm-clock: using sched offset of 3951897659 cycles
Sep 8 23:47:21.948122 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 8 23:47:21.948130 kernel: tsc: Detected 2794.748 MHz processor
Sep 8 23:47:21.948138 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 8 23:47:21.948145 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 8 23:47:21.948153 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 8 23:47:21.948163 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 8 23:47:21.948171 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 8 23:47:21.948178 kernel: Using GB pages for direct mapping
Sep 8 23:47:21.948185 kernel: ACPI: Early table checksum verification disabled
Sep 8 23:47:21.948193 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 8 23:47:21.948200 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:47:21.948208 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:47:21.948215 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:47:21.948222 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 8 23:47:21.948233 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:47:21.948240 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:47:21.948247 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:47:21.948255 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:47:21.948262 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 8 23:47:21.948270 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 8 23:47:21.948281 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 8 23:47:21.948291 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 8 23:47:21.948299 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 8 23:47:21.948306 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 8 23:47:21.948314 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 8 23:47:21.948323 kernel: No NUMA configuration found
Sep 8 23:47:21.948331 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 8 23:47:21.948339 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Sep 8 23:47:21.948349 kernel: Zone ranges:
Sep 8 23:47:21.948357 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 8 23:47:21.948364 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 8 23:47:21.948372 kernel: Normal empty
Sep 8 23:47:21.948379 kernel: Movable zone start for each node
Sep 8 23:47:21.948386 kernel: Early memory node ranges
Sep 8 23:47:21.948394 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 8 23:47:21.948401 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 8 23:47:21.948409 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 8 23:47:21.948419 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 8 23:47:21.948429 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 8 23:47:21.948444 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 8 23:47:21.948452 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 8 23:47:21.948459 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 8 23:47:21.948467 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 8 23:47:21.948474 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 8 23:47:21.948482 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 8 23:47:21.948490 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 8 23:47:21.948501 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 8 23:47:21.948508 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 8 23:47:21.948516 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 8 23:47:21.948523 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 8 23:47:21.948531 kernel: TSC deadline timer available
Sep 8 23:47:21.948538 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 8 23:47:21.948546 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 8 23:47:21.948553 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 8 23:47:21.948563 kernel: kvm-guest: setup PV sched yield
Sep 8 23:47:21.948571 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 8 23:47:21.948581 kernel: Booting paravirtualized kernel on KVM
Sep 8 23:47:21.948589 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 8 23:47:21.948597 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 8 23:47:21.948604 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 8 23:47:21.948612 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 8 23:47:21.948619 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 8 23:47:21.948628 kernel: kvm-guest: PV spinlocks enabled
Sep 8 23:47:21.948655 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 8 23:47:21.948684 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=614c4ef85422d1b24559f161a4ad89cb626bb862dd1c761ed2d77c8a0665a1ae
Sep 8 23:47:21.948714 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 8 23:47:21.948731 kernel: random: crng init done
Sep 8 23:47:21.948739 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 8 23:47:21.948747 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 8 23:47:21.948754 kernel: Fallback order for Node 0: 0
Sep 8 23:47:21.948762 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Sep 8 23:47:21.948769 kernel: Policy zone: DMA32
Sep 8 23:47:21.948777 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 8 23:47:21.948789 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2293K rwdata, 22868K rodata, 43504K init, 1572K bss, 138948K reserved, 0K cma-reserved)
Sep 8 23:47:21.948796 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 8 23:47:21.948804 kernel: ftrace: allocating 37943 entries in 149 pages
Sep 8 23:47:21.948811 kernel: ftrace: allocated 149 pages with 4 groups
Sep 8 23:47:21.948819 kernel: Dynamic Preempt: voluntary
Sep 8 23:47:21.948826 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 8 23:47:21.948835 kernel: rcu: RCU event tracing is enabled.
Sep 8 23:47:21.948842 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 8 23:47:21.948850 kernel: Trampoline variant of Tasks RCU enabled.
Sep 8 23:47:21.948861 kernel: Rude variant of Tasks RCU enabled.
Sep 8 23:47:21.948868 kernel: Tracing variant of Tasks RCU enabled.
Sep 8 23:47:21.948876 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 8 23:47:21.948886 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 8 23:47:21.948894 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 8 23:47:21.948902 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 8 23:47:21.948909 kernel: Console: colour VGA+ 80x25
Sep 8 23:47:21.948917 kernel: printk: console [ttyS0] enabled
Sep 8 23:47:21.948924 kernel: ACPI: Core revision 20230628
Sep 8 23:47:21.948935 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 8 23:47:21.948943 kernel: APIC: Switch to symmetric I/O mode setup
Sep 8 23:47:21.948950 kernel: x2apic enabled
Sep 8 23:47:21.948958 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 8 23:47:21.948965 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 8 23:47:21.948973 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 8 23:47:21.948981 kernel: kvm-guest: setup PV IPIs
Sep 8 23:47:21.948999 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 8 23:47:21.949007 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 8 23:47:21.949015 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 8 23:47:21.949023 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 8 23:47:21.949031 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 8 23:47:21.949041 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 8 23:47:21.949049 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 8 23:47:21.949057 kernel: Spectre V2 : Mitigation: Retpolines
Sep 8 23:47:21.949065 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 8 23:47:21.949073 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 8 23:47:21.949084 kernel: active return thunk: retbleed_return_thunk
Sep 8 23:47:21.949094 kernel: RETBleed: Mitigation: untrained return thunk
Sep 8 23:47:21.949102 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 8 23:47:21.949110 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 8 23:47:21.949118 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 8 23:47:21.949126 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 8 23:47:21.949134 kernel: active return thunk: srso_return_thunk
Sep 8 23:47:21.949142 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 8 23:47:21.949153 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 8 23:47:21.949161 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 8 23:47:21.949169 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 8 23:47:21.949177 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 8 23:47:21.949185 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 8 23:47:21.949192 kernel: Freeing SMP alternatives memory: 32K
Sep 8 23:47:21.949200 kernel: pid_max: default: 32768 minimum: 301
Sep 8 23:47:21.949208 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 8 23:47:21.949216 kernel: landlock: Up and running.
Sep 8 23:47:21.949226 kernel: SELinux: Initializing.
Sep 8 23:47:21.949234 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 8 23:47:21.949242 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 8 23:47:21.949250 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 8 23:47:21.949258 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 8 23:47:21.949266 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 8 23:47:21.949274 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 8 23:47:21.949284 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 8 23:47:21.949292 kernel: ... version: 0
Sep 8 23:47:21.949303 kernel: ... bit width: 48
Sep 8 23:47:21.949311 kernel: ... generic registers: 6
Sep 8 23:47:21.949318 kernel: ... value mask: 0000ffffffffffff
Sep 8 23:47:21.949326 kernel: ... max period: 00007fffffffffff
Sep 8 23:47:21.949334 kernel: ... fixed-purpose events: 0
Sep 8 23:47:21.949342 kernel: ... event mask: 000000000000003f
Sep 8 23:47:21.949350 kernel: signal: max sigframe size: 1776
Sep 8 23:47:21.949357 kernel: rcu: Hierarchical SRCU implementation.
Sep 8 23:47:21.949365 kernel: rcu: Max phase no-delay instances is 400.
Sep 8 23:47:21.949376 kernel: smp: Bringing up secondary CPUs ...
Sep 8 23:47:21.949384 kernel: smpboot: x86: Booting SMP configuration:
Sep 8 23:47:21.949391 kernel: .... node #0, CPUs: #1 #2 #3
Sep 8 23:47:21.949399 kernel: smp: Brought up 1 node, 4 CPUs
Sep 8 23:47:21.949407 kernel: smpboot: Max logical packages: 1
Sep 8 23:47:21.949415 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 8 23:47:21.949422 kernel: devtmpfs: initialized
Sep 8 23:47:21.949430 kernel: x86/mm: Memory block size: 128MB
Sep 8 23:47:21.949447 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 8 23:47:21.949458 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 8 23:47:21.949466 kernel: pinctrl core: initialized pinctrl subsystem
Sep 8 23:47:21.949474 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 8 23:47:21.949482 kernel: audit: initializing netlink subsys (disabled)
Sep 8 23:47:21.949490 kernel: audit: type=2000 audit(1757375241.649:1): state=initialized audit_enabled=0 res=1
Sep 8 23:47:21.949497 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 8 23:47:21.949505 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 8 23:47:21.949513 kernel: cpuidle: using governor menu
Sep 8 23:47:21.949521 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 8 23:47:21.949531 kernel: dca service started, version 1.12.1
Sep 8 23:47:21.949539 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 8 23:47:21.949547 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 8 23:47:21.949555 kernel: PCI: Using configuration type 1 for base access
Sep 8 23:47:21.949563 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
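One detail worth noticing in the "Kernel command line" entry above: `rootflags=rw mount.usrflags=ro` appears twice, once prepended ahead of BOOT_IMAGE and once among the configured arguments. The repeats are identical, so whichever occurrence a consumer honors, the result is the same. A small sketch (hypothetical helper, not part of the boot) that splits a cmdline into parameters and flags repeated keys:

```python
from collections import Counter

# Hypothetical helper: split a kernel command line into parameters and report
# keys that occur more than once. (Quoted values are ignored for brevity; the
# real kernel parser handles them.)
def duplicate_params(cmdline: str) -> list[str]:
    keys = Counter(p.split("=", 1)[0] for p in cmdline.split())
    return [k for k, n in keys.items() if n > 1]

cmdline = ("rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
           "mount.usr=/dev/mapper/usr rootflags=rw mount.usrflags=ro "
           "root=LABEL=ROOT")
print(duplicate_params(cmdline))  # ['rootflags', 'mount.usrflags']
```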
Sep 8 23:47:21.949571 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 8 23:47:21.949581 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 8 23:47:21.949589 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 8 23:47:21.949597 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 8 23:47:21.949607 kernel: ACPI: Added _OSI(Module Device)
Sep 8 23:47:21.949615 kernel: ACPI: Added _OSI(Processor Device)
Sep 8 23:47:21.949623 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 8 23:47:21.949631 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 8 23:47:21.949651 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 8 23:47:21.949659 kernel: ACPI: Interpreter enabled
Sep 8 23:47:21.949666 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 8 23:47:21.949674 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 8 23:47:21.949682 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 8 23:47:21.949693 kernel: PCI: Using E820 reservations for host bridge windows
Sep 8 23:47:21.949701 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 8 23:47:21.949709 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 8 23:47:21.949940 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 8 23:47:21.950094 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 8 23:47:21.950240 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 8 23:47:21.950254 kernel: PCI host bridge to bus 0000:00
Sep 8 23:47:21.950445 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 8 23:47:21.950605 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 8 23:47:21.950825 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 8 23:47:21.950962 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 8 23:47:21.951109 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 8 23:47:21.951244 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 8 23:47:21.951367 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 8 23:47:21.951561 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 8 23:47:21.951739 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 8 23:47:21.951875 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 8 23:47:21.952024 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 8 23:47:21.952164 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 8 23:47:21.952297 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 8 23:47:21.952463 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 8 23:47:21.952620 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 8 23:47:21.952774 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 8 23:47:21.952908 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 8 23:47:21.953063 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 8 23:47:21.953199 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Sep 8 23:47:21.953332 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 8 23:47:21.953484 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 8 23:47:21.953661 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 8 23:47:21.953806 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Sep 8 23:47:21.953941 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 8 23:47:21.954078 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 8 23:47:21.954211 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 8 23:47:21.954371 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 8 23:47:21.954531 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 8 23:47:21.954703 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 8 23:47:21.954841 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Sep 8 23:47:21.954972 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Sep 8 23:47:21.955121 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 8 23:47:21.955254 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 8 23:47:21.955264 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 8 23:47:21.955278 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 8 23:47:21.955286 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 8 23:47:21.955294 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 8 23:47:21.955301 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 8 23:47:21.955310 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 8 23:47:21.955321 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 8 23:47:21.955329 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 8 23:47:21.955337 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 8 23:47:21.955345 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 8 23:47:21.955355 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 8 23:47:21.955363 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 8 23:47:21.955371 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 8 23:47:21.955379 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 8 23:47:21.955387 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 8 23:47:21.955394 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 8 23:47:21.955402 kernel: iommu: Default domain type: Translated
Sep 8 23:47:21.955410 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 8 23:47:21.955418 kernel: PCI: Using ACPI for IRQ routing
Sep 8 23:47:21.955429 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 8 23:47:21.955444 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 8 23:47:21.955452 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 8 23:47:21.955597 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 8 23:47:21.955749 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 8 23:47:21.955886 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 8 23:47:21.955897 kernel: vgaarb: loaded
Sep 8 23:47:21.955906 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 8 23:47:21.955918 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 8 23:47:21.955926 kernel: clocksource: Switched to clocksource kvm-clock
Sep 8 23:47:21.955934 kernel: VFS: Disk quotas dquot_6.6.0
Sep 8 23:47:21.955942 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 8 23:47:21.955950 kernel: pnp: PnP ACPI init
Sep 8 23:47:21.956145 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 8 23:47:21.956158 kernel: pnp: PnP ACPI: found 6 devices
Sep 8 23:47:21.956167 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 8 23:47:21.956179 kernel: NET: Registered PF_INET protocol family
Sep 8 23:47:21.956187 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 8 23:47:21.956195 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 8 23:47:21.956203 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 8 23:47:21.956211 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 8 23:47:21.956219 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 8 23:47:21.956227 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 8 23:47:21.956235 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 8 23:47:21.956243 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 8 23:47:21.956254 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 8 23:47:21.956261 kernel: NET: Registered PF_XDP protocol family
Sep 8 23:47:21.956387 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 8 23:47:21.956526 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 8 23:47:21.956666 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 8 23:47:21.956790 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 8 23:47:21.956911 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 8 23:47:21.957032 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 8 23:47:21.957055 kernel: PCI: CLS 0 bytes, default 64
Sep 8 23:47:21.957064 kernel: Initialise system trusted keyrings
Sep 8 23:47:21.957072 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 8 23:47:21.957080 kernel: Key type asymmetric registered
Sep 8 23:47:21.957088 kernel: Asymmetric key parser 'x509' registered
Sep 8 23:47:21.957096 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 8 23:47:21.957104 kernel: io scheduler mq-deadline registered
Sep 8 23:47:21.957112 kernel: io scheduler kyber registered
Sep 8 23:47:21.957121 kernel: io scheduler bfq registered
Sep 8 23:47:21.957132 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 8 23:47:21.957140 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 8 23:47:21.957148 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 8 23:47:21.957156 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 8 23:47:21.957165 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 8 23:47:21.957173 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 8 23:47:21.957181 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 8 23:47:21.957189 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 8 23:47:21.957196 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 8 23:47:21.957342 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 8 23:47:21.957358 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 8 23:47:21.957496 kernel: rtc_cmos 00:04: registered as rtc0
Sep 8 23:47:21.957636 kernel: rtc_cmos 00:04: setting system clock to 2025-09-08T23:47:21 UTC (1757375241)
Sep 8 23:47:21.957780 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 8 23:47:21.957791 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 8 23:47:21.957799 kernel: NET: Registered PF_INET6 protocol family
Sep 8 23:47:21.957807 kernel: Segment Routing with IPv6
Sep 8 23:47:21.957819 kernel: In-situ OAM (IOAM) with IPv6
Sep 8 23:47:21.957827 kernel: NET: Registered PF_PACKET protocol family
Sep 8 23:47:21.957835 kernel: Key type dns_resolver registered
Sep 8 23:47:21.957843 kernel: IPI shorthand broadcast: enabled
Sep 8 23:47:21.957851 kernel: sched_clock: Marking stable (664003312, 102188101)->(842434606, -76243193)
Sep 8 23:47:21.957859 kernel: registered taskstats version 1
Sep 8 23:47:21.957867 kernel: Loading compiled-in X.509 certificates
Sep 8 23:47:21.957875 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: c16a276a56169aed770943c7e14b6e7e5f4f7133'
Sep 8 23:47:21.957883 kernel: Key type .fscrypt registered
Sep 8 23:47:21.957893 kernel: Key type fscrypt-provisioning registered
Sep 8 23:47:21.957902 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 8 23:47:21.957910 kernel: ima: Allocated hash algorithm: sha1
Sep 8 23:47:21.957917 kernel: ima: No architecture policies found
Sep 8 23:47:21.957925 kernel: clk: Disabling unused clocks
Sep 8 23:47:21.957933 kernel: Freeing unused kernel image (initmem) memory: 43504K
Sep 8 23:47:21.957941 kernel: Write protecting the kernel read-only data: 38912k
Sep 8 23:47:21.957949 kernel: Freeing unused kernel image (rodata/data gap) memory: 1708K
Sep 8 23:47:21.957957 kernel: Run /init as init process
Sep 8 23:47:21.957968 kernel: with arguments:
Sep 8 23:47:21.957976 kernel: /init
Sep 8 23:47:21.957983 kernel: with environment:
Sep 8 23:47:21.957991 kernel: HOME=/
Sep 8 23:47:21.957999 kernel: TERM=linux
Sep 8 23:47:21.958006 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 8 23:47:21.958015 systemd[1]: Successfully made /usr/ read-only.
Sep 8 23:47:21.958027 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 8 23:47:21.958038 systemd[1]: Detected virtualization kvm.
Sep 8 23:47:21.958047 systemd[1]: Detected architecture x86-64.
Sep 8 23:47:21.958055 systemd[1]: Running in initrd.
Sep 8 23:47:21.958063 systemd[1]: No hostname configured, using default hostname.
Sep 8 23:47:21.958072 systemd[1]: Hostname set to .
Sep 8 23:47:21.958080 systemd[1]: Initializing machine ID from VM UUID.
Sep 8 23:47:21.958089 systemd[1]: Queued start job for default target initrd.target.
Sep 8 23:47:21.958097 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 8 23:47:21.958109 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 8 23:47:21.958131 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 8 23:47:21.958143 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 8 23:47:21.958152 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 8 23:47:21.958161 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
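The rtc_cmos entry above records the same instant twice, as an ISO timestamp and as a Unix epoch, and the two agree; the earlier audit entry, audit(1757375241.649:1), falls in the same second. A one-liner to confirm the conversion:

```python
from datetime import datetime, timezone

# "rtc_cmos 00:04: setting system clock to 2025-09-08T23:47:21 UTC (1757375241)"
print(datetime.fromtimestamp(1757375241, tz=timezone.utc).isoformat())
# -> 2025-09-08T23:47:21+00:00
```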
Sep 8 23:47:21.958174 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 8 23:47:21.958183 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 8 23:47:21.958191 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 8 23:47:21.958200 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 8 23:47:21.958209 systemd[1]: Reached target paths.target - Path Units.
Sep 8 23:47:21.958218 systemd[1]: Reached target slices.target - Slice Units.
Sep 8 23:47:21.958226 systemd[1]: Reached target swap.target - Swaps.
Sep 8 23:47:21.958235 systemd[1]: Reached target timers.target - Timer Units.
Sep 8 23:47:21.958246 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 8 23:47:21.958255 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 8 23:47:21.958264 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 8 23:47:21.958272 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 8 23:47:21.958281 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 8 23:47:21.958290 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 8 23:47:21.958299 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 8 23:47:21.958307 systemd[1]: Reached target sockets.target - Socket Units.
Sep 8 23:47:21.958316 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 8 23:47:21.958327 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 8 23:47:21.958336 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 8 23:47:21.958345 systemd[1]: Starting systemd-fsck-usr.service...
Sep 8 23:47:21.958354 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 8 23:47:21.958362 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 8 23:47:21.958371 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 8 23:47:21.958380 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 8 23:47:21.958388 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 8 23:47:21.958400 systemd[1]: Finished systemd-fsck-usr.service.
Sep 8 23:47:21.958449 systemd-journald[193]: Collecting audit messages is disabled.
Sep 8 23:47:21.958478 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 8 23:47:21.958490 systemd-journald[193]: Journal started
Sep 8 23:47:21.958518 systemd-journald[193]: Runtime Journal (/run/log/journal/c40d8d77c3ec44699232fe487784c46e) is 6M, max 48.4M, 42.3M free.
Sep 8 23:47:21.941291 systemd-modules-load[195]: Inserted module 'overlay'
Sep 8 23:47:21.980762 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 8 23:47:21.980791 kernel: Bridge firewalling registered
Sep 8 23:47:21.980803 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 8 23:47:21.970203 systemd-modules-load[195]: Inserted module 'br_netfilter'
Sep 8 23:47:21.980219 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 8 23:47:21.982079 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 8 23:47:21.995281 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 8 23:47:21.998466 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 8 23:47:22.001161 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 8 23:47:22.002600 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 8 23:47:22.005823 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 8 23:47:22.019744 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 8 23:47:22.020075 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 8 23:47:22.025673 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 8 23:47:22.031784 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 8 23:47:22.034118 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 8 23:47:22.037874 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 8 23:47:22.055452 dracut-cmdline[233]: dracut-dracut-053
Sep 8 23:47:22.058680 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=614c4ef85422d1b24559f161a4ad89cb626bb862dd1c761ed2d77c8a0665a1ae
Sep 8 23:47:22.078457 systemd-resolved[229]: Positive Trust Anchors:
Sep 8 23:47:22.078472 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 8 23:47:22.078503 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 8 23:47:22.081224 systemd-resolved[229]: Defaulting to hostname 'linux'.
Sep 8 23:47:22.082594 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 8 23:47:22.088063 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 8 23:47:22.153681 kernel: SCSI subsystem initialized
Sep 8 23:47:22.162657 kernel: Loading iSCSI transport class v2.0-870.
Sep 8 23:47:22.173681 kernel: iscsi: registered transport (tcp)
Sep 8 23:47:22.195688 kernel: iscsi: registered transport (qla4xxx)
Sep 8 23:47:22.195782 kernel: QLogic iSCSI HBA Driver
Sep 8 23:47:22.248770 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 8 23:47:22.255850 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 8 23:47:22.284547 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 8 23:47:22.284635 kernel: device-mapper: uevent: version 1.0.3
Sep 8 23:47:22.284664 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 8 23:47:22.329684 kernel: raid6: avx2x4 gen() 24648 MB/s
Sep 8 23:47:22.346672 kernel: raid6: avx2x2 gen() 24024 MB/s
Sep 8 23:47:22.363728 kernel: raid6: avx2x1 gen() 23375 MB/s
Sep 8 23:47:22.363754 kernel: raid6: using algorithm avx2x4 gen() 24648 MB/s
Sep 8 23:47:22.381737 kernel: raid6: .... xor() 6777 MB/s, rmw enabled
Sep 8 23:47:22.381769 kernel: raid6: using avx2x2 recovery algorithm
Sep 8 23:47:22.402668 kernel: xor: automatically using best checksumming function avx
Sep 8 23:47:22.561699 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 8 23:47:22.577000 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 8 23:47:22.587019 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 8 23:47:22.603069 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Sep 8 23:47:22.609965 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 8 23:47:22.618850 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 8 23:47:22.635846 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation
Sep 8 23:47:22.674011 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 8 23:47:22.687953 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 8 23:47:22.768237 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 8 23:47:22.777892 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 8 23:47:22.791209 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 8 23:47:22.792050 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 8 23:47:22.798237 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 8 23:47:22.800758 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 8 23:47:22.808854 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 8 23:47:22.825955 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 8 23:47:22.830690 kernel: cryptd: max_cpu_qlen set to 1000
Sep 8 23:47:22.830724 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 8 23:47:22.834058 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 8 23:47:22.839271 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 8 23:47:22.839286 kernel: GPT:9289727 != 19775487
Sep 8 23:47:22.839297 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 8 23:47:22.839314 kernel: GPT:9289727 != 19775487
Sep 8 23:47:22.841407 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 8 23:47:22.841447 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 8 23:47:22.841462 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 8 23:47:22.841663 kernel: AES CTR mode by8 optimization enabled
Sep 8 23:47:22.864747 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 8 23:47:22.865021 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 8 23:47:22.868861 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 8 23:47:22.873420 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
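The GPT complaints above are benign here: a GPT backup header belongs on the disk's last LBA, but this image was written for a smaller disk and the virtual disk is larger, so the header still sits where the old end used to be. Relocating it (typically with Parted, as the kernel suggests, or sgdisk's move-to-end option) is the usual fix; in this boot the disk-uuid.service step further down rewrites the headers. The arithmetic, using the numbers from the log:

```python
# The GPT warning in numbers: the backup header's expected location is the
# last LBA of the disk, but the image carries it at a smaller disk's end.
SECTOR = 512
disk_sectors = 19775488          # "virtio_blk virtio1: [vda] 19775488 512-byte logical blocks"
expected_alt = disk_sectors - 1  # last LBA: 19775487
found_alt = 9289727              # from "GPT:9289727 != 19775487"

print(f"backup header expected at LBA {expected_alt}, found at {found_alt}")
print(f"image was built for a ~{(found_alt + 1) * SECTOR / 2**30:.2f} GiB disk")  # ~4.43 GiB
```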
Sep 8 23:47:22.875710 kernel: libata version 3.00 loaded.
Sep 8 23:47:22.874828 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 8 23:47:22.880959 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (484)
Sep 8 23:47:22.877103 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 8 23:47:22.910686 kernel: BTRFS: device fsid 49c9ae6f-f48b-4b7d-8773-9ddfd8ce7dbf devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (470)
Sep 8 23:47:22.923132 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 8 23:47:22.928759 kernel: ahci 0000:00:1f.2: version 3.0
Sep 8 23:47:22.930696 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 8 23:47:22.933175 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 8 23:47:22.933419 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 8 23:47:22.938665 kernel: scsi host0: ahci
Sep 8 23:47:22.939659 kernel: scsi host1: ahci
Sep 8 23:47:22.941781 kernel: scsi host2: ahci
Sep 8 23:47:22.941970 kernel: scsi host3: ahci
Sep 8 23:47:22.942778 kernel: scsi host4: ahci
Sep 8 23:47:22.945092 kernel: scsi host5: ahci
Sep 8 23:47:22.945309 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Sep 8 23:47:22.945322 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Sep 8 23:47:22.945333 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Sep 8 23:47:22.945344 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Sep 8 23:47:22.945362 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Sep 8 23:47:22.945373 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Sep 8 23:47:22.949855 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 8 23:47:22.985551 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 8 23:47:22.998525 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 8 23:47:23.017354 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 8 23:47:23.026829 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 8 23:47:23.029321 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 8 23:47:23.053777 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 8 23:47:23.057128 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 8 23:47:23.075152 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 8 23:47:23.082827 disk-uuid[569]: Primary Header is updated.
Sep 8 23:47:23.082827 disk-uuid[569]: Secondary Entries is updated.
Sep 8 23:47:23.082827 disk-uuid[569]: Secondary Header is updated.
Sep 8 23:47:23.087658 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 8 23:47:23.093664 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 8 23:47:23.253679 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 8 23:47:23.256462 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 8 23:47:23.256549 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 8 23:47:23.256564 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 8 23:47:23.257366 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 8 23:47:23.257393 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 8 23:47:23.258660 kernel: ata3.00: applying bridge limits
Sep 8 23:47:23.258676 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 8 23:47:23.259675 kernel: ata3.00: configured for UDMA/100
Sep 8 23:47:23.260661 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 8 23:47:23.334079 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 8 23:47:23.334560 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 8 23:47:23.346960 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 8 23:47:24.111692 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 8 23:47:24.112011 disk-uuid[579]: The operation has completed successfully.
Sep 8 23:47:24.153177 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 8 23:47:24.153357 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 8 23:47:24.208863 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 8 23:47:24.213408 sh[594]: Success
Sep 8 23:47:24.225710 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 8 23:47:24.266823 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 8 23:47:24.278743 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 8 23:47:24.281191 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 8 23:47:24.294743 kernel: BTRFS info (device dm-0): first mount of filesystem 49c9ae6f-f48b-4b7d-8773-9ddfd8ce7dbf
Sep 8 23:47:24.294794 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 8 23:47:24.294811 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 8 23:47:24.295714 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 8 23:47:24.296985 kernel: BTRFS info (device dm-0): using free space tree
Sep 8 23:47:24.301536 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 8 23:47:24.303849 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 8 23:47:24.314848 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 8 23:47:24.355471 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 8 23:47:24.375356 kernel: BTRFS info (device vda6): first mount of filesystem b6f932a0-9de5-471f-a098-137127c01576
Sep 8 23:47:24.375436 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 8 23:47:24.375453 kernel: BTRFS info (device vda6): using free space tree
Sep 8 23:47:24.378670 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 8 23:47:24.384669 kernel: BTRFS info (device vda6): last unmount of filesystem b6f932a0-9de5-471f-a098-137127c01576
Sep 8 23:47:24.473380 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
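Between disk-uuid and the Btrfs mount above, verity-setup maps the /usr partition through dm-verity: each 4 KiB data block is hashed into a Merkle tree whose root must equal the verity.usrhash value on the kernel command line, and that is what the "verity: sha256 using implementation \"sha256-ni\"" line is doing. The sketch below is conceptual only, assuming unsalted SHA-256 and ignoring Flatcar's exact on-disk hash layout, so it does not reproduce the logged usrhash; it just shows the tree-of-hashes idea:

```python
import hashlib

# Conceptual sketch: hash each 4096-byte block, then hash concatenated
# digests level by level until one root digest remains. Real dm-verity also
# salts every hash and lays the tree out on disk, so this does NOT compute
# the verity.usrhash value from the command line above.
BLOCK = 4096
FANOUT = BLOCK // 32  # 128 SHA-256 digests fit in one 4 KiB hash block

def toy_verity_root(data: bytes) -> str:
    level = [hashlib.sha256(data[i:i + BLOCK].ljust(BLOCK, b"\0")).digest()
             for i in range(0, len(data), BLOCK)]
    while len(level) > 1:
        level = [hashlib.sha256(b"".join(level[i:i + FANOUT])).digest()
                 for i in range(0, len(level), FANOUT)]
    return level[0].hex()

print(toy_verity_root(b"usr-partition-contents" * 4096))
```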
Sep 8 23:47:24.490809 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 8 23:47:24.527482 systemd-networkd[770]: lo: Link UP
Sep 8 23:47:24.527498 systemd-networkd[770]: lo: Gained carrier
Sep 8 23:47:24.531810 systemd-networkd[770]: Enumeration completed
Sep 8 23:47:24.532008 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 8 23:47:24.534995 systemd[1]: Reached target network.target - Network.
Sep 8 23:47:24.536874 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 8 23:47:24.536884 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 8 23:47:24.586856 systemd-networkd[770]: eth0: Link UP
Sep 8 23:47:24.586866 systemd-networkd[770]: eth0: Gained carrier
Sep 8 23:47:24.586881 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 8 23:47:24.618709 systemd-networkd[770]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 8 23:47:24.697131 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 8 23:47:24.704818 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 8 23:47:24.713959 systemd-resolved[229]: Detected conflict on linux IN A 10.0.0.19
Sep 8 23:47:24.713974 systemd-resolved[229]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
Sep 8 23:47:24.789602 ignition[776]: Ignition 2.20.0
Sep 8 23:47:24.789614 ignition[776]: Stage: fetch-offline
Sep 8 23:47:24.789677 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Sep 8 23:47:24.789688 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:47:24.789795 ignition[776]: parsed url from cmdline: ""
Sep 8 23:47:24.789800 ignition[776]: no config URL provided
Sep 8 23:47:24.789805 ignition[776]: reading system config file "/usr/lib/ignition/user.ign"
Sep 8 23:47:24.789815 ignition[776]: no config at "/usr/lib/ignition/user.ign"
Sep 8 23:47:24.789842 ignition[776]: op(1): [started] loading QEMU firmware config module
Sep 8 23:47:24.789848 ignition[776]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 8 23:47:24.798181 ignition[776]: op(1): [finished] loading QEMU firmware config module
Sep 8 23:47:24.838201 ignition[776]: parsing config with SHA512: 8697998e0163176c0e90baaaeef5b9427a6bb8450774f9bdc16f133ec40e3961becd95e0400040cebf76ce31c88f5c76e2c2d82235c071b23e17fbc41241b886
Sep 8 23:47:24.843764 unknown[776]: fetched base config from "system"
Sep 8 23:47:24.843777 unknown[776]: fetched user config from "qemu"
Sep 8 23:47:24.844150 ignition[776]: fetch-offline: fetch-offline passed
Sep 8 23:47:24.844232 ignition[776]: Ignition finished successfully
Sep 8 23:47:24.846926 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 8 23:47:24.849399 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 8 23:47:24.860842 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
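The fetch-offline stage above fingerprints the config it received over QEMU's fw_cfg interface ("parsing config with SHA512: 8697…"). That fingerprint is simply the SHA-512 of the raw config bytes, reproducible in a couple of lines; the file path below is hypothetical, since in this boot the config arrives via the qemu_fw_cfg module rather than as an ordinary file:

```python
import hashlib
import pathlib

# Recompute the digest Ignition logs as "parsing config with SHA512: ...".
# "config.ign" is a stand-in path for the raw config bytes.
config = pathlib.Path("config.ign").read_bytes()
print(hashlib.sha512(config).hexdigest())
```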
Sep 8 23:47:24.888299 ignition[784]: Ignition 2.20.0
Sep 8 23:47:24.888314 ignition[784]: Stage: kargs
Sep 8 23:47:24.888580 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Sep 8 23:47:24.888595 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:47:24.889732 ignition[784]: kargs: kargs passed
Sep 8 23:47:24.889804 ignition[784]: Ignition finished successfully
Sep 8 23:47:24.893222 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 8 23:47:24.904813 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 8 23:47:24.922212 ignition[792]: Ignition 2.20.0
Sep 8 23:47:24.922224 ignition[792]: Stage: disks
Sep 8 23:47:24.922397 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Sep 8 23:47:24.922409 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:47:24.925515 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 8 23:47:24.923296 ignition[792]: disks: disks passed
Sep 8 23:47:24.927118 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 8 23:47:24.923341 ignition[792]: Ignition finished successfully
Sep 8 23:47:24.929157 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 8 23:47:24.931341 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 8 23:47:24.931431 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 8 23:47:24.932008 systemd[1]: Reached target basic.target - Basic System.
Sep 8 23:47:24.946960 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 8 23:47:24.961835 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 8 23:47:24.996117 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 8 23:47:25.009836 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 8 23:47:25.187692 kernel: EXT4-fs (vda9): mounted filesystem 4436772e-5166-41e3-9cb5-50bbb91cbcf6 r/w with ordered data mode. Quota mode: none.
Sep 8 23:47:25.188906 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 8 23:47:25.190341 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 8 23:47:25.200732 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 8 23:47:25.202753 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 8 23:47:25.204021 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 8 23:47:25.204081 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 8 23:47:25.232042 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (811)
Sep 8 23:47:25.232092 kernel: BTRFS info (device vda6): first mount of filesystem b6f932a0-9de5-471f-a098-137127c01576
Sep 8 23:47:25.204117 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 8 23:47:25.236955 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 8 23:47:25.236974 kernel: BTRFS info (device vda6): using free space tree
Sep 8 23:47:25.210606 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 8 23:47:25.232931 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 8 23:47:25.241704 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 8 23:47:25.243335 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 8 23:47:25.273053 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Sep 8 23:47:25.277121 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Sep 8 23:47:25.282135 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Sep 8 23:47:25.285923 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 8 23:47:25.377924 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 8 23:47:25.393766 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 8 23:47:25.396425 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 8 23:47:25.403942 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 8 23:47:25.405392 kernel: BTRFS info (device vda6): last unmount of filesystem b6f932a0-9de5-471f-a098-137127c01576
Sep 8 23:47:25.513903 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 8 23:47:25.740676 ignition[928]: INFO : Ignition 2.20.0
Sep 8 23:47:25.740676 ignition[928]: INFO : Stage: mount
Sep 8 23:47:25.742846 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 8 23:47:25.742846 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:47:25.745795 ignition[928]: INFO : mount: mount passed
Sep 8 23:47:25.746675 ignition[928]: INFO : Ignition finished successfully
Sep 8 23:47:25.749894 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 8 23:47:25.762995 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 8 23:47:25.788846 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 8 23:47:25.802547 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (938)
Sep 8 23:47:25.802624 kernel: BTRFS info (device vda6): first mount of filesystem b6f932a0-9de5-471f-a098-137127c01576
Sep 8 23:47:25.802658 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 8 23:47:25.804054 kernel: BTRFS info (device vda6): using free space tree
Sep 8 23:47:25.806677 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 8 23:47:25.809377 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 8 23:47:25.880280 ignition[955]: INFO : Ignition 2.20.0
Sep 8 23:47:25.880280 ignition[955]: INFO : Stage: files
Sep 8 23:47:25.897075 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 8 23:47:25.897075 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:47:25.897075 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Sep 8 23:47:25.897075 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 8 23:47:25.897075 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 8 23:47:25.904367 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 8 23:47:25.904367 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 8 23:47:25.904367 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 8 23:47:25.904367 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 8 23:47:25.904367 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 8 23:47:25.899808 unknown[955]: wrote ssh authorized keys file for user: core
Sep 8 23:47:25.948528 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 8 23:47:26.366937 systemd-networkd[770]: eth0: Gained IPv6LL
Sep 8 23:47:26.591185 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 8 23:47:26.591185 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 8 23:47:26.591185 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 8 23:47:26.956059 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 8 23:47:27.949949 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 8 23:47:27.952238 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 8 23:47:27.952238 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 8 23:47:27.952238 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 8 23:47:27.952238 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 8 23:47:27.952238 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 8 23:47:27.952238 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 8 23:47:27.952238 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 8 23:47:27.952238 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 8 23:47:27.952238 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 8 23:47:27.952238 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 8 23:47:27.952238 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 8 23:47:27.952238 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 8 23:47:27.952238 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 8 23:47:27.952238 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 8 23:47:28.459846 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 8 23:47:31.511050 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 8 23:47:31.511050 ignition[955]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 8 23:47:31.515476 ignition[955]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 8 23:47:31.517983 ignition[955]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 8 23:47:31.517983 ignition[955]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 8 23:47:31.517983 ignition[955]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 8 23:47:31.517983 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 8 23:47:31.517983 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 8 23:47:31.517983 ignition[955]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 8 23:47:31.517983 ignition[955]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 8 23:47:31.548842 ignition[955]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 8 23:47:31.598622 ignition[955]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 8 23:47:31.600377 ignition[955]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 8 23:47:31.600377 ignition[955]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 8 23:47:31.600377 ignition[955]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 8 23:47:31.600377 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 8 23:47:31.600377 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [finished] 
writing file "/sysroot/etc/.ignition-result.json" Sep 8 23:47:31.600377 ignition[955]: INFO : files: files passed Sep 8 23:47:31.600377 ignition[955]: INFO : Ignition finished successfully Sep 8 23:47:31.613752 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 8 23:47:31.625859 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 8 23:47:31.626806 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 8 23:47:31.635316 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 8 23:47:31.635455 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 8 23:47:31.642148 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Sep 8 23:47:31.646575 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:47:31.648480 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:47:31.716441 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:47:31.719087 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 8 23:47:31.721221 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 8 23:47:31.730364 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 8 23:47:31.885352 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 8 23:47:31.885497 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 8 23:47:31.938991 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 8 23:47:31.941137 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 8 23:47:31.943616 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 8 23:47:31.945053 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 8 23:47:31.966438 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 8 23:47:31.984015 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 8 23:47:31.994333 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 8 23:47:31.995796 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:47:31.995948 systemd[1]: Stopped target timers.target - Timer Units. Sep 8 23:47:32.095802 ignition[1010]: INFO : Ignition 2.20.0 Sep 8 23:47:32.095802 ignition[1010]: INFO : Stage: umount Sep 8 23:47:32.095802 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:47:32.095802 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:47:32.095802 ignition[1010]: INFO : umount: umount passed Sep 8 23:47:32.095802 ignition[1010]: INFO : Ignition finished successfully Sep 8 23:47:31.996370 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 8 23:47:31.996507 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 8 23:47:31.997129 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 8 23:47:31.997495 systemd[1]: Stopped target basic.target - Basic System. Sep 8 23:47:31.998010 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Sep 8 23:47:31.998394 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 8 23:47:31.998775 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 8 23:47:31.999187 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 8 23:47:31.999502 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 8 23:47:32.000000 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 8 23:47:32.000322 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 8 23:47:32.000638 systemd[1]: Stopped target swap.target - Swaps. Sep 8 23:47:32.000991 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 8 23:47:32.001117 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 8 23:47:32.001892 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:47:32.002235 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 8 23:47:32.002540 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 8 23:47:32.002694 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 8 23:47:32.003115 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 8 23:47:32.003275 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 8 23:47:32.003891 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 8 23:47:32.004027 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 8 23:47:32.004553 systemd[1]: Stopped target paths.target - Path Units. Sep 8 23:47:32.005006 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 8 23:47:32.009839 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 8 23:47:32.010594 systemd[1]: Stopped target slices.target - Slice Units. Sep 8 23:47:32.011022 systemd[1]: Stopped target sockets.target - Socket Units. Sep 8 23:47:32.011428 systemd[1]: iscsid.socket: Deactivated successfully. Sep 8 23:47:32.011571 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 8 23:47:32.011995 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 8 23:47:32.012115 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 8 23:47:32.012623 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 8 23:47:32.012818 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 8 23:47:32.013282 systemd[1]: ignition-files.service: Deactivated successfully. Sep 8 23:47:32.013422 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 8 23:47:32.015208 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 8 23:47:32.016432 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 8 23:47:32.016691 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 8 23:47:32.016833 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:47:32.017253 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 8 23:47:32.017383 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 8 23:47:32.022441 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 8 23:47:32.022552 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Sep 8 23:47:32.078565 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 8 23:47:32.078736 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 8 23:47:32.081108 systemd[1]: Stopped target network.target - Network. Sep 8 23:47:32.081407 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 8 23:47:32.081478 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 8 23:47:32.082096 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 8 23:47:32.082166 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 8 23:47:32.082989 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 8 23:47:32.083058 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 8 23:47:32.083403 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 8 23:47:32.083469 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 8 23:47:32.084011 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 8 23:47:32.084586 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 8 23:47:32.091579 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 8 23:47:32.091771 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 8 23:47:32.097143 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 8 23:47:32.099555 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 8 23:47:32.099663 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 8 23:47:32.103182 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 8 23:47:32.103337 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 8 23:47:32.103973 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 8 23:47:32.104120 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 8 23:47:32.108630 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 8 23:47:32.110222 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 8 23:47:32.110373 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 8 23:47:32.112924 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 8 23:47:32.113006 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 8 23:47:32.114803 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 8 23:47:32.114880 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 8 23:47:32.121764 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 8 23:47:32.123575 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 8 23:47:32.123678 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 8 23:47:32.126044 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 8 23:47:32.126112 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:47:32.128365 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 8 23:47:32.128428 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 8 23:47:32.130625 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:47:32.134256 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Sep 8 23:47:32.141661 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 8 23:47:32.141896 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:47:32.143872 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 8 23:47:32.143991 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 8 23:47:32.145807 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 8 23:47:32.319688 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Sep 8 23:47:32.145873 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 8 23:47:32.148057 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 8 23:47:32.148107 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:47:32.150243 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 8 23:47:32.150304 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 8 23:47:32.152320 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 8 23:47:32.152372 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 8 23:47:32.154182 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 8 23:47:32.154239 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:47:32.168057 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 8 23:47:32.169834 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 8 23:47:32.169937 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 8 23:47:32.173132 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 8 23:47:32.173207 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 8 23:47:32.175406 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 8 23:47:32.175463 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 8 23:47:32.177427 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 8 23:47:32.177480 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:47:32.180479 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 8 23:47:32.180548 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 8 23:47:32.180992 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 8 23:47:32.181109 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 8 23:47:32.183663 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 8 23:47:32.260976 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 8 23:47:32.270633 systemd[1]: Switching root. 
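[editor's note] The initrd teardown above is dozens of paired "Deactivated successfully" / "Stopped ..." records ending in the switch-root. A small sketch, assuming the systemd message shapes seen in this log, that recovers the stop order from raw journal text:

    import re

    # Matches "systemd[1]: Stopped <unit> - <description>." and the socket
    # variant "Closed ...", tolerating the "Stopped target x.target" form.
    STOPPED = re.compile(r"systemd\[1\]: (?:Stopped|Closed) (?:target )?(\S+) - (.+?)\.")

    def teardown_order(journal_text: str):
        return [m.groups() for m in STOPPED.finditer(journal_text)]

    sample = ("Sep 8 23:47:32.150243 systemd[1]: Stopped dracut-pre-udev.service "
              "- dracut pre-udev hook. Sep 8 23:47:32.152320 systemd[1]: "
              "Stopped dracut-cmdline.service - dracut cmdline hook.")
    for unit, desc in teardown_order(sample):
        print(f"{unit}: {desc}")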
Sep 8 23:47:32.349776 systemd-journald[193]: Journal stopped Sep 8 23:47:35.088562 kernel: SELinux: policy capability network_peer_controls=1 Sep 8 23:47:35.088701 kernel: SELinux: policy capability open_perms=1 Sep 8 23:47:35.088730 kernel: SELinux: policy capability extended_socket_class=1 Sep 8 23:47:35.088789 kernel: SELinux: policy capability always_check_network=0 Sep 8 23:47:35.088812 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 8 23:47:35.088836 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 8 23:47:35.088854 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 8 23:47:35.088869 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 8 23:47:35.088883 kernel: audit: type=1403 audit(1757375253.614:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 8 23:47:35.088902 systemd[1]: Successfully loaded SELinux policy in 62.548ms. Sep 8 23:47:35.088921 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 22.731ms. Sep 8 23:47:35.088947 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 8 23:47:35.088964 systemd[1]: Detected virtualization kvm. Sep 8 23:47:35.088979 systemd[1]: Detected architecture x86-64. Sep 8 23:47:35.088995 systemd[1]: Detected first boot. Sep 8 23:47:35.089011 systemd[1]: Initializing machine ID from VM UUID. Sep 8 23:47:35.089027 zram_generator::config[1056]: No configuration found. Sep 8 23:47:35.089053 kernel: Guest personality initialized and is inactive Sep 8 23:47:35.089080 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 8 23:47:35.089096 kernel: Initialized host personality Sep 8 23:47:35.089118 kernel: NET: Registered PF_VSOCK protocol family Sep 8 23:47:35.089134 systemd[1]: Populated /etc with preset unit settings. Sep 8 23:47:35.089151 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 8 23:47:35.089166 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 8 23:47:35.089182 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 8 23:47:35.089198 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 8 23:47:35.089215 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 8 23:47:35.089236 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 8 23:47:35.089252 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 8 23:47:35.089278 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 8 23:47:35.089294 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 8 23:47:35.089310 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 8 23:47:35.089326 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 8 23:47:35.089343 systemd[1]: Created slice user.slice - User and Session Slice. Sep 8 23:47:35.089359 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 8 23:47:35.089375 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
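[editor's note] The long "+PAM +AUDIT ... -APPARMOR" string above is systemd's compile-time feature list: '+' means built with, '-' means built without. A trivial sketch splitting it; the flags below are a subset copied from the record above.

    def parse_features(flags: str):
        toks = flags.split()
        enabled = sorted(t[1:] for t in toks if t.startswith("+"))
        disabled = sorted(t[1:] for t in toks if t.startswith("-"))
        return enabled, disabled

    flags = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GNUTLS +OPENSSL -ACL"
    on, off = parse_features(flags)
    print("built with:   ", " ".join(on))
    print("built without:", " ".join(off))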
Sep 8 23:47:35.089392 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 8 23:47:35.089411 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 8 23:47:35.089427 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 8 23:47:35.089443 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 8 23:47:35.089459 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 8 23:47:35.089479 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 8 23:47:35.089495 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 8 23:47:35.089510 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 8 23:47:35.089526 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 8 23:47:35.089545 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 8 23:47:35.089561 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:47:35.089578 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 8 23:47:35.089598 systemd[1]: Reached target slices.target - Slice Units. Sep 8 23:47:35.089614 systemd[1]: Reached target swap.target - Swaps. Sep 8 23:47:35.089630 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 8 23:47:35.089663 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 8 23:47:35.089680 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 8 23:47:35.089696 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 8 23:47:35.089718 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 8 23:47:35.089734 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:47:35.089807 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 8 23:47:35.089835 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 8 23:47:35.089851 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 8 23:47:35.089867 systemd[1]: Mounting media.mount - External Media Directory... Sep 8 23:47:35.089883 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:47:35.089899 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 8 23:47:35.089914 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 8 23:47:35.089936 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 8 23:47:35.089953 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 8 23:47:35.089969 systemd[1]: Reached target machines.target - Containers. Sep 8 23:47:35.089985 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 8 23:47:35.090001 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:47:35.090018 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 8 23:47:35.090038 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Sep 8 23:47:35.090054 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:47:35.090085 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 8 23:47:35.090103 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:47:35.090119 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 8 23:47:35.090136 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:47:35.090153 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 8 23:47:35.090174 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 8 23:47:35.090191 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 8 23:47:35.090220 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 8 23:47:35.090255 systemd[1]: Stopped systemd-fsck-usr.service. Sep 8 23:47:35.090289 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:47:35.090306 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 8 23:47:35.090323 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 8 23:47:35.090376 systemd-journald[1120]: Collecting audit messages is disabled. Sep 8 23:47:35.090423 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 8 23:47:35.090450 systemd-journald[1120]: Journal started Sep 8 23:47:35.090485 systemd-journald[1120]: Runtime Journal (/run/log/journal/c40d8d77c3ec44699232fe487784c46e) is 6M, max 48.4M, 42.3M free. Sep 8 23:47:34.670008 systemd[1]: Queued start job for default target multi-user.target. Sep 8 23:47:34.690868 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 8 23:47:34.691620 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 8 23:47:34.692301 systemd[1]: systemd-journald.service: Consumed 1.207s CPU time. Sep 8 23:47:35.099815 kernel: loop: module loaded Sep 8 23:47:35.117497 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 8 23:47:35.137099 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 8 23:47:35.154679 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 8 23:47:35.154842 systemd[1]: verity-setup.service: Deactivated successfully. Sep 8 23:47:35.154874 systemd[1]: Stopped verity-setup.service. Sep 8 23:47:35.164691 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:47:35.166665 kernel: fuse: init (API version 7.39) Sep 8 23:47:35.178898 systemd[1]: Started systemd-journald.service - Journal Service. Sep 8 23:47:35.180191 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 8 23:47:35.181787 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 8 23:47:35.184173 systemd[1]: Mounted media.mount - External Media Directory. Sep 8 23:47:35.185787 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 8 23:47:35.191779 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Sep 8 23:47:35.193506 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 8 23:47:35.195684 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 8 23:47:35.198141 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 8 23:47:35.198434 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 8 23:47:35.200166 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:47:35.200527 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:47:35.202366 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:47:35.202657 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:47:35.204424 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 8 23:47:35.204736 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 8 23:47:35.206477 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:47:35.206925 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:47:35.208691 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 8 23:47:35.210409 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 8 23:47:35.212233 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 8 23:47:35.214143 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 8 23:47:35.217688 kernel: ACPI: bus type drm_connector registered Sep 8 23:47:35.218749 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 8 23:47:35.218995 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 8 23:47:35.233708 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 8 23:47:35.246750 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 8 23:47:35.249365 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 8 23:47:35.250505 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 8 23:47:35.250541 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 8 23:47:35.252671 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 8 23:47:35.255147 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 8 23:47:35.258993 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 8 23:47:35.260319 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:47:35.262086 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 8 23:47:35.267232 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 8 23:47:35.309822 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 8 23:47:35.312859 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 8 23:47:35.332392 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 8 23:47:35.336889 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
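[editor's note] The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop services above are all instances of a single systemd template unit; the text after '@' is the instance name, which the unit hands to modprobe. A sketch of the mapping (unit names copied from the records above):

    units = ["modprobe@configfs.service", "modprobe@dm_mod.service",
             "modprobe@drm.service", "modprobe@efi_pstore.service",
             "modprobe@fuse.service", "modprobe@loop.service"]

    for unit in units:
        # Instance name = text between '@' and '.service'.
        module = unit.split("@", 1)[1].removesuffix(".service")
        print(f"{unit} -> modprobe {module}")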
Sep 8 23:47:35.342908 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 8 23:47:35.349553 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 8 23:47:35.356270 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:47:35.361836 systemd-journald[1120]: Time spent on flushing to /var/log/journal/c40d8d77c3ec44699232fe487784c46e is 21.525ms for 973 entries. Sep 8 23:47:35.361836 systemd-journald[1120]: System Journal (/var/log/journal/c40d8d77c3ec44699232fe487784c46e) is 8M, max 195.6M, 187.6M free. Sep 8 23:47:35.404416 systemd-journald[1120]: Received client request to flush runtime journal. Sep 8 23:47:35.404464 kernel: loop0: detected capacity change from 0 to 138176 Sep 8 23:47:35.359194 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 8 23:47:35.361241 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 8 23:47:35.361690 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 8 23:47:35.372189 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 8 23:47:35.384841 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 8 23:47:35.412934 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 8 23:47:35.419531 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 8 23:47:35.430664 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 8 23:47:35.434682 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:47:35.461626 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 8 23:47:35.526701 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 8 23:47:35.527565 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Sep 8 23:47:35.527996 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Sep 8 23:47:35.529912 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 8 23:47:35.532688 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 8 23:47:35.544826 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 8 23:47:35.560436 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 8 23:47:35.570195 kernel: loop1: detected capacity change from 0 to 147912 Sep 8 23:47:35.595822 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 8 23:47:35.633502 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 8 23:47:35.647667 kernel: loop2: detected capacity change from 0 to 229808 Sep 8 23:47:35.668496 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Sep 8 23:47:35.668922 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Sep 8 23:47:35.674986 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 8 23:47:35.693971 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
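[editor's note] From the journald flush record above (21.525 ms spent flushing 973 entries), the per-entry cost works out to roughly 22 microseconds; a one-liner check:

    # Numbers copied from the systemd-journald flush record above.
    flush_ms, entries = 21.525, 973
    print(f"~{flush_ms / entries * 1000:.1f} us per entry")  # ~22.1 us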
Sep 8 23:47:35.712678 kernel: loop3: detected capacity change from 0 to 138176 Sep 8 23:47:35.733885 kernel: loop4: detected capacity change from 0 to 147912 Sep 8 23:47:35.769676 kernel: loop5: detected capacity change from 0 to 229808 Sep 8 23:47:35.779832 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 8 23:47:35.781208 (sd-merge)[1203]: Merged extensions into '/usr'. Sep 8 23:47:35.788474 systemd[1]: Reload requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)... Sep 8 23:47:35.788493 systemd[1]: Reloading... Sep 8 23:47:35.876698 zram_generator::config[1234]: No configuration found. Sep 8 23:47:36.041743 ldconfig[1156]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 8 23:47:36.126492 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:47:36.198288 systemd[1]: Reloading finished in 409 ms. Sep 8 23:47:36.217700 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 8 23:47:36.219756 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 8 23:47:36.313046 systemd[1]: Starting ensure-sysext.service... Sep 8 23:47:36.317639 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 8 23:47:36.334767 systemd[1]: Reload requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)... Sep 8 23:47:36.334788 systemd[1]: Reloading... Sep 8 23:47:36.358140 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 8 23:47:36.358564 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 8 23:47:36.363539 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 8 23:47:36.364117 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Sep 8 23:47:36.364382 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Sep 8 23:47:36.373670 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. Sep 8 23:47:36.373868 systemd-tmpfiles[1269]: Skipping /boot Sep 8 23:47:36.424677 zram_generator::config[1299]: No configuration found. Sep 8 23:47:36.433789 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. Sep 8 23:47:36.433806 systemd-tmpfiles[1269]: Skipping /boot Sep 8 23:47:36.546252 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:47:36.617478 systemd[1]: Reloading finished in 282 ms. Sep 8 23:47:36.633964 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 8 23:47:36.661178 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 8 23:47:36.673700 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 8 23:47:36.677318 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 8 23:47:36.681332 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
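[editor's note] The "(sd-merge)" records above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr; the kubernetes one is the /etc/extensions/kubernetes.raw symlink Ignition wrote earlier. A hedged sketch that lists images the way sysext discovers them (the search directories are the documented defaults, and only a subset is checked here):

    import os

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in SEARCH_DIRS:
        if not os.path.isdir(d):
            continue
        for name in sorted(os.listdir(d)):
            path = os.path.join(d, name)
            # Symlinks like kubernetes.raw -> /opt/extensions/... resolve here.
            print(f"{path} -> {os.path.realpath(path)}")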
Sep 8 23:47:36.686781 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 8 23:47:36.692220 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:47:36.698808 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 8 23:47:36.704679 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:47:36.704867 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:47:36.708583 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:47:36.713859 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:47:36.720042 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:47:36.721460 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:47:36.721636 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:47:36.724558 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 8 23:47:36.725776 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:47:36.727441 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:47:36.727712 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:47:36.730494 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:47:36.730811 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:47:36.733193 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:47:36.733562 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:47:36.743537 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 8 23:47:36.750730 systemd-udevd[1342]: Using default interface naming scheme 'v255'. Sep 8 23:47:36.755850 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:47:36.756120 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:47:36.767491 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:47:36.774814 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:47:36.780213 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:47:36.781603 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:47:36.781867 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:47:36.784133 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Sep 8 23:47:36.785856 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:47:36.788971 augenrules[1372]: No rules Sep 8 23:47:36.789083 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 8 23:47:36.791404 systemd[1]: audit-rules.service: Deactivated successfully. Sep 8 23:47:36.791722 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 8 23:47:36.793854 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:47:36.794105 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:47:36.796398 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:47:36.799421 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:47:36.799721 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:47:36.802123 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:47:36.804351 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:47:36.811807 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 8 23:47:36.814497 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 8 23:47:36.822915 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 8 23:47:36.844923 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:47:36.851861 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 8 23:47:36.853162 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:47:36.855430 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:47:36.859172 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 8 23:47:36.863855 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:47:36.866291 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:47:36.868113 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:47:36.868152 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:47:36.871798 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 8 23:47:36.873306 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 8 23:47:36.873440 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:47:36.877295 systemd[1]: Finished ensure-sysext.service. Sep 8 23:47:36.879211 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 8 23:47:36.879630 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 8 23:47:36.881683 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Sep 8 23:47:36.881993 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:47:36.883974 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:47:36.884213 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:47:36.900757 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1409) Sep 8 23:47:36.904259 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:47:36.904540 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:47:36.909294 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 8 23:47:36.909379 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 8 23:47:36.926411 augenrules[1408]: /sbin/augenrules: No change Sep 8 23:47:36.929166 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 8 23:47:36.969999 augenrules[1443]: No rules Sep 8 23:47:36.988562 systemd-resolved[1340]: Positive Trust Anchors: Sep 8 23:47:36.988598 systemd-resolved[1340]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 8 23:47:36.988632 systemd-resolved[1340]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 8 23:47:36.988998 systemd[1]: audit-rules.service: Deactivated successfully. Sep 8 23:47:36.990537 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 8 23:47:36.992172 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 8 23:47:37.002769 systemd-resolved[1340]: Defaulting to hostname 'linux'. Sep 8 23:47:37.006797 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 8 23:47:37.008587 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 8 23:47:37.045292 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 8 23:47:37.056001 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 8 23:47:37.082463 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 8 23:47:37.085410 systemd-networkd[1418]: lo: Link UP Sep 8 23:47:37.085431 systemd-networkd[1418]: lo: Gained carrier Sep 8 23:47:37.087406 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 8 23:47:37.089934 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 8 23:47:37.090770 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 8 23:47:37.093877 systemd[1]: Reached target time-set.target - System Time Set. 
Sep 8 23:47:37.094760 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 8 23:47:37.096899 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 8 23:47:37.099530 systemd-networkd[1418]: Enumeration completed Sep 8 23:47:37.099727 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 8 23:47:37.100119 systemd-networkd[1418]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:47:37.100134 systemd-networkd[1418]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 8 23:47:37.101545 systemd[1]: Reached target network.target - Network. Sep 8 23:47:37.102284 systemd-networkd[1418]: eth0: Link UP Sep 8 23:47:37.102297 systemd-networkd[1418]: eth0: Gained carrier Sep 8 23:47:37.102316 systemd-networkd[1418]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:47:37.103674 kernel: ACPI: button: Power Button [PWRF] Sep 8 23:47:37.110687 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Sep 8 23:47:37.112778 systemd-networkd[1418]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 8 23:47:37.112879 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 8 23:47:37.114310 systemd-timesyncd[1433]: Network configuration changed, trying to establish connection. Sep 8 23:47:37.975815 systemd-timesyncd[1433]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 8 23:47:37.975882 systemd-timesyncd[1433]: Initial clock synchronization to Mon 2025-09-08 23:47:37.975697 UTC. Sep 8 23:47:37.976580 systemd-resolved[1340]: Clock change detected. Flushing caches. Sep 8 23:47:37.977894 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 8 23:47:38.004116 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 8 23:47:38.068777 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:47:38.075314 kernel: mousedev: PS/2 mouse device common for all mice Sep 8 23:47:38.201825 kernel: kvm_amd: TSC scaling supported Sep 8 23:47:38.201933 kernel: kvm_amd: Nested Virtualization enabled Sep 8 23:47:38.201979 kernel: kvm_amd: Nested Paging enabled Sep 8 23:47:38.201992 kernel: kvm_amd: LBR virtualization supported Sep 8 23:47:38.203829 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 8 23:47:38.203849 kernel: kvm_amd: Virtual GIF supported Sep 8 23:47:38.230336 kernel: EDAC MC: Ver: 3.0.0 Sep 8 23:47:38.278947 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:47:38.287141 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 8 23:47:38.301848 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 8 23:47:38.351492 lvm[1472]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 8 23:47:38.394137 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 8 23:47:38.400337 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:47:38.401491 systemd[1]: Reached target sysinit.target - System Initialization. 
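[editor's note] The networkd record above ("DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1") pins down the sandbox network, after which timesyncd steps the clock and resolved flushes its caches. A quick check with Python's ipaddress module that the gateway sits inside the leased /16:

    import ipaddress

    # Values copied from the systemd-networkd DHCPv4 record above.
    iface = ipaddress.ip_interface("10.0.0.19/16")
    gateway = ipaddress.ip_address("10.0.0.1")

    print(iface.network)             # 10.0.0.0/16
    print(iface.network.netmask)     # 255.255.0.0
    print(gateway in iface.network)  # True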
Sep 8 23:47:38.402718 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 8 23:47:38.403986 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 8 23:47:38.405465 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 8 23:47:38.406944 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 8 23:47:38.408288 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 8 23:47:38.409590 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 8 23:47:38.409620 systemd[1]: Reached target paths.target - Path Units. Sep 8 23:47:38.410608 systemd[1]: Reached target timers.target - Timer Units. Sep 8 23:47:38.412612 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 8 23:47:38.415527 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 8 23:47:38.419515 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 8 23:47:38.421622 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 8 23:47:38.422951 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 8 23:47:38.427337 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 8 23:47:38.477024 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 8 23:47:38.480486 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 8 23:47:38.483536 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 8 23:47:38.485103 systemd[1]: Reached target sockets.target - Socket Units. Sep 8 23:47:38.486164 systemd[1]: Reached target basic.target - Basic System. Sep 8 23:47:38.487392 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 8 23:47:38.487440 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 8 23:47:38.489426 systemd[1]: Starting containerd.service - containerd container runtime... Sep 8 23:47:38.492568 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 8 23:47:38.495486 lvm[1476]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 8 23:47:38.497457 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 8 23:47:38.501784 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 8 23:47:38.503021 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 8 23:47:38.505134 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 8 23:47:38.508322 jq[1479]: false Sep 8 23:47:38.512499 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 8 23:47:38.515931 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 8 23:47:38.522529 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 8 23:47:38.529873 systemd[1]: Starting systemd-logind.service - User Login Management... 
Sep 8 23:47:38.532656 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 8 23:47:38.533630 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 8 23:47:38.534404 extend-filesystems[1480]: Found loop3 Sep 8 23:47:38.535454 extend-filesystems[1480]: Found loop4 Sep 8 23:47:38.535454 extend-filesystems[1480]: Found loop5 Sep 8 23:47:38.535454 extend-filesystems[1480]: Found sr0 Sep 8 23:47:38.535454 extend-filesystems[1480]: Found vda Sep 8 23:47:38.535454 extend-filesystems[1480]: Found vda1 Sep 8 23:47:38.535454 extend-filesystems[1480]: Found vda2 Sep 8 23:47:38.535454 extend-filesystems[1480]: Found vda3 Sep 8 23:47:38.535454 extend-filesystems[1480]: Found usr Sep 8 23:47:38.535454 extend-filesystems[1480]: Found vda4 Sep 8 23:47:38.535454 extend-filesystems[1480]: Found vda6 Sep 8 23:47:38.535454 extend-filesystems[1480]: Found vda7 Sep 8 23:47:38.555888 extend-filesystems[1480]: Found vda9 Sep 8 23:47:38.555888 extend-filesystems[1480]: Checking size of /dev/vda9 Sep 8 23:47:38.542480 dbus-daemon[1478]: [system] SELinux support is enabled Sep 8 23:47:38.537097 systemd[1]: Starting update-engine.service - Update Engine... Sep 8 23:47:38.566002 extend-filesystems[1480]: Resized partition /dev/vda9 Sep 8 23:47:38.548862 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 8 23:47:38.567824 extend-filesystems[1501]: resize2fs 1.47.1 (20-May-2024) Sep 8 23:47:38.574160 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 8 23:47:38.556038 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 8 23:47:38.566591 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 8 23:47:38.571432 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 8 23:47:38.571788 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 8 23:47:38.572183 systemd[1]: motdgen.service: Deactivated successfully. Sep 8 23:47:38.572962 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 8 23:47:38.583367 jq[1496]: true Sep 8 23:47:38.580774 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 8 23:47:38.581049 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 8 23:47:38.587375 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1409) Sep 8 23:47:38.602158 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 8 23:47:38.603022 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 8 23:47:38.604508 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 8 23:47:38.604538 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Sep 8 23:47:38.608361 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 8 23:47:38.640706 jq[1504]: true Sep 8 23:47:38.619789 (ntainerd)[1512]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 8 23:47:38.641285 update_engine[1489]: I20250908 23:47:38.614365 1489 main.cc:92] Flatcar Update Engine starting Sep 8 23:47:38.641285 update_engine[1489]: I20250908 23:47:38.623772 1489 update_check_scheduler.cc:74] Next update check in 2m51s Sep 8 23:47:38.641887 extend-filesystems[1501]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 8 23:47:38.641887 extend-filesystems[1501]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 8 23:47:38.641887 extend-filesystems[1501]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 8 23:47:38.644938 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 8 23:47:38.645937 extend-filesystems[1480]: Resized filesystem in /dev/vda9 Sep 8 23:47:38.645362 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 8 23:47:38.661731 systemd[1]: Started update-engine.service - Update Engine. Sep 8 23:47:38.673167 systemd-logind[1487]: Watching system buttons on /dev/input/event1 (Power Button) Sep 8 23:47:38.673237 systemd-logind[1487]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 8 23:47:38.673765 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 8 23:47:38.674567 systemd-logind[1487]: New seat seat0. Sep 8 23:47:38.685782 systemd[1]: Started systemd-logind.service - User Login Management. Sep 8 23:47:38.689483 tar[1503]: linux-amd64/LICENSE Sep 8 23:47:38.747465 locksmithd[1530]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 8 23:47:38.781199 tar[1503]: linux-amd64/helm Sep 8 23:47:38.829430 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 8 23:47:38.840361 sshd_keygen[1497]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 8 23:47:38.853406 bash[1534]: Updated "/home/core/.ssh/authorized_keys" Sep 8 23:47:38.854597 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 8 23:47:38.889201 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 8 23:47:38.905762 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 8 23:47:38.945796 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 8 23:47:38.951021 systemd[1]: Started sshd@0-10.0.0.19:22-10.0.0.1:42796.service - OpenSSH per-connection server daemon (10.0.0.1:42796). Sep 8 23:47:38.962482 systemd[1]: issuegen.service: Deactivated successfully. Sep 8 23:47:38.962885 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 8 23:47:38.976746 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 8 23:47:39.005067 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 8 23:47:39.017019 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 8 23:47:39.021282 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 8 23:47:39.022769 systemd[1]: Reached target getty.target - Login Prompts. 
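[Editor's note] The extend-filesystems records above show resize2fs growing /dev/vda9 on-line from 553472 to 1864699 blocks at 4k each. Plain arithmetic on the figures from the log, to make the growth concrete:

```python
BLOCK = 4096  # the log reports "(4k) blocks"

old_blocks, new_blocks = 553_472, 1_864_699
for label, blocks in (("before", old_blocks), ("after", new_blocks)):
    size = blocks * BLOCK
    print(f"{label}: {blocks} blocks = {size} bytes ~ {size / 2**30:.2f} GiB")
# before: 553472 blocks = 2267021312 bytes ~ 2.11 GiB
# after:  1864699 blocks = 7637807104 bytes ~ 7.11 GiB
```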
Sep 8 23:47:39.149855 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 42796 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:47:39.159016 sshd-session[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:47:39.170865 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 8 23:47:39.186618 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 8 23:47:39.223614 systemd-logind[1487]: New session 1 of user core. Sep 8 23:47:39.253838 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 8 23:47:39.255813 containerd[1512]: time="2025-09-08T23:47:39.255712336Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 8 23:47:39.264564 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 8 23:47:39.271139 (systemd)[1567]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 8 23:47:39.274517 systemd-logind[1487]: New session c1 of user core. Sep 8 23:47:39.288449 containerd[1512]: time="2025-09-08T23:47:39.288342440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:47:39.306944 containerd[1512]: time="2025-09-08T23:47:39.306822767Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:47:39.306944 containerd[1512]: time="2025-09-08T23:47:39.306897677Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 8 23:47:39.306944 containerd[1512]: time="2025-09-08T23:47:39.306957379Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 8 23:47:39.307383 containerd[1512]: time="2025-09-08T23:47:39.307356638Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 8 23:47:39.307450 containerd[1512]: time="2025-09-08T23:47:39.307422552Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 8 23:47:39.307602 containerd[1512]: time="2025-09-08T23:47:39.307562404Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:47:39.307602 containerd[1512]: time="2025-09-08T23:47:39.307589976Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:47:39.308043 containerd[1512]: time="2025-09-08T23:47:39.308014422Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:47:39.308043 containerd[1512]: time="2025-09-08T23:47:39.308042554Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 8 23:47:39.308135 containerd[1512]: time="2025-09-08T23:47:39.308062201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:47:39.308135 containerd[1512]: time="2025-09-08T23:47:39.308090384Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 8 23:47:39.308278 containerd[1512]: time="2025-09-08T23:47:39.308244643Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:47:39.308714 containerd[1512]: time="2025-09-08T23:47:39.308688275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:47:39.309030 containerd[1512]: time="2025-09-08T23:47:39.308954885Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:47:39.309079 containerd[1512]: time="2025-09-08T23:47:39.309025478Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 8 23:47:39.309214 containerd[1512]: time="2025-09-08T23:47:39.309170670Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 8 23:47:39.309380 containerd[1512]: time="2025-09-08T23:47:39.309326282Z" level=info msg="metadata content store policy set" policy=shared Sep 8 23:47:39.347421 containerd[1512]: time="2025-09-08T23:47:39.347185151Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 8 23:47:39.347421 containerd[1512]: time="2025-09-08T23:47:39.347288364Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 8 23:47:39.347421 containerd[1512]: time="2025-09-08T23:47:39.347351493Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 8 23:47:39.347421 containerd[1512]: time="2025-09-08T23:47:39.347376119Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 8 23:47:39.347421 containerd[1512]: time="2025-09-08T23:47:39.347395185Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 8 23:47:39.348025 containerd[1512]: time="2025-09-08T23:47:39.347688555Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 8 23:47:39.348025 containerd[1512]: time="2025-09-08T23:47:39.348019385Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 8 23:47:39.348219 containerd[1512]: time="2025-09-08T23:47:39.348184074Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 8 23:47:39.348219 containerd[1512]: time="2025-09-08T23:47:39.348222025Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 8 23:47:39.348505 containerd[1512]: time="2025-09-08T23:47:39.348248805Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 8 23:47:39.348505 containerd[1512]: time="2025-09-08T23:47:39.348262892Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Sep 8 23:47:39.348505 containerd[1512]: time="2025-09-08T23:47:39.348283360Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 8 23:47:39.348505 containerd[1512]: time="2025-09-08T23:47:39.348455994Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 8 23:47:39.348505 containerd[1512]: time="2025-09-08T23:47:39.348478987Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 8 23:47:39.348505 containerd[1512]: time="2025-09-08T23:47:39.348494025Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 8 23:47:39.348505 containerd[1512]: time="2025-09-08T23:47:39.348509504Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 8 23:47:39.349014 containerd[1512]: time="2025-09-08T23:47:39.348523200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 8 23:47:39.349014 containerd[1512]: time="2025-09-08T23:47:39.348536385Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 8 23:47:39.349014 containerd[1512]: time="2025-09-08T23:47:39.348703759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 8 23:47:39.349014 containerd[1512]: time="2025-09-08T23:47:39.348719348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 8 23:47:39.349014 containerd[1512]: time="2025-09-08T23:47:39.348738915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 8 23:47:39.349014 containerd[1512]: time="2025-09-08T23:47:39.348758391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 8 23:47:39.349014 containerd[1512]: time="2025-09-08T23:47:39.348771736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 8 23:47:39.349014 containerd[1512]: time="2025-09-08T23:47:39.348788217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 8 23:47:39.349014 containerd[1512]: time="2025-09-08T23:47:39.348800951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 8 23:47:39.349014 containerd[1512]: time="2025-09-08T23:47:39.348815849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 8 23:47:39.349014 containerd[1512]: time="2025-09-08T23:47:39.348830136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 8 23:47:39.349014 containerd[1512]: time="2025-09-08T23:47:39.348845635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 8 23:47:39.349014 containerd[1512]: time="2025-09-08T23:47:39.348859621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 8 23:47:39.349014 containerd[1512]: time="2025-09-08T23:47:39.348872776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Sep 8 23:47:39.350361 containerd[1512]: time="2025-09-08T23:47:39.348886030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 8 23:47:39.350361 containerd[1512]: time="2025-09-08T23:47:39.348906689Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 8 23:47:39.350361 containerd[1512]: time="2025-09-08T23:47:39.348933049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 8 23:47:39.350361 containerd[1512]: time="2025-09-08T23:47:39.348945853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 8 23:47:39.350361 containerd[1512]: time="2025-09-08T23:47:39.348961542Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 8 23:47:39.351139 containerd[1512]: time="2025-09-08T23:47:39.351072270Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 8 23:47:39.351139 containerd[1512]: time="2025-09-08T23:47:39.351106284Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 8 23:47:39.351139 containerd[1512]: time="2025-09-08T23:47:39.351117495Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 8 23:47:39.351139 containerd[1512]: time="2025-09-08T23:47:39.351129528Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 8 23:47:39.351139 containerd[1512]: time="2025-09-08T23:47:39.351139446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 8 23:47:39.351139 containerd[1512]: time="2025-09-08T23:47:39.351152150Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 8 23:47:39.351139 containerd[1512]: time="2025-09-08T23:47:39.351168130Z" level=info msg="NRI interface is disabled by configuration." Sep 8 23:47:39.351139 containerd[1512]: time="2025-09-08T23:47:39.351178950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 8 23:47:39.351682 containerd[1512]: time="2025-09-08T23:47:39.351613365Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 8 23:47:39.351682 containerd[1512]: time="2025-09-08T23:47:39.351677776Z" level=info msg="Connect containerd service" Sep 8 23:47:39.351888 containerd[1512]: time="2025-09-08T23:47:39.351745353Z" level=info msg="using legacy CRI server" Sep 8 23:47:39.351888 containerd[1512]: time="2025-09-08T23:47:39.351754650Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 8 23:47:39.351888 containerd[1512]: time="2025-09-08T23:47:39.351869576Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 8 23:47:39.352738 containerd[1512]: time="2025-09-08T23:47:39.352690796Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 8 23:47:39.353661 
containerd[1512]: time="2025-09-08T23:47:39.352976782Z" level=info msg="Start subscribing containerd event" Sep 8 23:47:39.353661 containerd[1512]: time="2025-09-08T23:47:39.353075497Z" level=info msg="Start recovering state" Sep 8 23:47:39.353661 containerd[1512]: time="2025-09-08T23:47:39.353174022Z" level=info msg="Start event monitor" Sep 8 23:47:39.353661 containerd[1512]: time="2025-09-08T23:47:39.353192697Z" level=info msg="Start snapshots syncer" Sep 8 23:47:39.353661 containerd[1512]: time="2025-09-08T23:47:39.353210090Z" level=info msg="Start cni network conf syncer for default" Sep 8 23:47:39.353661 containerd[1512]: time="2025-09-08T23:47:39.353220629Z" level=info msg="Start streaming server" Sep 8 23:47:39.353661 containerd[1512]: time="2025-09-08T23:47:39.353405316Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 8 23:47:39.353661 containerd[1512]: time="2025-09-08T23:47:39.353498711Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 8 23:47:39.356984 containerd[1512]: time="2025-09-08T23:47:39.356733417Z" level=info msg="containerd successfully booted in 0.102230s" Sep 8 23:47:39.356861 systemd[1]: Started containerd.service - containerd container runtime. Sep 8 23:47:39.387648 systemd-networkd[1418]: eth0: Gained IPv6LL Sep 8 23:47:39.391077 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 8 23:47:39.394265 systemd[1]: Reached target network-online.target - Network is Online. Sep 8 23:47:39.406742 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 8 23:47:39.410973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:47:39.414224 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 8 23:47:39.451777 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 8 23:47:39.452101 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 8 23:47:39.454210 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 8 23:47:39.463864 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 8 23:47:39.502219 systemd[1567]: Queued start job for default target default.target. Sep 8 23:47:39.523077 systemd[1567]: Created slice app.slice - User Application Slice. Sep 8 23:47:39.523112 systemd[1567]: Reached target paths.target - Paths. Sep 8 23:47:39.523160 systemd[1567]: Reached target timers.target - Timers. Sep 8 23:47:39.527522 systemd[1567]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 8 23:47:39.540850 systemd[1567]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 8 23:47:39.541629 systemd[1567]: Reached target sockets.target - Sockets. Sep 8 23:47:39.541698 systemd[1567]: Reached target basic.target - Basic System. Sep 8 23:47:39.541759 systemd[1567]: Reached target default.target - Main User Target. Sep 8 23:47:39.541807 systemd[1567]: Startup finished in 256ms. Sep 8 23:47:39.542072 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 8 23:47:39.546198 tar[1503]: linux-amd64/README.md Sep 8 23:47:39.556609 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 8 23:47:39.574953 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 8 23:47:39.626909 systemd[1]: Started sshd@1-10.0.0.19:22-10.0.0.1:42812.service - OpenSSH per-connection server daemon (10.0.0.1:42812). 
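[Editor's note] containerd's "no network config found in /etc/cni/net.d" error above is expected on a fresh node: the CRI plugin defers pod networking until a CNI conflist appears in that directory. A hedged illustration of the kind of file that clears the error; bridge and host-local are standard CNI reference plugins, but the network name and subnet below are made up, and a real cluster's network addon writes its own config:

```python
import json

# Hypothetical minimal conflist for /etc/cni/net.d (illustrative values only).
conflist = {
    "cniVersion": "0.4.0",
    "name": "examplenet",            # made-up network name
    "plugins": [
        {
            "type": "bridge",        # CNI reference bridge plugin
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",  # illustrative pod subnet
            },
        }
    ],
}

with open("/etc/cni/net.d/10-examplenet.conflist", "w") as f:
    json.dump(conflist, f, indent=2)
```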
Sep 8 23:47:39.669806 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 42812 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:47:39.671642 sshd-session[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:47:39.676488 systemd-logind[1487]: New session 2 of user core. Sep 8 23:47:39.714573 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 8 23:47:39.773576 sshd[1604]: Connection closed by 10.0.0.1 port 42812 Sep 8 23:47:39.775043 sshd-session[1602]: pam_unix(sshd:session): session closed for user core Sep 8 23:47:39.806447 systemd[1]: sshd@1-10.0.0.19:22-10.0.0.1:42812.service: Deactivated successfully. Sep 8 23:47:39.808471 systemd[1]: session-2.scope: Deactivated successfully. Sep 8 23:47:39.809400 systemd-logind[1487]: Session 2 logged out. Waiting for processes to exit. Sep 8 23:47:39.838831 systemd[1]: Started sshd@2-10.0.0.19:22-10.0.0.1:42824.service - OpenSSH per-connection server daemon (10.0.0.1:42824). Sep 8 23:47:39.841587 systemd-logind[1487]: Removed session 2. Sep 8 23:47:39.878236 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 42824 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:47:39.880029 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:47:39.886247 systemd-logind[1487]: New session 3 of user core. Sep 8 23:47:39.893501 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 8 23:47:39.969368 sshd[1612]: Connection closed by 10.0.0.1 port 42824 Sep 8 23:47:39.970092 sshd-session[1609]: pam_unix(sshd:session): session closed for user core Sep 8 23:47:39.975064 systemd[1]: sshd@2-10.0.0.19:22-10.0.0.1:42824.service: Deactivated successfully. Sep 8 23:47:39.977860 systemd[1]: session-3.scope: Deactivated successfully. Sep 8 23:47:39.978817 systemd-logind[1487]: Session 3 logged out. Waiting for processes to exit. Sep 8 23:47:39.980173 systemd-logind[1487]: Removed session 3. Sep 8 23:47:41.060604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:47:41.133524 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 8 23:47:41.134105 (kubelet)[1622]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:47:41.135128 systemd[1]: Startup finished in 803ms (kernel) + 11.881s (initrd) + 6.719s (userspace) = 19.404s. Sep 8 23:47:42.400521 kubelet[1622]: E0908 23:47:42.400415 1622 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:47:42.406197 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:47:42.406483 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:47:42.407193 systemd[1]: kubelet.service: Consumed 2.411s CPU time, 273.1M memory peak. Sep 8 23:47:49.991632 systemd[1]: Started sshd@3-10.0.0.19:22-10.0.0.1:46864.service - OpenSSH per-connection server daemon (10.0.0.1:46864). 
Sep 8 23:47:50.033164 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 46864 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:47:50.035074 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:47:50.041005 systemd-logind[1487]: New session 4 of user core. Sep 8 23:47:50.050454 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 8 23:47:50.106615 sshd[1637]: Connection closed by 10.0.0.1 port 46864 Sep 8 23:47:50.107122 sshd-session[1635]: pam_unix(sshd:session): session closed for user core Sep 8 23:47:50.119228 systemd[1]: sshd@3-10.0.0.19:22-10.0.0.1:46864.service: Deactivated successfully. Sep 8 23:47:50.121731 systemd[1]: session-4.scope: Deactivated successfully. Sep 8 23:47:50.123639 systemd-logind[1487]: Session 4 logged out. Waiting for processes to exit. Sep 8 23:47:50.135859 systemd[1]: Started sshd@4-10.0.0.19:22-10.0.0.1:46880.service - OpenSSH per-connection server daemon (10.0.0.1:46880). Sep 8 23:47:50.137132 systemd-logind[1487]: Removed session 4. Sep 8 23:47:50.182815 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 46880 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:47:50.184935 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:47:50.191745 systemd-logind[1487]: New session 5 of user core. Sep 8 23:47:50.200626 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 8 23:47:50.254477 sshd[1645]: Connection closed by 10.0.0.1 port 46880 Sep 8 23:47:50.254925 sshd-session[1642]: pam_unix(sshd:session): session closed for user core Sep 8 23:47:50.270043 systemd[1]: sshd@4-10.0.0.19:22-10.0.0.1:46880.service: Deactivated successfully. Sep 8 23:47:50.272892 systemd[1]: session-5.scope: Deactivated successfully. Sep 8 23:47:50.275164 systemd-logind[1487]: Session 5 logged out. Waiting for processes to exit. Sep 8 23:47:50.293595 systemd[1]: Started sshd@5-10.0.0.19:22-10.0.0.1:46888.service - OpenSSH per-connection server daemon (10.0.0.1:46888). Sep 8 23:47:50.295680 systemd-logind[1487]: Removed session 5. Sep 8 23:47:50.334078 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 46888 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:47:50.335967 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:47:50.341321 systemd-logind[1487]: New session 6 of user core. Sep 8 23:47:50.352693 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 8 23:47:50.410849 sshd[1653]: Connection closed by 10.0.0.1 port 46888 Sep 8 23:47:50.411412 sshd-session[1650]: pam_unix(sshd:session): session closed for user core Sep 8 23:47:50.428541 systemd[1]: sshd@5-10.0.0.19:22-10.0.0.1:46888.service: Deactivated successfully. Sep 8 23:47:50.430663 systemd[1]: session-6.scope: Deactivated successfully. Sep 8 23:47:50.432161 systemd-logind[1487]: Session 6 logged out. Waiting for processes to exit. Sep 8 23:47:50.433878 systemd[1]: Started sshd@6-10.0.0.19:22-10.0.0.1:46892.service - OpenSSH per-connection server daemon (10.0.0.1:46892). Sep 8 23:47:50.434819 systemd-logind[1487]: Removed session 6. 
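[Editor's note] Each "Accepted publickey ... SHA256:YG8Fe1..." record above logs the key's OpenSSH-style fingerprint: the SHA-256 digest of the raw key blob, base64-encoded with padding stripped. A small sketch reproducing that format from an authorized_keys entry (the key text in the usage comment is a placeholder):

```python
import base64
import hashlib

def ssh_fingerprint(authorized_key_line: str) -> str:
    """Compute the SHA256:... fingerprint that sshd logs for a public key."""
    # line format: "<type> <base64-blob> [comment]"
    blob = base64.b64decode(authorized_key_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# ssh_fingerprint("ssh-rsa AAAAB3Nza... core") -> "SHA256:YG8Fe1..."-style value
```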
Sep 8 23:47:50.480543 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 46892 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:47:50.482627 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:47:50.492099 systemd-logind[1487]: New session 7 of user core. Sep 8 23:47:50.501588 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 8 23:47:50.567784 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 8 23:47:50.568129 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:47:50.588628 sudo[1662]: pam_unix(sudo:session): session closed for user root Sep 8 23:47:50.590748 sshd[1661]: Connection closed by 10.0.0.1 port 46892 Sep 8 23:47:50.591406 sshd-session[1658]: pam_unix(sshd:session): session closed for user core Sep 8 23:47:50.605727 systemd[1]: sshd@6-10.0.0.19:22-10.0.0.1:46892.service: Deactivated successfully. Sep 8 23:47:50.608058 systemd[1]: session-7.scope: Deactivated successfully. Sep 8 23:47:50.609992 systemd-logind[1487]: Session 7 logged out. Waiting for processes to exit. Sep 8 23:47:50.611682 systemd[1]: Started sshd@7-10.0.0.19:22-10.0.0.1:46896.service - OpenSSH per-connection server daemon (10.0.0.1:46896). Sep 8 23:47:50.612489 systemd-logind[1487]: Removed session 7. Sep 8 23:47:50.662571 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 46896 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:47:50.664481 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:47:50.669471 systemd-logind[1487]: New session 8 of user core. Sep 8 23:47:50.679441 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 8 23:47:50.735973 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 8 23:47:50.736359 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:47:50.740871 sudo[1672]: pam_unix(sudo:session): session closed for user root Sep 8 23:47:50.747556 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 8 23:47:50.747943 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:47:50.772646 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 8 23:47:50.816036 augenrules[1694]: No rules Sep 8 23:47:50.818098 systemd[1]: audit-rules.service: Deactivated successfully. Sep 8 23:47:50.818444 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 8 23:47:50.820046 sudo[1671]: pam_unix(sudo:session): session closed for user root Sep 8 23:47:50.821857 sshd[1670]: Connection closed by 10.0.0.1 port 46896 Sep 8 23:47:50.822362 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Sep 8 23:47:50.835669 systemd[1]: sshd@7-10.0.0.19:22-10.0.0.1:46896.service: Deactivated successfully. Sep 8 23:47:50.837807 systemd[1]: session-8.scope: Deactivated successfully. Sep 8 23:47:50.839553 systemd-logind[1487]: Session 8 logged out. Waiting for processes to exit. Sep 8 23:47:50.852836 systemd[1]: Started sshd@8-10.0.0.19:22-10.0.0.1:46900.service - OpenSSH per-connection server daemon (10.0.0.1:46900). Sep 8 23:47:50.854185 systemd-logind[1487]: Removed session 8. 
Sep 8 23:47:50.891077 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 46900 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:47:50.893271 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:47:50.898913 systemd-logind[1487]: New session 9 of user core. Sep 8 23:47:50.909530 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 8 23:47:50.966857 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 8 23:47:50.967258 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:47:51.528781 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 8 23:47:51.529030 (dockerd)[1727]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 8 23:47:52.266000 dockerd[1727]: time="2025-09-08T23:47:52.265907036Z" level=info msg="Starting up" Sep 8 23:47:52.541079 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 8 23:47:52.554563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:47:52.769768 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:47:52.774913 (kubelet)[1757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:47:52.876253 kubelet[1757]: E0908 23:47:52.876015 1757 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:47:52.883446 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:47:52.883711 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:47:52.884170 systemd[1]: kubelet.service: Consumed 326ms CPU time, 112.5M memory peak. Sep 8 23:47:53.831830 dockerd[1727]: time="2025-09-08T23:47:53.831707931Z" level=info msg="Loading containers: start." Sep 8 23:47:54.083934 kernel: Initializing XFRM netlink socket Sep 8 23:47:54.189760 systemd-networkd[1418]: docker0: Link UP Sep 8 23:47:54.235920 dockerd[1727]: time="2025-09-08T23:47:54.235842267Z" level=info msg="Loading containers: done." Sep 8 23:47:54.253386 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2642459494-merged.mount: Deactivated successfully. Sep 8 23:47:54.255229 dockerd[1727]: time="2025-09-08T23:47:54.255129738Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 8 23:47:54.255462 dockerd[1727]: time="2025-09-08T23:47:54.255363577Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 8 23:47:54.255576 dockerd[1727]: time="2025-09-08T23:47:54.255541180Z" level=info msg="Daemon has completed initialization" Sep 8 23:47:54.345010 dockerd[1727]: time="2025-09-08T23:47:54.344871937Z" level=info msg="API listen on /run/docker.sock" Sep 8 23:47:54.345178 systemd[1]: Started docker.service - Docker Application Container Engine. 
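[Editor's note] "API listen on /run/docker.sock" above means the daemon serves plain HTTP over a unix socket rather than TCP. A sketch of querying it with only the standard library, assuming the Engine API's /version endpoint; the connection subclass is just a convenience, not a Docker SDK API:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a unix socket instead of TCP."""
    def __init__(self, path: str):
        super().__init__("localhost")  # dummy host, used only for the Host: header
        self._path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._path)

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/version")
print(json.loads(conn.getresponse().read()))  # engine version, API version, ...
```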
Sep 8 23:47:55.571377 containerd[1512]: time="2025-09-08T23:47:55.571310497Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 8 23:47:56.261408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount119475483.mount: Deactivated successfully. Sep 8 23:48:03.041177 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 8 23:48:03.052466 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:48:03.236805 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:48:03.243784 (kubelet)[1989]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:48:03.328721 kubelet[1989]: E0908 23:48:03.328484 1989 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:48:03.332872 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:48:03.333108 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:48:03.333561 systemd[1]: kubelet.service: Consumed 290ms CPU time, 112.5M memory peak. Sep 8 23:48:04.633810 containerd[1512]: time="2025-09-08T23:48:04.633740498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:48:04.634672 containerd[1512]: time="2025-09-08T23:48:04.634618865Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664" Sep 8 23:48:04.636258 containerd[1512]: time="2025-09-08T23:48:04.636192977Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:48:04.641887 containerd[1512]: time="2025-09-08T23:48:04.641841560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:48:04.643172 containerd[1512]: time="2025-09-08T23:48:04.643110550Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 9.071745301s" Sep 8 23:48:04.643172 containerd[1512]: time="2025-09-08T23:48:04.643168359Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\"" Sep 8 23:48:04.644248 containerd[1512]: time="2025-09-08T23:48:04.644211645Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 8 23:48:10.127317 containerd[1512]: time="2025-09-08T23:48:10.127225343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:48:10.151857 containerd[1512]: time="2025-09-08T23:48:10.151797925Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018066" Sep 8 23:48:10.199692 containerd[1512]: time="2025-09-08T23:48:10.199648242Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:48:10.305427 containerd[1512]: time="2025-09-08T23:48:10.305384655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:48:10.307342 containerd[1512]: time="2025-09-08T23:48:10.307251566Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 5.662977594s" Sep 8 23:48:10.307415 containerd[1512]: time="2025-09-08T23:48:10.307342868Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\"" Sep 8 23:48:10.308166 containerd[1512]: time="2025-09-08T23:48:10.308039093Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 8 23:48:13.055072 containerd[1512]: time="2025-09-08T23:48:13.054959229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:48:13.082462 containerd[1512]: time="2025-09-08T23:48:13.082365437Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911" Sep 8 23:48:13.100984 containerd[1512]: time="2025-09-08T23:48:13.100938002Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:48:13.122812 containerd[1512]: time="2025-09-08T23:48:13.122749143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:48:13.124056 containerd[1512]: time="2025-09-08T23:48:13.124020086Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 2.8158947s" Sep 8 23:48:13.124112 containerd[1512]: time="2025-09-08T23:48:13.124060835Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\"" Sep 8 23:48:13.124851 containerd[1512]: time="2025-09-08T23:48:13.124629768Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 8 23:48:13.541475 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 8 23:48:13.579670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 8 23:48:13.768873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:48:13.773435 (kubelet)[2025]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:48:14.024967 kubelet[2025]: E0908 23:48:14.024752 2025 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:48:14.030423 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:48:14.030690 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:48:14.031180 systemd[1]: kubelet.service: Consumed 450ms CPU time, 113M memory peak. Sep 8 23:48:14.867395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2982521972.mount: Deactivated successfully. Sep 8 23:48:17.469313 containerd[1512]: time="2025-09-08T23:48:17.469186118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:48:17.530009 containerd[1512]: time="2025-09-08T23:48:17.529920818Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626" Sep 8 23:48:17.563882 containerd[1512]: time="2025-09-08T23:48:17.563800308Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:48:17.637272 containerd[1512]: time="2025-09-08T23:48:17.637203967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:48:17.638035 containerd[1512]: time="2025-09-08T23:48:17.637969180Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 4.513306569s" Sep 8 23:48:17.638035 containerd[1512]: time="2025-09-08T23:48:17.638024325Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\"" Sep 8 23:48:17.638654 containerd[1512]: time="2025-09-08T23:48:17.638624904Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 8 23:48:20.083713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount870364123.mount: Deactivated successfully. Sep 8 23:48:23.731642 update_engine[1489]: I20250908 23:48:23.731490 1489 update_attempter.cc:509] Updating boot flags... Sep 8 23:48:23.780319 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2073) Sep 8 23:48:23.866339 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2072) Sep 8 23:48:23.921343 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2072) Sep 8 23:48:24.041204 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
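[Editor's note] Unit names like var-lib-containerd-tmpmounts-containerd\x2dmount2982521972.mount above come from systemd's path escaping: "/" separators become "-", and characters that would be ambiguous in a unit name (including a literal "-") become \xNN. An approximate sketch of the transformation; systemd-escape --path implements the canonical version:

```python
def systemd_path_escape(path: str) -> str:
    """Approximate systemd-escape --path: map a mount point to a unit name."""
    def esc(component: str) -> str:
        out = []
        for i, ch in enumerate(component):
            # alphanumerics, ':' and '_' pass through; a leading '.' is escaped
            if ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out)
    return "-".join(esc(c) for c in path.strip("/").split("/"))

p = "/var/lib/containerd/tmpmounts/containerd-mount2982521972"
print(systemd_path_escape(p) + ".mount")
# var-lib-containerd-tmpmounts-containerd\x2dmount2982521972.mount
```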
Sep 8 23:48:24.050488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:48:24.223061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:48:24.228780 (kubelet)[2094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:48:24.278487 kubelet[2094]: E0908 23:48:24.278404 2094 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:48:24.283255 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:48:24.283495 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:48:24.283891 systemd[1]: kubelet.service: Consumed 229ms CPU time, 110.4M memory peak. Sep 8 23:48:25.098161 containerd[1512]: time="2025-09-08T23:48:25.098066702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:48:25.161893 containerd[1512]: time="2025-09-08T23:48:25.161733403Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 8 23:48:25.215015 containerd[1512]: time="2025-09-08T23:48:25.214925792Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:48:25.307528 containerd[1512]: time="2025-09-08T23:48:25.307460066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:48:25.308885 containerd[1512]: time="2025-09-08T23:48:25.308843600Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 7.670169283s" Sep 8 23:48:25.308885 containerd[1512]: time="2025-09-08T23:48:25.308880110Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 8 23:48:25.309632 containerd[1512]: time="2025-09-08T23:48:25.309586860Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 8 23:48:26.876835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2617557408.mount: Deactivated successfully. 
Sep 8 23:48:26.885669 containerd[1512]: time="2025-09-08T23:48:26.885604715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:48:26.886473 containerd[1512]: time="2025-09-08T23:48:26.886408599Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 8 23:48:26.888122 containerd[1512]: time="2025-09-08T23:48:26.888079817Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:48:26.890803 containerd[1512]: time="2025-09-08T23:48:26.890745601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:48:26.891708 containerd[1512]: time="2025-09-08T23:48:26.891493489Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.58186012s" Sep 8 23:48:26.891708 containerd[1512]: time="2025-09-08T23:48:26.891532813Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 8 23:48:26.892422 containerd[1512]: time="2025-09-08T23:48:26.892398724Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 8 23:48:27.342916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2570086446.mount: Deactivated successfully. Sep 8 23:48:30.597886 containerd[1512]: time="2025-09-08T23:48:30.597778738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:48:30.599677 containerd[1512]: time="2025-09-08T23:48:30.599085238Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871" Sep 8 23:48:30.600697 containerd[1512]: time="2025-09-08T23:48:30.600654165Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:48:30.604452 containerd[1512]: time="2025-09-08T23:48:30.604416550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:48:30.605870 containerd[1512]: time="2025-09-08T23:48:30.605825003Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.713396492s" Sep 8 23:48:30.605870 containerd[1512]: time="2025-09-08T23:48:30.605861072Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 8 23:48:34.291227 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Sep 8 23:48:34.302547 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:48:34.505490 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:48:34.510434 (kubelet)[2216]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:48:34.556420 kubelet[2216]: E0908 23:48:34.556137 2216 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:48:34.563413 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:48:34.563686 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:48:34.564210 systemd[1]: kubelet.service: Consumed 242ms CPU time, 110.6M memory peak. Sep 8 23:48:35.387684 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:48:35.387935 systemd[1]: kubelet.service: Consumed 242ms CPU time, 110.6M memory peak. Sep 8 23:48:35.398518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:48:35.428728 systemd[1]: Reload requested from client PID 2232 ('systemctl') (unit session-9.scope)... Sep 8 23:48:35.428754 systemd[1]: Reloading... Sep 8 23:48:35.618401 zram_generator::config[2282]: No configuration found. Sep 8 23:48:38.805635 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:48:38.914310 systemd[1]: Reloading finished in 3485 ms. Sep 8 23:48:38.981833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:48:38.988405 (kubelet)[2314]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 8 23:48:38.989997 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:48:38.990342 systemd[1]: kubelet.service: Deactivated successfully. Sep 8 23:48:38.990630 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:48:38.990679 systemd[1]: kubelet.service: Consumed 188ms CPU time, 98.3M memory peak. Sep 8 23:48:38.993837 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:48:39.798121 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:48:39.802754 (kubelet)[2327]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 8 23:48:39.972450 kubelet[2327]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 8 23:48:39.972450 kubelet[2327]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 8 23:48:39.972450 kubelet[2327]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 8 23:48:39.972956 kubelet[2327]: I0908 23:48:39.972552 2327 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 8 23:48:42.422631 kubelet[2327]: I0908 23:48:42.422564 2327 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 8 23:48:42.422631 kubelet[2327]: I0908 23:48:42.422610 2327 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 8 23:48:42.423162 kubelet[2327]: I0908 23:48:42.422839 2327 server.go:956] "Client rotation is on, will bootstrap in background" Sep 8 23:48:42.527780 kubelet[2327]: I0908 23:48:42.527713 2327 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 8 23:48:42.532161 kubelet[2327]: E0908 23:48:42.532124 2327 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 8 23:48:42.532792 kubelet[2327]: E0908 23:48:42.532749 2327 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 8 23:48:42.532792 kubelet[2327]: I0908 23:48:42.532790 2327 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 8 23:48:42.539287 kubelet[2327]: I0908 23:48:42.539259 2327 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 8 23:48:42.539597 kubelet[2327]: I0908 23:48:42.539553 2327 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 8 23:48:42.539773 kubelet[2327]: I0908 23:48:42.539586 2327 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 8 23:48:42.539918 kubelet[2327]: I0908 23:48:42.539783 2327 topology_manager.go:138] "Creating topology manager with none policy" Sep 8 23:48:42.539918 kubelet[2327]: I0908 23:48:42.539797 2327 container_manager_linux.go:303] "Creating device plugin manager" Sep 8 23:48:42.540003 kubelet[2327]: I0908 23:48:42.539968 2327 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:48:42.542590 kubelet[2327]: I0908 23:48:42.542546 2327 kubelet.go:480] "Attempting to sync node with API server" Sep 8 23:48:42.542645 kubelet[2327]: I0908 23:48:42.542600 2327 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 8 23:48:42.542645 kubelet[2327]: I0908 23:48:42.542632 2327 kubelet.go:386] "Adding apiserver pod source" Sep 8 23:48:42.544697 kubelet[2327]: I0908 23:48:42.544676 2327 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 8 23:48:42.547397 kubelet[2327]: E0908 23:48:42.547364 2327 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 8 23:48:42.547397 kubelet[2327]: E0908 23:48:42.547253 2327 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 8 23:48:42.550960 kubelet[2327]: 
I0908 23:48:42.550936 2327 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 8 23:48:42.551450 kubelet[2327]: I0908 23:48:42.551427 2327 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 8 23:48:42.551991 kubelet[2327]: W0908 23:48:42.551970 2327 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 8 23:48:42.554989 kubelet[2327]: I0908 23:48:42.554958 2327 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 8 23:48:42.555073 kubelet[2327]: I0908 23:48:42.555029 2327 server.go:1289] "Started kubelet" Sep 8 23:48:42.556252 kubelet[2327]: I0908 23:48:42.555488 2327 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 8 23:48:42.556793 kubelet[2327]: I0908 23:48:42.556756 2327 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 8 23:48:42.556849 kubelet[2327]: I0908 23:48:42.556820 2327 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 8 23:48:42.556849 kubelet[2327]: I0908 23:48:42.556834 2327 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 8 23:48:42.557786 kubelet[2327]: I0908 23:48:42.557749 2327 server.go:317] "Adding debug handlers to kubelet server" Sep 8 23:48:42.559758 kubelet[2327]: I0908 23:48:42.559527 2327 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 8 23:48:42.562008 kubelet[2327]: E0908 23:48:42.561984 2327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:48:42.562076 kubelet[2327]: I0908 23:48:42.562023 2327 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 8 23:48:42.562176 kubelet[2327]: I0908 23:48:42.562156 2327 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 8 23:48:42.562257 kubelet[2327]: I0908 23:48:42.562240 2327 reconciler.go:26] "Reconciler: start to sync state" Sep 8 23:48:42.562606 kubelet[2327]: E0908 23:48:42.562559 2327 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 8 23:48:42.562606 kubelet[2327]: E0908 23:48:42.562549 2327 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="200ms" Sep 8 23:48:42.563204 kubelet[2327]: I0908 23:48:42.563162 2327 factory.go:223] Registration of the systemd container factory successfully Sep 8 23:48:42.563889 kubelet[2327]: I0908 23:48:42.563378 2327 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 8 23:48:42.563889 kubelet[2327]: E0908 23:48:42.563802 2327 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 8 23:48:42.564569 kubelet[2327]: I0908 23:48:42.564547 2327 factory.go:223] Registration of the containerd container factory successfully Sep 8 23:48:42.683953 kubelet[2327]: E0908 23:48:42.682462 2327 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.19:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863738b7ded5ba9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-08 23:48:42.554981289 +0000 UTC m=+2.626277150,LastTimestamp:2025-09-08 23:48:42.554981289 +0000 UTC m=+2.626277150,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 8 23:48:42.692035 kubelet[2327]: E0908 23:48:42.690753 2327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:48:42.695665 kubelet[2327]: I0908 23:48:42.695494 2327 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 8 23:48:42.697413 kubelet[2327]: I0908 23:48:42.697383 2327 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 8 23:48:42.697413 kubelet[2327]: I0908 23:48:42.697410 2327 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 8 23:48:42.697501 kubelet[2327]: I0908 23:48:42.697436 2327 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
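Nearly every error in this stretch has a single cause: the kubelet is up (listening on 0.0.0.0:10250, debug handlers added) but the API server at 10.0.0.19:6443 refuses connections because the kubelet itself has yet to start it as a static pod, so the reflectors, the lease controller, and event posting all retry. The crio factory failure is equally benign on a containerd host, where /var/run/crio/crio.sock never exists. A rough way to watch both ends come up, assuming the kubelet's default local healthz port of 10248:

    # Kubelet liveness (default local healthz endpoint).
    curl -s http://127.0.0.1:10248/healthz; echo
    # API server readiness; connection refused until the kube-apiserver static pod runs.
    curl -sk https://10.0.0.19:6443/healthz; echo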
Sep 8 23:48:42.697501 kubelet[2327]: I0908 23:48:42.697447 2327 kubelet.go:2436] "Starting kubelet main sync loop" Sep 8 23:48:42.697763 kubelet[2327]: E0908 23:48:42.697496 2327 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 8 23:48:42.699959 kubelet[2327]: I0908 23:48:42.699932 2327 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 8 23:48:42.700008 kubelet[2327]: I0908 23:48:42.699959 2327 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 8 23:48:42.700008 kubelet[2327]: I0908 23:48:42.699988 2327 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:48:42.700670 kubelet[2327]: E0908 23:48:42.700646 2327 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 8 23:48:42.762441 kubelet[2327]: I0908 23:48:42.762386 2327 policy_none.go:49] "None policy: Start" Sep 8 23:48:42.762441 kubelet[2327]: I0908 23:48:42.762426 2327 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 8 23:48:42.762441 kubelet[2327]: I0908 23:48:42.762445 2327 state_mem.go:35] "Initializing new in-memory state store" Sep 8 23:48:42.763494 kubelet[2327]: E0908 23:48:42.763446 2327 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="400ms" Sep 8 23:48:42.791430 kubelet[2327]: E0908 23:48:42.791404 2327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:48:42.798613 kubelet[2327]: E0908 23:48:42.798568 2327 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 8 23:48:42.892486 kubelet[2327]: E0908 23:48:42.892414 2327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:48:42.913498 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 8 23:48:42.929211 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 8 23:48:42.932380 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 8 23:48:42.943382 kubelet[2327]: E0908 23:48:42.943279 2327 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 8 23:48:42.943657 kubelet[2327]: I0908 23:48:42.943590 2327 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 8 23:48:42.943657 kubelet[2327]: I0908 23:48:42.943610 2327 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 8 23:48:42.943927 kubelet[2327]: I0908 23:48:42.943906 2327 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 8 23:48:42.945013 kubelet[2327]: E0908 23:48:42.944989 2327 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 8 23:48:42.945079 kubelet[2327]: E0908 23:48:42.945045 2327 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 8 23:48:43.045875 kubelet[2327]: I0908 23:48:43.045815 2327 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:48:43.046349 kubelet[2327]: E0908 23:48:43.046282 2327 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Sep 8 23:48:43.093896 kubelet[2327]: I0908 23:48:43.093863 2327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dbc491df5bdd33c0b951ce57a70cc062-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dbc491df5bdd33c0b951ce57a70cc062\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:48:43.093991 kubelet[2327]: I0908 23:48:43.093900 2327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dbc491df5bdd33c0b951ce57a70cc062-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dbc491df5bdd33c0b951ce57a70cc062\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:48:43.093991 kubelet[2327]: I0908 23:48:43.093925 2327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dbc491df5bdd33c0b951ce57a70cc062-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"dbc491df5bdd33c0b951ce57a70cc062\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:48:43.164904 kubelet[2327]: E0908 23:48:43.164842 2327 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="800ms" Sep 8 23:48:43.197910 systemd[1]: Created slice kubepods-burstable-poddbc491df5bdd33c0b951ce57a70cc062.slice - libcontainer container kubepods-burstable-poddbc491df5bdd33c0b951ce57a70cc062.slice. 
Sep 8 23:48:43.210122 kubelet[2327]: E0908 23:48:43.210101 2327 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:48:43.210440 kubelet[2327]: E0908 23:48:43.210413 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:43.210917 containerd[1512]: time="2025-09-08T23:48:43.210881816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dbc491df5bdd33c0b951ce57a70cc062,Namespace:kube-system,Attempt:0,}" Sep 8 23:48:43.248449 kubelet[2327]: I0908 23:48:43.248417 2327 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:48:43.248859 kubelet[2327]: E0908 23:48:43.248819 2327 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Sep 8 23:48:43.269942 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 8 23:48:43.271634 kubelet[2327]: E0908 23:48:43.271604 2327 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:48:43.295887 kubelet[2327]: I0908 23:48:43.295821 2327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:48:43.295887 kubelet[2327]: I0908 23:48:43.295884 2327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:48:43.296045 kubelet[2327]: I0908 23:48:43.295925 2327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:48:43.296045 kubelet[2327]: I0908 23:48:43.295950 2327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:48:43.296045 kubelet[2327]: I0908 23:48:43.295971 2327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:48:43.396233 
kubelet[2327]: I0908 23:48:43.396192 2327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 8 23:48:43.493230 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. Sep 8 23:48:43.495055 kubelet[2327]: E0908 23:48:43.495036 2327 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:48:43.572704 kubelet[2327]: E0908 23:48:43.572651 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:43.573196 containerd[1512]: time="2025-09-08T23:48:43.573161630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 8 23:48:43.583795 kubelet[2327]: E0908 23:48:43.583741 2327 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 8 23:48:43.650489 kubelet[2327]: I0908 23:48:43.650457 2327 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:48:43.650836 kubelet[2327]: E0908 23:48:43.650807 2327 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Sep 8 23:48:43.795793 kubelet[2327]: E0908 23:48:43.795765 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:43.796378 containerd[1512]: time="2025-09-08T23:48:43.796188731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 8 23:48:43.925441 kubelet[2327]: E0908 23:48:43.925395 2327 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 8 23:48:43.965616 kubelet[2327]: E0908 23:48:43.965562 2327 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="1.6s" Sep 8 23:48:44.145580 kubelet[2327]: E0908 23:48:44.145454 2327 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 8 23:48:44.235869 kubelet[2327]: E0908 23:48:44.235803 2327 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 8 23:48:44.452320 kubelet[2327]: I0908 23:48:44.452163 2327 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:48:44.452570 kubelet[2327]: E0908 23:48:44.452515 2327 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Sep 8 23:48:44.578390 kubelet[2327]: E0908 23:48:44.578342 2327 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 8 23:48:45.528849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount596636739.mount: Deactivated successfully. Sep 8 23:48:45.566694 kubelet[2327]: E0908 23:48:45.566599 2327 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="3.2s" Sep 8 23:48:45.604203 kubelet[2327]: E0908 23:48:45.604148 2327 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 8 23:48:46.054779 kubelet[2327]: I0908 23:48:46.054730 2327 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:48:46.055183 kubelet[2327]: E0908 23:48:46.055147 2327 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Sep 8 23:48:46.100431 containerd[1512]: time="2025-09-08T23:48:46.100350296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:48:46.263671 containerd[1512]: time="2025-09-08T23:48:46.263511947Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 8 23:48:46.275488 containerd[1512]: time="2025-09-08T23:48:46.275410698Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:48:46.278804 containerd[1512]: time="2025-09-08T23:48:46.278696688Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 8 23:48:46.281310 containerd[1512]: time="2025-09-08T23:48:46.281247908Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:48:46.283432 containerd[1512]: time="2025-09-08T23:48:46.283388995Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 8 23:48:46.285399 containerd[1512]: time="2025-09-08T23:48:46.285361094Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:48:46.291039 containerd[1512]: time="2025-09-08T23:48:46.291004308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:48:46.292287 containerd[1512]: time="2025-09-08T23:48:46.292241175Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.081249893s" Sep 8 23:48:46.293192 containerd[1512]: time="2025-09-08T23:48:46.293157047Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.496869329s" Sep 8 23:48:46.296271 containerd[1512]: time="2025-09-08T23:48:46.296229057Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.722970083s" Sep 8 23:48:46.408401 kubelet[2327]: E0908 23:48:46.408178 2327 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 8 23:48:46.505942 containerd[1512]: time="2025-09-08T23:48:46.504835585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:48:46.505942 containerd[1512]: time="2025-09-08T23:48:46.504922038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:48:46.505942 containerd[1512]: time="2025-09-08T23:48:46.504942116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:48:46.505942 containerd[1512]: time="2025-09-08T23:48:46.505053214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:48:46.505942 containerd[1512]: time="2025-09-08T23:48:46.504555379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:48:46.505942 containerd[1512]: time="2025-09-08T23:48:46.504638955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:48:46.505942 containerd[1512]: time="2025-09-08T23:48:46.504653923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:48:46.505942 containerd[1512]: time="2025-09-08T23:48:46.504774691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:48:46.506350 containerd[1512]: time="2025-09-08T23:48:46.504265062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:48:46.506604 containerd[1512]: time="2025-09-08T23:48:46.506427229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:48:46.506709 containerd[1512]: time="2025-09-08T23:48:46.506452005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:48:46.507168 containerd[1512]: time="2025-09-08T23:48:46.507087631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:48:46.549164 systemd[1]: run-containerd-runc-k8s.io-bcf3af011939e3459fda6a15281ecf892ada9a3b5c2c3c43ce0179bd2ed5cc50-runc.zPsRU3.mount: Deactivated successfully. Sep 8 23:48:46.560779 systemd[1]: Started cri-containerd-bcf3af011939e3459fda6a15281ecf892ada9a3b5c2c3c43ce0179bd2ed5cc50.scope - libcontainer container bcf3af011939e3459fda6a15281ecf892ada9a3b5c2c3c43ce0179bd2ed5cc50. Sep 8 23:48:46.576090 kubelet[2327]: E0908 23:48:46.576034 2327 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 8 23:48:46.581671 systemd[1]: Started cri-containerd-37c06da63b7b2f895cc84beaee5a6c814099c52cdbd3cb78e2935600400d676c.scope - libcontainer container 37c06da63b7b2f895cc84beaee5a6c814099c52cdbd3cb78e2935600400d676c. Sep 8 23:48:46.584788 systemd[1]: Started cri-containerd-c97f4b0edba8264e7556a0d9d324a63202a3d41dfb94adff2e20424629c64254.scope - libcontainer container c97f4b0edba8264e7556a0d9d324a63202a3d41dfb94adff2e20424629c64254. 
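Three sandboxes come up here, one per control-plane static pod: each "loading plugin" block is a runc shim (io.containerd.runc.v2) initializing, and systemd wraps every container in a cri-containerd-<id>.scope under the kubepods slices created earlier. Even though the API server cannot list pods yet, the CRI layer can; a sketch, reusing the crictl setup from above (the systemd-cgls call assumes a systemd build that resolves unit names):

    # Sandboxes (pods) and containers as containerd sees them.
    crictl pods
    crictl ps -a
    # The matching cri-containerd-*.scope units in the kubepods cgroup tree.
    systemd-cgls -u kubepods.slice

The recurring dns.go "Nameserver limits exceeded" warnings in the same stretch are a separate, harmless matter: the host's /etc/resolv.conf lists more nameservers than the kubelet will hand to pods (three), so it trims the list to 1.1.1.1 1.0.0.1 8.8.8.8 and logs the fact each time it builds pod DNS config.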
Sep 8 23:48:46.667142 containerd[1512]: time="2025-09-08T23:48:46.666975734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"37c06da63b7b2f895cc84beaee5a6c814099c52cdbd3cb78e2935600400d676c\"" Sep 8 23:48:46.669559 kubelet[2327]: E0908 23:48:46.669522 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:46.675004 containerd[1512]: time="2025-09-08T23:48:46.674675306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dbc491df5bdd33c0b951ce57a70cc062,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcf3af011939e3459fda6a15281ecf892ada9a3b5c2c3c43ce0179bd2ed5cc50\"" Sep 8 23:48:46.675466 kubelet[2327]: E0908 23:48:46.675403 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:46.677705 containerd[1512]: time="2025-09-08T23:48:46.677566524Z" level=info msg="CreateContainer within sandbox \"37c06da63b7b2f895cc84beaee5a6c814099c52cdbd3cb78e2935600400d676c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 8 23:48:46.681459 containerd[1512]: time="2025-09-08T23:48:46.681204457Z" level=info msg="CreateContainer within sandbox \"bcf3af011939e3459fda6a15281ecf892ada9a3b5c2c3c43ce0179bd2ed5cc50\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 8 23:48:46.686763 containerd[1512]: time="2025-09-08T23:48:46.686721484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"c97f4b0edba8264e7556a0d9d324a63202a3d41dfb94adff2e20424629c64254\"" Sep 8 23:48:46.687804 kubelet[2327]: E0908 23:48:46.687775 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:46.695137 containerd[1512]: time="2025-09-08T23:48:46.695089915Z" level=info msg="CreateContainer within sandbox \"c97f4b0edba8264e7556a0d9d324a63202a3d41dfb94adff2e20424629c64254\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 8 23:48:46.702265 containerd[1512]: time="2025-09-08T23:48:46.702207152Z" level=info msg="CreateContainer within sandbox \"37c06da63b7b2f895cc84beaee5a6c814099c52cdbd3cb78e2935600400d676c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"18595beb48edcc6620416ed28e31d4cfb72194cf1ef0118eb9e5614ab475b07d\"" Sep 8 23:48:46.702943 containerd[1512]: time="2025-09-08T23:48:46.702915665Z" level=info msg="StartContainer for \"18595beb48edcc6620416ed28e31d4cfb72194cf1ef0118eb9e5614ab475b07d\"" Sep 8 23:48:46.711336 containerd[1512]: time="2025-09-08T23:48:46.711035779Z" level=info msg="CreateContainer within sandbox \"bcf3af011939e3459fda6a15281ecf892ada9a3b5c2c3c43ce0179bd2ed5cc50\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7a3bd8eac05e21683a8286ce61fc8252f26e3b2ad0b702138ed8f106e12b9221\"" Sep 8 23:48:46.711776 containerd[1512]: time="2025-09-08T23:48:46.711744452Z" level=info msg="StartContainer for \"7a3bd8eac05e21683a8286ce61fc8252f26e3b2ad0b702138ed8f106e12b9221\"" Sep 8 23:48:46.723998 
containerd[1512]: time="2025-09-08T23:48:46.723907580Z" level=info msg="CreateContainer within sandbox \"c97f4b0edba8264e7556a0d9d324a63202a3d41dfb94adff2e20424629c64254\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5071c0df422f90b5f1b9191bbc2d736a264af30ac0dc31e373db529fa256eb65\"" Sep 8 23:48:46.724556 containerd[1512]: time="2025-09-08T23:48:46.724531473Z" level=info msg="StartContainer for \"5071c0df422f90b5f1b9191bbc2d736a264af30ac0dc31e373db529fa256eb65\"" Sep 8 23:48:46.734959 systemd[1]: Started cri-containerd-18595beb48edcc6620416ed28e31d4cfb72194cf1ef0118eb9e5614ab475b07d.scope - libcontainer container 18595beb48edcc6620416ed28e31d4cfb72194cf1ef0118eb9e5614ab475b07d. Sep 8 23:48:46.754511 systemd[1]: Started cri-containerd-7a3bd8eac05e21683a8286ce61fc8252f26e3b2ad0b702138ed8f106e12b9221.scope - libcontainer container 7a3bd8eac05e21683a8286ce61fc8252f26e3b2ad0b702138ed8f106e12b9221. Sep 8 23:48:46.760675 systemd[1]: Started cri-containerd-5071c0df422f90b5f1b9191bbc2d736a264af30ac0dc31e373db529fa256eb65.scope - libcontainer container 5071c0df422f90b5f1b9191bbc2d736a264af30ac0dc31e373db529fa256eb65. Sep 8 23:48:46.803186 containerd[1512]: time="2025-09-08T23:48:46.803008185Z" level=info msg="StartContainer for \"18595beb48edcc6620416ed28e31d4cfb72194cf1ef0118eb9e5614ab475b07d\" returns successfully" Sep 8 23:48:46.848010 containerd[1512]: time="2025-09-08T23:48:46.847612534Z" level=info msg="StartContainer for \"7a3bd8eac05e21683a8286ce61fc8252f26e3b2ad0b702138ed8f106e12b9221\" returns successfully" Sep 8 23:48:46.848010 containerd[1512]: time="2025-09-08T23:48:46.847738080Z" level=info msg="StartContainer for \"5071c0df422f90b5f1b9191bbc2d736a264af30ac0dc31e373db529fa256eb65\" returns successfully" Sep 8 23:48:46.980455 kubelet[2327]: E0908 23:48:46.979807 2327 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 8 23:48:47.717978 kubelet[2327]: E0908 23:48:47.717712 2327 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:48:47.717978 kubelet[2327]: E0908 23:48:47.717840 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:47.720540 kubelet[2327]: E0908 23:48:47.720514 2327 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:48:47.720611 kubelet[2327]: E0908 23:48:47.720602 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:47.721582 kubelet[2327]: E0908 23:48:47.721564 2327 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:48:47.721669 kubelet[2327]: E0908 23:48:47.721654 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:48.726071 kubelet[2327]: E0908 
23:48:48.726027 2327 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:48:48.726642 kubelet[2327]: E0908 23:48:48.726165 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:48.726642 kubelet[2327]: E0908 23:48:48.726393 2327 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:48:48.726642 kubelet[2327]: E0908 23:48:48.726483 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:48.726764 kubelet[2327]: E0908 23:48:48.726711 2327 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:48:48.728681 kubelet[2327]: E0908 23:48:48.726833 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:48.899363 kubelet[2327]: E0908 23:48:48.899317 2327 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 8 23:48:49.068899 kubelet[2327]: E0908 23:48:49.068642 2327 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1863738b7ded5ba9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-08 23:48:42.554981289 +0000 UTC m=+2.626277150,LastTimestamp:2025-09-08 23:48:42.554981289 +0000 UTC m=+2.626277150,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 8 23:48:49.253729 kubelet[2327]: E0908 23:48:49.253660 2327 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 8 23:48:49.256980 kubelet[2327]: I0908 23:48:49.256952 2327 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:48:49.265133 kubelet[2327]: I0908 23:48:49.265089 2327 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 8 23:48:49.265133 kubelet[2327]: E0908 23:48:49.265124 2327 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 8 23:48:49.282031 kubelet[2327]: E0908 23:48:49.281880 2327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:48:49.383068 kubelet[2327]: E0908 23:48:49.382906 2327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:48:49.483716 kubelet[2327]: E0908 23:48:49.483654 2327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:48:49.584246 kubelet[2327]: E0908 
23:48:49.584150 2327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:48:49.685359 kubelet[2327]: E0908 23:48:49.685108 2327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:48:49.726357 kubelet[2327]: E0908 23:48:49.726319 2327 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:48:49.726888 kubelet[2327]: E0908 23:48:49.726484 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:49.726888 kubelet[2327]: E0908 23:48:49.726537 2327 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:48:49.726888 kubelet[2327]: E0908 23:48:49.726685 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:49.786021 kubelet[2327]: E0908 23:48:49.785958 2327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:48:49.886469 kubelet[2327]: E0908 23:48:49.886391 2327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:48:49.987685 kubelet[2327]: E0908 23:48:49.987536 2327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:48:50.088713 kubelet[2327]: E0908 23:48:50.088650 2327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:48:50.189289 kubelet[2327]: E0908 23:48:50.189225 2327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:48:50.290445 kubelet[2327]: E0908 23:48:50.290261 2327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:48:50.390962 kubelet[2327]: E0908 23:48:50.390903 2327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:48:50.491659 kubelet[2327]: E0908 23:48:50.491592 2327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:48:50.592831 kubelet[2327]: E0908 23:48:50.592657 2327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:48:50.693561 kubelet[2327]: E0908 23:48:50.693486 2327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:48:50.729109 kubelet[2327]: E0908 23:48:50.729069 2327 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:48:50.729618 kubelet[2327]: E0908 23:48:50.729224 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:50.794194 kubelet[2327]: E0908 23:48:50.794126 2327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 
23:48:50.895097 kubelet[2327]: E0908 23:48:50.894931 2327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:48:50.996173 kubelet[2327]: E0908 23:48:50.996085 2327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:48:51.096268 kubelet[2327]: E0908 23:48:51.096207 2327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:48:51.197053 kubelet[2327]: E0908 23:48:51.196896 2327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:48:51.297800 kubelet[2327]: E0908 23:48:51.297732 2327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:48:51.362462 kubelet[2327]: I0908 23:48:51.362395 2327 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 8 23:48:51.402497 kubelet[2327]: I0908 23:48:51.402440 2327 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:48:51.428227 kubelet[2327]: I0908 23:48:51.428132 2327 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 8 23:48:51.551460 kubelet[2327]: I0908 23:48:51.551417 2327 apiserver.go:52] "Watching apiserver" Sep 8 23:48:51.560212 kubelet[2327]: E0908 23:48:51.560185 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:51.560343 kubelet[2327]: E0908 23:48:51.560245 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:51.562812 kubelet[2327]: I0908 23:48:51.562756 2327 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 8 23:48:51.729361 kubelet[2327]: E0908 23:48:51.729278 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:51.729942 kubelet[2327]: E0908 23:48:51.729676 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:52.076768 kubelet[2327]: E0908 23:48:52.076723 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:52.786617 kubelet[2327]: I0908 23:48:52.786537 2327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.786507228 podStartE2EDuration="1.786507228s" podCreationTimestamp="2025-09-08 23:48:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:48:52.74335578 +0000 UTC m=+12.814651641" watchObservedRunningTime="2025-09-08 23:48:52.786507228 +0000 UTC m=+12.857803089" Sep 8 23:48:52.887219 kubelet[2327]: I0908 23:48:52.887144 2327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" 
podStartSLOduration=1.887126017 podStartE2EDuration="1.887126017s" podCreationTimestamp="2025-09-08 23:48:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:48:52.786461573 +0000 UTC m=+12.857757434" watchObservedRunningTime="2025-09-08 23:48:52.887126017 +0000 UTC m=+12.958421878" Sep 8 23:48:52.887219 kubelet[2327]: I0908 23:48:52.887226 2327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.8872226589999999 podStartE2EDuration="1.887222659s" podCreationTimestamp="2025-09-08 23:48:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:48:52.88711699 +0000 UTC m=+12.958412851" watchObservedRunningTime="2025-09-08 23:48:52.887222659 +0000 UTC m=+12.958518520" Sep 8 23:48:53.994799 systemd[1]: Reload requested from client PID 2622 ('systemctl') (unit session-9.scope)... Sep 8 23:48:53.994819 systemd[1]: Reloading... Sep 8 23:48:54.094439 zram_generator::config[2669]: No configuration found. Sep 8 23:48:54.269867 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:48:54.393615 systemd[1]: Reloading finished in 398 ms. Sep 8 23:48:54.423390 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:48:54.451995 systemd[1]: kubelet.service: Deactivated successfully. Sep 8 23:48:54.452407 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:48:54.452474 systemd[1]: kubelet.service: Consumed 1.976s CPU time, 135.3M memory peak. Sep 8 23:48:54.461528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:48:54.646497 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:48:54.651534 (kubelet)[2711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 8 23:48:54.697368 kubelet[2711]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 8 23:48:54.697368 kubelet[2711]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 8 23:48:54.697368 kubelet[2711]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
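Unlike the earlier crash loop, this restart is operator-driven: a systemctl call from session-9 triggers the reload, the old kubelet is stopped cleanly (1.976s CPU, 135.3M memory peak), and PID 2711 takes over. The docker.socket complaint that surfaces on both reloads is systemd flagging a stale ListenStream path in the shipped unit; the conventional fix is a drop-in rather than editing the unit file, sketched here (the drop-in filename is arbitrary):

    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' > /etc/systemd/system/docker.socket.d/10-socket-path.conf
    [Socket]
    # Clear the inherited ListenStream, then point at the non-legacy path.
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload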
Sep 8 23:48:54.697939 kubelet[2711]: I0908 23:48:54.697447 2711 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 8 23:48:54.707069 kubelet[2711]: I0908 23:48:54.707023 2711 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 8 23:48:54.707069 kubelet[2711]: I0908 23:48:54.707059 2711 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 8 23:48:54.707446 kubelet[2711]: I0908 23:48:54.707424 2711 server.go:956] "Client rotation is on, will bootstrap in background" Sep 8 23:48:54.708850 kubelet[2711]: I0908 23:48:54.708810 2711 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 8 23:48:54.712236 kubelet[2711]: I0908 23:48:54.712191 2711 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 8 23:48:54.716065 kubelet[2711]: E0908 23:48:54.716002 2711 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 8 23:48:54.716065 kubelet[2711]: I0908 23:48:54.716050 2711 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 8 23:48:54.725262 kubelet[2711]: I0908 23:48:54.725150 2711 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 8 23:48:54.725696 kubelet[2711]: I0908 23:48:54.725667 2711 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 8 23:48:54.726045 kubelet[2711]: I0908 23:48:54.725756 2711 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 8 23:48:54.726045 kubelet[2711]: I0908 23:48:54.725936 2711 topology_manager.go:138] "Creating topology manager with 
none policy" Sep 8 23:48:54.726045 kubelet[2711]: I0908 23:48:54.725946 2711 container_manager_linux.go:303] "Creating device plugin manager" Sep 8 23:48:54.726045 kubelet[2711]: I0908 23:48:54.725995 2711 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:48:54.726546 kubelet[2711]: I0908 23:48:54.726484 2711 kubelet.go:480] "Attempting to sync node with API server" Sep 8 23:48:54.726546 kubelet[2711]: I0908 23:48:54.726508 2711 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 8 23:48:54.726546 kubelet[2711]: I0908 23:48:54.726539 2711 kubelet.go:386] "Adding apiserver pod source" Sep 8 23:48:54.726546 kubelet[2711]: I0908 23:48:54.726561 2711 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 8 23:48:54.732339 kubelet[2711]: I0908 23:48:54.731485 2711 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 8 23:48:54.732339 kubelet[2711]: I0908 23:48:54.732038 2711 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 8 23:48:54.740685 kubelet[2711]: I0908 23:48:54.740514 2711 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 8 23:48:54.740685 kubelet[2711]: I0908 23:48:54.740586 2711 server.go:1289] "Started kubelet" Sep 8 23:48:54.742401 kubelet[2711]: I0908 23:48:54.740901 2711 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 8 23:48:54.742401 kubelet[2711]: I0908 23:48:54.740933 2711 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 8 23:48:54.742401 kubelet[2711]: I0908 23:48:54.741445 2711 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 8 23:48:54.741277 sudo[2727]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 8 23:48:54.741713 sudo[2727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 8 23:48:54.743806 kubelet[2711]: I0908 23:48:54.743624 2711 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 8 23:48:54.745184 kubelet[2711]: E0908 23:48:54.744324 2711 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 8 23:48:54.745184 kubelet[2711]: I0908 23:48:54.744899 2711 server.go:317] "Adding debug handlers to kubelet server" Sep 8 23:48:54.745708 kubelet[2711]: I0908 23:48:54.745683 2711 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 8 23:48:54.748090 kubelet[2711]: I0908 23:48:54.747865 2711 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 8 23:48:54.748090 kubelet[2711]: I0908 23:48:54.748036 2711 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 8 23:48:54.751401 kubelet[2711]: I0908 23:48:54.750772 2711 factory.go:223] Registration of the systemd container factory successfully Sep 8 23:48:54.751401 kubelet[2711]: I0908 23:48:54.750949 2711 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 8 23:48:54.751401 kubelet[2711]: I0908 23:48:54.751049 2711 reconciler.go:26] "Reconciler: start to sync state" Sep 8 23:48:54.754470 kubelet[2711]: I0908 23:48:54.754433 2711 factory.go:223] Registration of the containerd container factory successfully Sep 8 23:48:54.770538 kubelet[2711]: I0908 23:48:54.770492 2711 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 8 23:48:54.773622 kubelet[2711]: I0908 23:48:54.773409 2711 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 8 23:48:54.773622 kubelet[2711]: I0908 23:48:54.773461 2711 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 8 23:48:54.773622 kubelet[2711]: I0908 23:48:54.773490 2711 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
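The NodeConfig dump above shows the kubelet's default hard eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. As a rough illustration of how a percentage threshold becomes an absolute trigger point, here is a toy model, not kubelet code; the capacities are assumed, since the log does not report them:

```go
package main

import "fmt"

// threshold mirrors the shape of the HardEvictionThresholds entries in the
// NodeConfig dump above: a signal plus either an absolute quantity or a
// percentage of capacity. Illustrative model only, not kubelet's types.
type threshold struct {
	signal     string
	percentage float64 // fraction of capacity; 0 when quantity is used
	quantity   int64   // bytes (or inodes); 0 when percentage is used
}

// triggerPoint converts a threshold into the absolute level below which
// the eviction manager would start evicting pods.
func triggerPoint(t threshold, capacity int64) int64 {
	if t.quantity > 0 {
		return t.quantity
	}
	return int64(t.percentage * float64(capacity))
}

func main() {
	// Assumed capacities for illustration only; not reported in the log.
	const memCapacity = 4 << 30  // 4 GiB
	const diskCapacity = 50 << 30 // 50 GiB

	checks := []struct {
		t   threshold
		cap int64
	}{
		{threshold{"memory.available", 0, 100 << 20}, memCapacity},
		{threshold{"nodefs.available", 0.10, 0}, diskCapacity},
		{threshold{"imagefs.available", 0.15, 0}, diskCapacity},
	}
	for _, c := range checks {
		fmt.Printf("%-18s evicts below %d bytes\n", c.t.signal, triggerPoint(c.t, c.cap))
	}
}
```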
Sep 8 23:48:54.773622 kubelet[2711]: I0908 23:48:54.773501 2711 kubelet.go:2436] "Starting kubelet main sync loop" Sep 8 23:48:54.773622 kubelet[2711]: E0908 23:48:54.773565 2711 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 8 23:48:54.800873 kubelet[2711]: I0908 23:48:54.800844 2711 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 8 23:48:54.801157 kubelet[2711]: I0908 23:48:54.801139 2711 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 8 23:48:54.801241 kubelet[2711]: I0908 23:48:54.801228 2711 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:48:54.801510 kubelet[2711]: I0908 23:48:54.801489 2711 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 8 23:48:54.801856 kubelet[2711]: I0908 23:48:54.801592 2711 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 8 23:48:54.801856 kubelet[2711]: I0908 23:48:54.801623 2711 policy_none.go:49] "None policy: Start" Sep 8 23:48:54.801856 kubelet[2711]: I0908 23:48:54.801636 2711 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 8 23:48:54.801856 kubelet[2711]: I0908 23:48:54.801650 2711 state_mem.go:35] "Initializing new in-memory state store" Sep 8 23:48:54.801856 kubelet[2711]: I0908 23:48:54.801775 2711 state_mem.go:75] "Updated machine memory state" Sep 8 23:48:54.806335 kubelet[2711]: E0908 23:48:54.806318 2711 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 8 23:48:54.806576 kubelet[2711]: I0908 23:48:54.806562 2711 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 8 23:48:54.806656 kubelet[2711]: I0908 23:48:54.806631 2711 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 8 23:48:54.807113 kubelet[2711]: I0908 23:48:54.807092 2711 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 8 23:48:54.809763 kubelet[2711]: E0908 23:48:54.809676 2711 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 8 23:48:54.874624 kubelet[2711]: I0908 23:48:54.874573 2711 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 8 23:48:54.875059 kubelet[2711]: I0908 23:48:54.875039 2711 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 8 23:48:54.875398 kubelet[2711]: I0908 23:48:54.875370 2711 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:48:54.915990 kubelet[2711]: I0908 23:48:54.915853 2711 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:48:54.920086 kubelet[2711]: E0908 23:48:54.920026 2711 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 8 23:48:54.952672 kubelet[2711]: I0908 23:48:54.952524 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dbc491df5bdd33c0b951ce57a70cc062-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dbc491df5bdd33c0b951ce57a70cc062\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:48:54.952672 kubelet[2711]: I0908 23:48:54.952580 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:48:54.952672 kubelet[2711]: I0908 23:48:54.952603 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:48:54.952672 kubelet[2711]: I0908 23:48:54.952626 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:48:54.952945 kubelet[2711]: I0908 23:48:54.952697 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 8 23:48:54.952945 kubelet[2711]: I0908 23:48:54.952742 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dbc491df5bdd33c0b951ce57a70cc062-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dbc491df5bdd33c0b951ce57a70cc062\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:48:54.952945 kubelet[2711]: I0908 23:48:54.952795 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dbc491df5bdd33c0b951ce57a70cc062-usr-share-ca-certificates\") pod 
\"kube-apiserver-localhost\" (UID: \"dbc491df5bdd33c0b951ce57a70cc062\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:48:54.952945 kubelet[2711]: I0908 23:48:54.952839 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:48:54.952945 kubelet[2711]: I0908 23:48:54.952858 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:48:54.961255 kubelet[2711]: E0908 23:48:54.961206 2711 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:48:54.994207 kubelet[2711]: E0908 23:48:54.993672 2711 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 8 23:48:54.994976 kubelet[2711]: I0908 23:48:54.994949 2711 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 8 23:48:54.995032 kubelet[2711]: I0908 23:48:54.995023 2711 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 8 23:48:55.220655 kubelet[2711]: E0908 23:48:55.220425 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:55.230145 sudo[2727]: pam_unix(sudo:session): session closed for user root Sep 8 23:48:55.262592 kubelet[2711]: E0908 23:48:55.262493 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:55.296856 kubelet[2711]: E0908 23:48:55.296766 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:55.727800 kubelet[2711]: I0908 23:48:55.727732 2711 apiserver.go:52] "Watching apiserver" Sep 8 23:48:55.748286 kubelet[2711]: I0908 23:48:55.748221 2711 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 8 23:48:55.788090 kubelet[2711]: I0908 23:48:55.787241 2711 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 8 23:48:55.788090 kubelet[2711]: E0908 23:48:55.787360 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:55.788090 kubelet[2711]: E0908 23:48:55.788000 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:55.792035 kubelet[2711]: E0908 23:48:55.792010 2711 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" 
pod="kube-system/kube-scheduler-localhost" Sep 8 23:48:55.792146 kubelet[2711]: E0908 23:48:55.792132 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:56.567124 sudo[1707]: pam_unix(sudo:session): session closed for user root Sep 8 23:48:56.568766 sshd[1706]: Connection closed by 10.0.0.1 port 46900 Sep 8 23:48:56.569679 sshd-session[1702]: pam_unix(sshd:session): session closed for user core Sep 8 23:48:56.574829 systemd[1]: sshd@8-10.0.0.19:22-10.0.0.1:46900.service: Deactivated successfully. Sep 8 23:48:56.577681 systemd[1]: session-9.scope: Deactivated successfully. Sep 8 23:48:56.577914 systemd[1]: session-9.scope: Consumed 7.072s CPU time, 251.7M memory peak. Sep 8 23:48:56.579483 systemd-logind[1487]: Session 9 logged out. Waiting for processes to exit. Sep 8 23:48:56.580427 systemd-logind[1487]: Removed session 9. Sep 8 23:48:56.788983 kubelet[2711]: E0908 23:48:56.788947 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:56.789519 kubelet[2711]: E0908 23:48:56.789004 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:58.521324 kubelet[2711]: I0908 23:48:58.521249 2711 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 8 23:48:58.521893 containerd[1512]: time="2025-09-08T23:48:58.521725485Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 8 23:48:58.522171 kubelet[2711]: I0908 23:48:58.521979 2711 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 8 23:48:58.564052 kubelet[2711]: E0908 23:48:58.564009 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:58.787111 systemd[1]: Created slice kubepods-besteffort-podc212c5f6_4c66_4317_81a2_de57c89c194c.slice - libcontainer container kubepods-besteffort-podc212c5f6_4c66_4317_81a2_de57c89c194c.slice. Sep 8 23:48:58.792634 kubelet[2711]: E0908 23:48:58.792152 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:48:58.810841 systemd[1]: Created slice kubepods-burstable-pod7cb5ce2e_2283_4545_9a13_59b22f04c5ea.slice - libcontainer container kubepods-burstable-pod7cb5ce2e_2283_4545_9a13_59b22f04c5ea.slice. 
Sep 8 23:48:58.876561 kubelet[2711]: I0908 23:48:58.876491 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c212c5f6-4c66-4317-81a2-de57c89c194c-kube-proxy\") pod \"kube-proxy-t6x4t\" (UID: \"c212c5f6-4c66-4317-81a2-de57c89c194c\") " pod="kube-system/kube-proxy-t6x4t" Sep 8 23:48:58.876561 kubelet[2711]: I0908 23:48:58.876552 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c212c5f6-4c66-4317-81a2-de57c89c194c-lib-modules\") pod \"kube-proxy-t6x4t\" (UID: \"c212c5f6-4c66-4317-81a2-de57c89c194c\") " pod="kube-system/kube-proxy-t6x4t" Sep 8 23:48:58.876561 kubelet[2711]: I0908 23:48:58.876573 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz478\" (UniqueName: \"kubernetes.io/projected/c212c5f6-4c66-4317-81a2-de57c89c194c-kube-api-access-kz478\") pod \"kube-proxy-t6x4t\" (UID: \"c212c5f6-4c66-4317-81a2-de57c89c194c\") " pod="kube-system/kube-proxy-t6x4t" Sep 8 23:48:58.876833 kubelet[2711]: I0908 23:48:58.876592 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-bpf-maps\") pod \"cilium-n2lrw\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " pod="kube-system/cilium-n2lrw" Sep 8 23:48:58.876833 kubelet[2711]: I0908 23:48:58.876609 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-cni-path\") pod \"cilium-n2lrw\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " pod="kube-system/cilium-n2lrw" Sep 8 23:48:58.876833 kubelet[2711]: I0908 23:48:58.876624 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c212c5f6-4c66-4317-81a2-de57c89c194c-xtables-lock\") pod \"kube-proxy-t6x4t\" (UID: \"c212c5f6-4c66-4317-81a2-de57c89c194c\") " pod="kube-system/kube-proxy-t6x4t" Sep 8 23:48:58.876833 kubelet[2711]: I0908 23:48:58.876640 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-etc-cni-netd\") pod \"cilium-n2lrw\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " pod="kube-system/cilium-n2lrw" Sep 8 23:48:58.876833 kubelet[2711]: I0908 23:48:58.876656 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-clustermesh-secrets\") pod \"cilium-n2lrw\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " pod="kube-system/cilium-n2lrw" Sep 8 23:48:58.876833 kubelet[2711]: I0908 23:48:58.876672 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-cilium-config-path\") pod \"cilium-n2lrw\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " pod="kube-system/cilium-n2lrw" Sep 8 23:48:58.877031 kubelet[2711]: I0908 23:48:58.876688 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-host-proc-sys-kernel\") pod \"cilium-n2lrw\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " pod="kube-system/cilium-n2lrw" Sep 8 23:48:58.877031 kubelet[2711]: I0908 23:48:58.876704 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-hubble-tls\") pod \"cilium-n2lrw\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " pod="kube-system/cilium-n2lrw" Sep 8 23:48:58.877031 kubelet[2711]: I0908 23:48:58.876719 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79bjv\" (UniqueName: \"kubernetes.io/projected/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-kube-api-access-79bjv\") pod \"cilium-n2lrw\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " pod="kube-system/cilium-n2lrw" Sep 8 23:48:58.877031 kubelet[2711]: I0908 23:48:58.876741 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-xtables-lock\") pod \"cilium-n2lrw\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " pod="kube-system/cilium-n2lrw" Sep 8 23:48:58.877031 kubelet[2711]: I0908 23:48:58.876757 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-hostproc\") pod \"cilium-n2lrw\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " pod="kube-system/cilium-n2lrw" Sep 8 23:48:58.877031 kubelet[2711]: I0908 23:48:58.876775 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-cilium-cgroup\") pod \"cilium-n2lrw\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " pod="kube-system/cilium-n2lrw" Sep 8 23:48:58.877228 kubelet[2711]: I0908 23:48:58.876805 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-lib-modules\") pod \"cilium-n2lrw\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " pod="kube-system/cilium-n2lrw" Sep 8 23:48:58.877228 kubelet[2711]: I0908 23:48:58.876829 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-host-proc-sys-net\") pod \"cilium-n2lrw\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " pod="kube-system/cilium-n2lrw" Sep 8 23:48:58.877228 kubelet[2711]: I0908 23:48:58.876847 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-cilium-run\") pod \"cilium-n2lrw\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " pod="kube-system/cilium-n2lrw" Sep 8 23:48:59.172633 kubelet[2711]: E0908 23:48:59.172593 2711 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 8 23:48:59.172633 kubelet[2711]: E0908 23:48:59.172625 2711 projected.go:194] Error preparing data for projected volume kube-api-access-kz478 for pod 
kube-system/kube-proxy-t6x4t: configmap "kube-root-ca.crt" not found Sep 8 23:48:59.172948 kubelet[2711]: E0908 23:48:59.172708 2711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c212c5f6-4c66-4317-81a2-de57c89c194c-kube-api-access-kz478 podName:c212c5f6-4c66-4317-81a2-de57c89c194c nodeName:}" failed. No retries permitted until 2025-09-08 23:48:59.67268042 +0000 UTC m=+5.015970605 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kz478" (UniqueName: "kubernetes.io/projected/c212c5f6-4c66-4317-81a2-de57c89c194c-kube-api-access-kz478") pod "kube-proxy-t6x4t" (UID: "c212c5f6-4c66-4317-81a2-de57c89c194c") : configmap "kube-root-ca.crt" not found Sep 8 23:48:59.173781 kubelet[2711]: E0908 23:48:59.173756 2711 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 8 23:48:59.173854 kubelet[2711]: E0908 23:48:59.173783 2711 projected.go:194] Error preparing data for projected volume kube-api-access-79bjv for pod kube-system/cilium-n2lrw: configmap "kube-root-ca.crt" not found Sep 8 23:48:59.173882 kubelet[2711]: E0908 23:48:59.173857 2711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-kube-api-access-79bjv podName:7cb5ce2e-2283-4545-9a13-59b22f04c5ea nodeName:}" failed. No retries permitted until 2025-09-08 23:48:59.673843875 +0000 UTC m=+5.017134050 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-79bjv" (UniqueName: "kubernetes.io/projected/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-kube-api-access-79bjv") pod "cilium-n2lrw" (UID: "7cb5ce2e-2283-4545-9a13-59b22f04c5ea") : configmap "kube-root-ca.crt" not found Sep 8 23:48:59.683362 kubelet[2711]: E0908 23:48:59.683276 2711 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 8 23:48:59.683362 kubelet[2711]: E0908 23:48:59.683333 2711 projected.go:194] Error preparing data for projected volume kube-api-access-79bjv for pod kube-system/cilium-n2lrw: configmap "kube-root-ca.crt" not found Sep 8 23:48:59.683362 kubelet[2711]: E0908 23:48:59.683386 2711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-kube-api-access-79bjv podName:7cb5ce2e-2283-4545-9a13-59b22f04c5ea nodeName:}" failed. No retries permitted until 2025-09-08 23:49:00.683370443 +0000 UTC m=+6.026660608 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-79bjv" (UniqueName: "kubernetes.io/projected/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-kube-api-access-79bjv") pod "cilium-n2lrw" (UID: "7cb5ce2e-2283-4545-9a13-59b22f04c5ea") : configmap "kube-root-ca.crt" not found Sep 8 23:48:59.684030 kubelet[2711]: E0908 23:48:59.683283 2711 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 8 23:48:59.684030 kubelet[2711]: E0908 23:48:59.683423 2711 projected.go:194] Error preparing data for projected volume kube-api-access-kz478 for pod kube-system/kube-proxy-t6x4t: configmap "kube-root-ca.crt" not found Sep 8 23:48:59.684030 kubelet[2711]: E0908 23:48:59.683479 2711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c212c5f6-4c66-4317-81a2-de57c89c194c-kube-api-access-kz478 podName:c212c5f6-4c66-4317-81a2-de57c89c194c nodeName:}" failed. 
No retries permitted until 2025-09-08 23:49:00.68345981 +0000 UTC m=+6.026749985 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-kz478" (UniqueName: "kubernetes.io/projected/c212c5f6-4c66-4317-81a2-de57c89c194c-kube-api-access-kz478") pod "kube-proxy-t6x4t" (UID: "c212c5f6-4c66-4317-81a2-de57c89c194c") : configmap "kube-root-ca.crt" not found Sep 8 23:49:00.673872 systemd[1]: Created slice kubepods-besteffort-podb473829b_4236_4ab3_99ad_eaadd6471914.slice - libcontainer container kubepods-besteffort-podb473829b_4236_4ab3_99ad_eaadd6471914.slice. Sep 8 23:49:00.689548 kubelet[2711]: I0908 23:49:00.689497 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b473829b-4236-4ab3-99ad-eaadd6471914-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-vhl9l\" (UID: \"b473829b-4236-4ab3-99ad-eaadd6471914\") " pod="kube-system/cilium-operator-6c4d7847fc-vhl9l" Sep 8 23:49:00.689947 kubelet[2711]: I0908 23:49:00.689560 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxxz5\" (UniqueName: \"kubernetes.io/projected/b473829b-4236-4ab3-99ad-eaadd6471914-kube-api-access-cxxz5\") pod \"cilium-operator-6c4d7847fc-vhl9l\" (UID: \"b473829b-4236-4ab3-99ad-eaadd6471914\") " pod="kube-system/cilium-operator-6c4d7847fc-vhl9l" Sep 8 23:49:00.904938 kubelet[2711]: E0908 23:49:00.904849 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:00.905736 containerd[1512]: time="2025-09-08T23:49:00.905657136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t6x4t,Uid:c212c5f6-4c66-4317-81a2-de57c89c194c,Namespace:kube-system,Attempt:0,}" Sep 8 23:49:00.914005 kubelet[2711]: E0908 23:49:00.913968 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:00.914556 containerd[1512]: time="2025-09-08T23:49:00.914521179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n2lrw,Uid:7cb5ce2e-2283-4545-9a13-59b22f04c5ea,Namespace:kube-system,Attempt:0,}" Sep 8 23:49:00.978216 kubelet[2711]: E0908 23:49:00.978003 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:00.978727 containerd[1512]: time="2025-09-08T23:49:00.978678753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vhl9l,Uid:b473829b-4236-4ab3-99ad-eaadd6471914,Namespace:kube-system,Attempt:0,}" Sep 8 23:49:01.243427 containerd[1512]: time="2025-09-08T23:49:01.241574569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:49:01.243427 containerd[1512]: time="2025-09-08T23:49:01.241660590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:49:01.243427 containerd[1512]: time="2025-09-08T23:49:01.241706917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:49:01.243427 containerd[1512]: time="2025-09-08T23:49:01.241820150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:49:01.248481 containerd[1512]: time="2025-09-08T23:49:01.248201962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:49:01.248481 containerd[1512]: time="2025-09-08T23:49:01.248265922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:49:01.248481 containerd[1512]: time="2025-09-08T23:49:01.248280218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:49:01.248481 containerd[1512]: time="2025-09-08T23:49:01.248384785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:49:01.260533 containerd[1512]: time="2025-09-08T23:49:01.259134567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:49:01.260533 containerd[1512]: time="2025-09-08T23:49:01.259198067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:49:01.260533 containerd[1512]: time="2025-09-08T23:49:01.259212815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:49:01.260878 containerd[1512]: time="2025-09-08T23:49:01.260789615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:49:01.270561 systemd[1]: Started cri-containerd-1bcb71ca9fe37b40872b8cfb2fc4ddb864c908032b002c639928c958ba577eaa.scope - libcontainer container 1bcb71ca9fe37b40872b8cfb2fc4ddb864c908032b002c639928c958ba577eaa. Sep 8 23:49:01.276664 systemd[1]: Started cri-containerd-7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce.scope - libcontainer container 7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce. Sep 8 23:49:01.286168 systemd[1]: Started cri-containerd-6c4c87e41c23bd4334e355ca9591ee64344fdac2c8e925c5945b640e11f1ad52.scope - libcontainer container 6c4c87e41c23bd4334e355ca9591ee64344fdac2c8e925c5945b640e11f1ad52. 
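The MountVolume.SetUp failures a few entries back are retried with a doubling delay (500ms, then 1s) until the kube-root-ca.crt ConfigMap is published; kubelet's nestedpendingoperations applies per-operation exponential backoff. A generic sketch of that retry pattern follows; the 500ms start and the doubling match the log, while the cap and attempt count are arbitrary:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries op, doubling the wait after each failure up to
// maxDelay. This is the general pattern behind the "No retries permitted
// until ..." messages above, not kubelet's actual implementation.
func retryWithBackoff(op func() error, initial, maxDelay time.Duration, attempts int) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; no retries permitted for %s\n", i+1, err, delay)
		time.Sleep(delay)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(func() error {
		calls++
		if calls < 3 {
			return errors.New(`configmap "kube-root-ca.crt" not found`)
		}
		return nil // the ConfigMap eventually shows up
	}, 500*time.Millisecond, 2*time.Minute, 5)
	fmt.Println("final:", err)
}
```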
Sep 8 23:49:01.309920 containerd[1512]: time="2025-09-08T23:49:01.309874211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n2lrw,Uid:7cb5ce2e-2283-4545-9a13-59b22f04c5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce\"" Sep 8 23:49:01.310733 kubelet[2711]: E0908 23:49:01.310708 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:01.314835 containerd[1512]: time="2025-09-08T23:49:01.314717444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t6x4t,Uid:c212c5f6-4c66-4317-81a2-de57c89c194c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bcb71ca9fe37b40872b8cfb2fc4ddb864c908032b002c639928c958ba577eaa\"" Sep 8 23:49:01.314944 containerd[1512]: time="2025-09-08T23:49:01.314908893Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 8 23:49:01.316140 kubelet[2711]: E0908 23:49:01.316078 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:01.324711 containerd[1512]: time="2025-09-08T23:49:01.324647727Z" level=info msg="CreateContainer within sandbox \"1bcb71ca9fe37b40872b8cfb2fc4ddb864c908032b002c639928c958ba577eaa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 8 23:49:01.349844 containerd[1512]: time="2025-09-08T23:49:01.349778520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vhl9l,Uid:b473829b-4236-4ab3-99ad-eaadd6471914,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c4c87e41c23bd4334e355ca9591ee64344fdac2c8e925c5945b640e11f1ad52\"" Sep 8 23:49:01.350596 kubelet[2711]: E0908 23:49:01.350558 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:01.352842 containerd[1512]: time="2025-09-08T23:49:01.352783362Z" level=info msg="CreateContainer within sandbox \"1bcb71ca9fe37b40872b8cfb2fc4ddb864c908032b002c639928c958ba577eaa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f97c12a4103b29bd94f5cba608e14a16e1bd7a6e70cbe641499d29ec0ede3b4e\"" Sep 8 23:49:01.353519 containerd[1512]: time="2025-09-08T23:49:01.353482153Z" level=info msg="StartContainer for \"f97c12a4103b29bd94f5cba608e14a16e1bd7a6e70cbe641499d29ec0ede3b4e\"" Sep 8 23:49:01.389520 systemd[1]: Started cri-containerd-f97c12a4103b29bd94f5cba608e14a16e1bd7a6e70cbe641499d29ec0ede3b4e.scope - libcontainer container f97c12a4103b29bd94f5cba608e14a16e1bd7a6e70cbe641499d29ec0ede3b4e. 
Sep 8 23:49:01.433473 containerd[1512]: time="2025-09-08T23:49:01.433415507Z" level=info msg="StartContainer for \"f97c12a4103b29bd94f5cba608e14a16e1bd7a6e70cbe641499d29ec0ede3b4e\" returns successfully" Sep 8 23:49:01.799273 kubelet[2711]: E0908 23:49:01.799221 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:01.955334 kubelet[2711]: I0908 23:49:01.955211 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t6x4t" podStartSLOduration=3.95518995 podStartE2EDuration="3.95518995s" podCreationTimestamp="2025-09-08 23:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:49:01.954532916 +0000 UTC m=+7.297823091" watchObservedRunningTime="2025-09-08 23:49:01.95518995 +0000 UTC m=+7.298480115" Sep 8 23:49:03.672155 kubelet[2711]: E0908 23:49:03.672114 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:03.802809 kubelet[2711]: E0908 23:49:03.802762 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:03.825887 kubelet[2711]: E0908 23:49:03.825834 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:04.804689 kubelet[2711]: E0908 23:49:04.804644 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:04.805191 kubelet[2711]: E0908 23:49:04.804911 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:12.680248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2027382952.mount: Deactivated successfully. 
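The pod_startup_latency_tracker line above reports kube-proxy running 3.95518995s after its creation timestamp; the zero pulling timestamps indicate no image pull was recorded, so the SLO and E2E durations coincide. The subtraction can be checked directly with the timestamps copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-09-08 23:48:58 +0000 UTC")
	running, _ := time.Parse(layout, "2025-09-08 23:49:01.95518995 +0000 UTC")
	// podStartE2EDuration = observedRunningTime - podCreationTimestamp
	fmt.Println(running.Sub(created)) // 3.95518995s, matching the log
}
```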
Sep 8 23:49:22.131869 containerd[1512]: time="2025-09-08T23:49:22.131777087Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:49:22.167440 containerd[1512]: time="2025-09-08T23:49:22.167329789Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 8 23:49:22.187244 containerd[1512]: time="2025-09-08T23:49:22.187148219Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:49:22.189659 containerd[1512]: time="2025-09-08T23:49:22.189592733Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 20.874640578s" Sep 8 23:49:22.189659 containerd[1512]: time="2025-09-08T23:49:22.189648711Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 8 23:49:22.200141 containerd[1512]: time="2025-09-08T23:49:22.199840996Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 8 23:49:22.288883 containerd[1512]: time="2025-09-08T23:49:22.288809651Z" level=info msg="CreateContainer within sandbox \"7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 8 23:49:23.529359 containerd[1512]: time="2025-09-08T23:49:23.529279410Z" level=info msg="CreateContainer within sandbox \"7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2dfafce4ea117bc17dd84641228d963f94dd8eadc66f81c2027a2ee80ea380d6\"" Sep 8 23:49:23.530000 containerd[1512]: time="2025-09-08T23:49:23.529948801Z" level=info msg="StartContainer for \"2dfafce4ea117bc17dd84641228d963f94dd8eadc66f81c2027a2ee80ea380d6\"" Sep 8 23:49:23.574584 systemd[1]: Started cri-containerd-2dfafce4ea117bc17dd84641228d963f94dd8eadc66f81c2027a2ee80ea380d6.scope - libcontainer container 2dfafce4ea117bc17dd84641228d963f94dd8eadc66f81c2027a2ee80ea380d6. Sep 8 23:49:23.605735 containerd[1512]: time="2025-09-08T23:49:23.605676963Z" level=info msg="StartContainer for \"2dfafce4ea117bc17dd84641228d963f94dd8eadc66f81c2027a2ee80ea380d6\" returns successfully" Sep 8 23:49:23.622320 systemd[1]: cri-containerd-2dfafce4ea117bc17dd84641228d963f94dd8eadc66f81c2027a2ee80ea380d6.scope: Deactivated successfully. 
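The cilium image was pulled by digest, which is why the repo tag is empty in the Pulled message: 166,730,503 bytes read over 20.874640578s. A quick arithmetic check of the effective throughput, using only the numbers from the log:

```go
package main

import "fmt"

func main() {
	const bytesRead = 166730503  // "bytes read" from the log
	const seconds = 20.874640578 // pull duration from the log
	mibps := bytesRead / seconds / (1 << 20)
	fmt.Printf("~%.1f MiB/s\n", mibps) // ~7.6 MiB/s
}
```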
Sep 8 23:49:23.915524 kubelet[2711]: E0908 23:49:23.915425 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:24.348072 containerd[1512]: time="2025-09-08T23:49:24.347937162Z" level=info msg="shim disconnected" id=2dfafce4ea117bc17dd84641228d963f94dd8eadc66f81c2027a2ee80ea380d6 namespace=k8s.io Sep 8 23:49:24.348072 containerd[1512]: time="2025-09-08T23:49:24.348064497Z" level=warning msg="cleaning up after shim disconnected" id=2dfafce4ea117bc17dd84641228d963f94dd8eadc66f81c2027a2ee80ea380d6 namespace=k8s.io Sep 8 23:49:24.348072 containerd[1512]: time="2025-09-08T23:49:24.348083063Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:49:24.513428 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2dfafce4ea117bc17dd84641228d963f94dd8eadc66f81c2027a2ee80ea380d6-rootfs.mount: Deactivated successfully. Sep 8 23:49:24.914856 kubelet[2711]: E0908 23:49:24.914792 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:25.062134 containerd[1512]: time="2025-09-08T23:49:25.062082790Z" level=info msg="CreateContainer within sandbox \"7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 8 23:49:25.539657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1611920439.mount: Deactivated successfully. Sep 8 23:49:25.718579 containerd[1512]: time="2025-09-08T23:49:25.718508799Z" level=info msg="CreateContainer within sandbox \"7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2f6a63ad48119e057cdfa8cde4350a0b52f1e8a66532716e3429b0ce7b8c3e02\"" Sep 8 23:49:25.719220 containerd[1512]: time="2025-09-08T23:49:25.719157427Z" level=info msg="StartContainer for \"2f6a63ad48119e057cdfa8cde4350a0b52f1e8a66532716e3429b0ce7b8c3e02\"" Sep 8 23:49:25.752471 systemd[1]: Started cri-containerd-2f6a63ad48119e057cdfa8cde4350a0b52f1e8a66532716e3429b0ce7b8c3e02.scope - libcontainer container 2f6a63ad48119e057cdfa8cde4350a0b52f1e8a66532716e3429b0ce7b8c3e02. Sep 8 23:49:25.884993 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 8 23:49:25.885266 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:49:25.885494 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:49:25.891910 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:49:25.894736 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 8 23:49:25.895632 systemd[1]: cri-containerd-2f6a63ad48119e057cdfa8cde4350a0b52f1e8a66532716e3429b0ce7b8c3e02.scope: Deactivated successfully. Sep 8 23:49:25.905865 containerd[1512]: time="2025-09-08T23:49:25.905779315Z" level=info msg="StartContainer for \"2f6a63ad48119e057cdfa8cde4350a0b52f1e8a66532716e3429b0ce7b8c3e02\" returns successfully" Sep 8 23:49:25.911738 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
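The systemd-sysctl restart above coincides with cilium's apply-sysctl-overwrites init container; that container exists to pin sysctls the agent depends on, though neither the exact keys it writes nor the trigger for the service restart is visible in this log. For illustration only, setting a sysctl from Go uses the same /proc/sys mechanism; the key below is an example, not necessarily one cilium touches:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// setSysctl writes a value under /proc/sys, the same mechanism an init
// container like apply-sysctl-overwrites relies on. Requires root.
func setSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	// Example key only; the log does not show which sysctls cilium sets.
	if err := setSysctl("net.ipv4.ip_forward", "1"); err != nil {
		fmt.Fprintln(os.Stderr, "sysctl write failed:", err)
	}
}
```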
Sep 8 23:49:26.110022 containerd[1512]: time="2025-09-08T23:49:26.109947455Z" level=info msg="shim disconnected" id=2f6a63ad48119e057cdfa8cde4350a0b52f1e8a66532716e3429b0ce7b8c3e02 namespace=k8s.io Sep 8 23:49:26.110022 containerd[1512]: time="2025-09-08T23:49:26.110007661Z" level=warning msg="cleaning up after shim disconnected" id=2f6a63ad48119e057cdfa8cde4350a0b52f1e8a66532716e3429b0ce7b8c3e02 namespace=k8s.io Sep 8 23:49:26.110022 containerd[1512]: time="2025-09-08T23:49:26.110016548Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:49:26.537438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f6a63ad48119e057cdfa8cde4350a0b52f1e8a66532716e3429b0ce7b8c3e02-rootfs.mount: Deactivated successfully. Sep 8 23:49:26.912532 kubelet[2711]: E0908 23:49:26.912494 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:26.977859 containerd[1512]: time="2025-09-08T23:49:26.977569178Z" level=info msg="CreateContainer within sandbox \"7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 8 23:49:27.477317 containerd[1512]: time="2025-09-08T23:49:27.477232850Z" level=info msg="CreateContainer within sandbox \"7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e406fa8e6d5e7de795f1f32815ae837e92e179039384d39e3c17c3dc4e8f3cee\"" Sep 8 23:49:27.477775 containerd[1512]: time="2025-09-08T23:49:27.477748411Z" level=info msg="StartContainer for \"e406fa8e6d5e7de795f1f32815ae837e92e179039384d39e3c17c3dc4e8f3cee\"" Sep 8 23:49:27.522623 systemd[1]: Started cri-containerd-e406fa8e6d5e7de795f1f32815ae837e92e179039384d39e3c17c3dc4e8f3cee.scope - libcontainer container e406fa8e6d5e7de795f1f32815ae837e92e179039384d39e3c17c3dc4e8f3cee. Sep 8 23:49:27.565928 systemd[1]: cri-containerd-e406fa8e6d5e7de795f1f32815ae837e92e179039384d39e3c17c3dc4e8f3cee.scope: Deactivated successfully. Sep 8 23:49:27.571828 containerd[1512]: time="2025-09-08T23:49:27.571777851Z" level=info msg="StartContainer for \"e406fa8e6d5e7de795f1f32815ae837e92e179039384d39e3c17c3dc4e8f3cee\" returns successfully" Sep 8 23:49:27.593201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e406fa8e6d5e7de795f1f32815ae837e92e179039384d39e3c17c3dc4e8f3cee-rootfs.mount: Deactivated successfully. 
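The next init container, mount-bpf-fs, exists to ensure the BPF filesystem is mounted so the agent can pin maps and programs across restarts. A minimal equivalent using golang.org/x/sys/unix, assuming the conventional /sys/fs/bpf mount point and root privileges:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	const target = "/sys/fs/bpf" // conventional bpffs mount point
	if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil {
		// EBUSY typically means bpffs is already mounted there.
		fmt.Fprintln(os.Stderr, "mount bpffs:", err)
		return
	}
	fmt.Println("bpffs mounted at", target)
}
```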
Sep 8 23:49:27.621416 containerd[1512]: time="2025-09-08T23:49:27.621340247Z" level=info msg="shim disconnected" id=e406fa8e6d5e7de795f1f32815ae837e92e179039384d39e3c17c3dc4e8f3cee namespace=k8s.io Sep 8 23:49:27.621416 containerd[1512]: time="2025-09-08T23:49:27.621404851Z" level=warning msg="cleaning up after shim disconnected" id=e406fa8e6d5e7de795f1f32815ae837e92e179039384d39e3c17c3dc4e8f3cee namespace=k8s.io Sep 8 23:49:27.621416 containerd[1512]: time="2025-09-08T23:49:27.621414279Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:49:27.921425 kubelet[2711]: E0908 23:49:27.921387 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:28.146750 containerd[1512]: time="2025-09-08T23:49:28.146518925Z" level=info msg="CreateContainer within sandbox \"7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 8 23:49:28.287021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount981145983.mount: Deactivated successfully. Sep 8 23:49:28.468601 containerd[1512]: time="2025-09-08T23:49:28.468520825Z" level=info msg="CreateContainer within sandbox \"7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"87f75b01f242db99e9bb5102acf9ff9229f57126911876fd9d96618af6e5c7b4\"" Sep 8 23:49:28.469246 containerd[1512]: time="2025-09-08T23:49:28.469132911Z" level=info msg="StartContainer for \"87f75b01f242db99e9bb5102acf9ff9229f57126911876fd9d96618af6e5c7b4\"" Sep 8 23:49:28.505559 systemd[1]: Started cri-containerd-87f75b01f242db99e9bb5102acf9ff9229f57126911876fd9d96618af6e5c7b4.scope - libcontainer container 87f75b01f242db99e9bb5102acf9ff9229f57126911876fd9d96618af6e5c7b4. Sep 8 23:49:28.535060 systemd[1]: cri-containerd-87f75b01f242db99e9bb5102acf9ff9229f57126911876fd9d96618af6e5c7b4.scope: Deactivated successfully. Sep 8 23:49:28.846990 containerd[1512]: time="2025-09-08T23:49:28.846931839Z" level=info msg="StartContainer for \"87f75b01f242db99e9bb5102acf9ff9229f57126911876fd9d96618af6e5c7b4\" returns successfully" Sep 8 23:49:28.866577 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87f75b01f242db99e9bb5102acf9ff9229f57126911876fd9d96618af6e5c7b4-rootfs.mount: Deactivated successfully. 
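The repeating create/start/deactivate pattern across the last several entries is cilium's init container chain running to completion one step at a time: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, then clean-cilium-state, with the long-running cilium-agent started below. A toy model of those run-to-completion semantics; the step names come from the log, while the work itself is a placeholder:

```go
package main

import "fmt"

// Init containers run sequentially and each must exit successfully before
// the next starts; the main container only starts after all of them.
func runPod(initSteps []string, main string, run func(string) error) error {
	for _, step := range initSteps {
		if err := run(step); err != nil {
			return fmt.Errorf("init container %q failed: %w", step, err)
		}
	}
	return run(main) // the real cilium-agent keeps running; here it returns
}

func main() {
	steps := []string{"mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs", "clean-cilium-state"}
	err := runPod(steps, "cilium-agent", func(name string) error {
		fmt.Println("running", name) // placeholder for the real work
		return nil
	})
	fmt.Println("err:", err)
}
```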
Sep 8 23:49:28.924907 kubelet[2711]: E0908 23:49:28.924560 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:29.304051 containerd[1512]: time="2025-09-08T23:49:29.303999055Z" level=info msg="shim disconnected" id=87f75b01f242db99e9bb5102acf9ff9229f57126911876fd9d96618af6e5c7b4 namespace=k8s.io Sep 8 23:49:29.304051 containerd[1512]: time="2025-09-08T23:49:29.304049872Z" level=warning msg="cleaning up after shim disconnected" id=87f75b01f242db99e9bb5102acf9ff9229f57126911876fd9d96618af6e5c7b4 namespace=k8s.io Sep 8 23:49:29.304286 containerd[1512]: time="2025-09-08T23:49:29.304058159Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:49:29.928197 kubelet[2711]: E0908 23:49:29.928157 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:30.233113 containerd[1512]: time="2025-09-08T23:49:30.232966072Z" level=info msg="CreateContainer within sandbox \"7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 8 23:49:31.108210 containerd[1512]: time="2025-09-08T23:49:31.108145645Z" level=info msg="CreateContainer within sandbox \"7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"09fca61e3bfc4920575bf36e30001c7359b6b7d122a3683264abca48739eabf8\"" Sep 8 23:49:31.109475 containerd[1512]: time="2025-09-08T23:49:31.109417563Z" level=info msg="StartContainer for \"09fca61e3bfc4920575bf36e30001c7359b6b7d122a3683264abca48739eabf8\"" Sep 8 23:49:31.144607 systemd[1]: Started cri-containerd-09fca61e3bfc4920575bf36e30001c7359b6b7d122a3683264abca48739eabf8.scope - libcontainer container 09fca61e3bfc4920575bf36e30001c7359b6b7d122a3683264abca48739eabf8. Sep 8 23:49:31.198513 systemd[1]: Started sshd@9-10.0.0.19:22-10.0.0.1:34816.service - OpenSSH per-connection server daemon (10.0.0.1:34816). Sep 8 23:49:31.250858 containerd[1512]: time="2025-09-08T23:49:31.250784803Z" level=info msg="StartContainer for \"09fca61e3bfc4920575bf36e30001c7359b6b7d122a3683264abca48739eabf8\" returns successfully" Sep 8 23:49:31.309493 sshd[3399]: Accepted publickey for core from 10.0.0.1 port 34816 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:49:31.310772 sshd-session[3399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:49:31.320683 systemd-logind[1487]: New session 10 of user core. Sep 8 23:49:31.328543 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 8 23:49:31.437903 kubelet[2711]: I0908 23:49:31.437756 2711 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 8 23:49:31.934372 kubelet[2711]: E0908 23:49:31.934337 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:32.182711 sshd[3444]: Connection closed by 10.0.0.1 port 34816 Sep 8 23:49:32.183098 sshd-session[3399]: pam_unix(sshd:session): session closed for user core Sep 8 23:49:32.188179 systemd[1]: sshd@9-10.0.0.19:22-10.0.0.1:34816.service: Deactivated successfully. Sep 8 23:49:32.190591 systemd[1]: session-10.scope: Deactivated successfully. 
Sep 8 23:49:32.191516 systemd-logind[1487]: Session 10 logged out. Waiting for processes to exit. Sep 8 23:49:32.192602 systemd-logind[1487]: Removed session 10. Sep 8 23:49:32.232603 systemd[1]: Created slice kubepods-burstable-pod7f41949a_1540_4b72_972b_0b7f5957cac3.slice - libcontainer container kubepods-burstable-pod7f41949a_1540_4b72_972b_0b7f5957cac3.slice. Sep 8 23:49:32.274253 kubelet[2711]: I0908 23:49:32.274141 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n2lrw" podStartSLOduration=13.386276161 podStartE2EDuration="34.274116172s" podCreationTimestamp="2025-09-08 23:48:58 +0000 UTC" firstStartedPulling="2025-09-08 23:49:01.311763167 +0000 UTC m=+6.655053332" lastFinishedPulling="2025-09-08 23:49:22.199603168 +0000 UTC m=+27.542893343" observedRunningTime="2025-09-08 23:49:32.273484782 +0000 UTC m=+37.616774957" watchObservedRunningTime="2025-09-08 23:49:32.274116172 +0000 UTC m=+37.617406357" Sep 8 23:49:32.313443 kubelet[2711]: I0908 23:49:32.313382 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f41949a-1540-4b72-972b-0b7f5957cac3-config-volume\") pod \"coredns-674b8bbfcf-cvznj\" (UID: \"7f41949a-1540-4b72-972b-0b7f5957cac3\") " pod="kube-system/coredns-674b8bbfcf-cvznj" Sep 8 23:49:32.313443 kubelet[2711]: I0908 23:49:32.313435 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zddxm\" (UniqueName: \"kubernetes.io/projected/7f41949a-1540-4b72-972b-0b7f5957cac3-kube-api-access-zddxm\") pod \"coredns-674b8bbfcf-cvznj\" (UID: \"7f41949a-1540-4b72-972b-0b7f5957cac3\") " pod="kube-system/coredns-674b8bbfcf-cvznj" Sep 8 23:49:32.405596 systemd[1]: Created slice kubepods-burstable-pod8da4532b_9e19_4697_93b7_9e10ad31abec.slice - libcontainer container kubepods-burstable-pod8da4532b_9e19_4697_93b7_9e10ad31abec.slice. 
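For cilium-n2lrw above, podStartSLOduration (13.386s) is podStartE2EDuration (34.274s) minus the image pull window, since the startup SLO metric excludes time spent pulling images. The pull window follows from the two pulling timestamps in the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	started, _ := time.Parse(layout, "2025-09-08 23:49:01.311763167 +0000 UTC")
	finished, _ := time.Parse(layout, "2025-09-08 23:49:22.199603168 +0000 UTC")
	pull := finished.Sub(started)
	e2e := 34274116172 * time.Nanosecond // podStartE2EDuration from the log
	fmt.Println("pull:", pull)           // 20.887840001s
	fmt.Println("e2e - pull:", e2e-pull) // ~13.386s, the reported SLO duration
}
```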
Sep 8 23:49:32.515129 kubelet[2711]: I0908 23:49:32.514974 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdvqh\" (UniqueName: \"kubernetes.io/projected/8da4532b-9e19-4697-93b7-9e10ad31abec-kube-api-access-xdvqh\") pod \"coredns-674b8bbfcf-xvqgc\" (UID: \"8da4532b-9e19-4697-93b7-9e10ad31abec\") " pod="kube-system/coredns-674b8bbfcf-xvqgc" Sep 8 23:49:32.515129 kubelet[2711]: I0908 23:49:32.515022 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8da4532b-9e19-4697-93b7-9e10ad31abec-config-volume\") pod \"coredns-674b8bbfcf-xvqgc\" (UID: \"8da4532b-9e19-4697-93b7-9e10ad31abec\") " pod="kube-system/coredns-674b8bbfcf-xvqgc" Sep 8 23:49:32.536205 kubelet[2711]: E0908 23:49:32.536153 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:32.551761 containerd[1512]: time="2025-09-08T23:49:32.551694438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cvznj,Uid:7f41949a-1540-4b72-972b-0b7f5957cac3,Namespace:kube-system,Attempt:0,}" Sep 8 23:49:32.710587 kubelet[2711]: E0908 23:49:32.710520 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:32.711192 containerd[1512]: time="2025-09-08T23:49:32.711136347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xvqgc,Uid:8da4532b-9e19-4697-93b7-9e10ad31abec,Namespace:kube-system,Attempt:0,}" Sep 8 23:49:32.882143 containerd[1512]: time="2025-09-08T23:49:32.882075995Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:49:32.890830 containerd[1512]: time="2025-09-08T23:49:32.890642948Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 8 23:49:32.898276 containerd[1512]: time="2025-09-08T23:49:32.898209185Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:49:32.906813 containerd[1512]: time="2025-09-08T23:49:32.906733157Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 10.70683986s" Sep 8 23:49:32.906813 containerd[1512]: time="2025-09-08T23:49:32.906792071Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 8 23:49:32.918469 containerd[1512]: time="2025-09-08T23:49:32.918334907Z" level=info msg="CreateContainer within sandbox \"6c4c87e41c23bd4334e355ca9591ee64344fdac2c8e925c5945b640e11f1ad52\" 
for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 8 23:49:32.936312 kubelet[2711]: E0908 23:49:32.936257 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:32.941017 containerd[1512]: time="2025-09-08T23:49:32.940965627Z" level=info msg="CreateContainer within sandbox \"6c4c87e41c23bd4334e355ca9591ee64344fdac2c8e925c5945b640e11f1ad52\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ec3e4c3c953e8db75de71365a7c30940a32c405b1d1a99ac104d2d2917612c40\"" Sep 8 23:49:32.941802 containerd[1512]: time="2025-09-08T23:49:32.941776971Z" level=info msg="StartContainer for \"ec3e4c3c953e8db75de71365a7c30940a32c405b1d1a99ac104d2d2917612c40\"" Sep 8 23:49:33.000552 systemd[1]: Started cri-containerd-ec3e4c3c953e8db75de71365a7c30940a32c405b1d1a99ac104d2d2917612c40.scope - libcontainer container ec3e4c3c953e8db75de71365a7c30940a32c405b1d1a99ac104d2d2917612c40. Sep 8 23:49:33.034020 containerd[1512]: time="2025-09-08T23:49:33.033974064Z" level=info msg="StartContainer for \"ec3e4c3c953e8db75de71365a7c30940a32c405b1d1a99ac104d2d2917612c40\" returns successfully" Sep 8 23:49:33.942229 kubelet[2711]: E0908 23:49:33.942125 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:33.942915 kubelet[2711]: E0908 23:49:33.942545 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:33.953588 kubelet[2711]: I0908 23:49:33.953042 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-vhl9l" podStartSLOduration=3.396336015 podStartE2EDuration="34.953022378s" podCreationTimestamp="2025-09-08 23:48:59 +0000 UTC" firstStartedPulling="2025-09-08 23:49:01.351037984 +0000 UTC m=+6.694328159" lastFinishedPulling="2025-09-08 23:49:32.907724347 +0000 UTC m=+38.251014522" observedRunningTime="2025-09-08 23:49:33.95254296 +0000 UTC m=+39.295833135" watchObservedRunningTime="2025-09-08 23:49:33.953022378 +0000 UTC m=+39.296312553" Sep 8 23:49:34.943965 kubelet[2711]: E0908 23:49:34.943920 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:36.910203 systemd-networkd[1418]: cilium_host: Link UP Sep 8 23:49:36.910410 systemd-networkd[1418]: cilium_net: Link UP Sep 8 23:49:36.910618 systemd-networkd[1418]: cilium_net: Gained carrier Sep 8 23:49:36.911508 systemd-networkd[1418]: cilium_host: Gained carrier Sep 8 23:49:36.911932 systemd-networkd[1418]: cilium_net: Gained IPv6LL Sep 8 23:49:37.057021 systemd-networkd[1418]: cilium_vxlan: Link UP Sep 8 23:49:37.057036 systemd-networkd[1418]: cilium_vxlan: Gained carrier Sep 8 23:49:37.208717 systemd[1]: Started sshd@10-10.0.0.19:22-10.0.0.1:34822.service - OpenSSH per-connection server daemon (10.0.0.1:34822). Sep 8 23:49:37.253418 sshd[3684]: Accepted publickey for core from 10.0.0.1 port 34822 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:49:37.254970 sshd-session[3684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:49:37.261261 systemd-logind[1487]: New session 11 of user core. 
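The recurring kubelet dns.go:153 "Nameserver limits exceeded" events reflect the glibc resolver limit of three nameserver entries in resolv.conf (MAXNS); kubelet applies only the first three, here 1.1.1.1 1.0.0.1 8.8.8.8, and warns about the rest. A minimal sketch of the same check (not kubelet's actual code), assuming a standard /etc/resolv.conf:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors glibc's MAXNS: the resolver honors only
// the first three nameserver lines in resolv.conf.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded: applied %v, omitted %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	}
}
```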
Sep 8 23:49:37.267830 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 8 23:49:37.311337 kernel: NET: Registered PF_ALG protocol family Sep 8 23:49:37.315532 systemd-networkd[1418]: cilium_host: Gained IPv6LL Sep 8 23:49:37.402267 sshd[3698]: Connection closed by 10.0.0.1 port 34822 Sep 8 23:49:37.402743 sshd-session[3684]: pam_unix(sshd:session): session closed for user core Sep 8 23:49:37.407533 systemd[1]: sshd@10-10.0.0.19:22-10.0.0.1:34822.service: Deactivated successfully. Sep 8 23:49:37.410445 systemd[1]: session-11.scope: Deactivated successfully. Sep 8 23:49:37.411397 systemd-logind[1487]: Session 11 logged out. Waiting for processes to exit. Sep 8 23:49:37.412703 systemd-logind[1487]: Removed session 11. Sep 8 23:49:38.100072 systemd-networkd[1418]: lxc_health: Link UP Sep 8 23:49:38.101535 systemd-networkd[1418]: lxc_health: Gained carrier Sep 8 23:49:38.299561 systemd-networkd[1418]: cilium_vxlan: Gained IPv6LL Sep 8 23:49:38.444402 systemd-networkd[1418]: lxcfb362a24f661: Link UP Sep 8 23:49:38.447350 kernel: eth0: renamed from tmp29229 Sep 8 23:49:38.465358 kernel: eth0: renamed from tmp6509d Sep 8 23:49:38.475033 systemd-networkd[1418]: lxc79dcfba19289: Link UP Sep 8 23:49:38.475268 systemd-networkd[1418]: lxcfb362a24f661: Gained carrier Sep 8 23:49:38.475832 systemd-networkd[1418]: lxc79dcfba19289: Gained carrier Sep 8 23:49:38.916530 kubelet[2711]: E0908 23:49:38.916469 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:38.955799 kubelet[2711]: E0908 23:49:38.955748 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:39.835550 systemd-networkd[1418]: lxcfb362a24f661: Gained IPv6LL Sep 8 23:49:39.957722 kubelet[2711]: E0908 23:49:39.957678 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:40.091537 systemd-networkd[1418]: lxc_health: Gained IPv6LL Sep 8 23:49:40.347514 systemd-networkd[1418]: lxc79dcfba19289: Gained IPv6LL Sep 8 23:49:42.200326 containerd[1512]: time="2025-09-08T23:49:42.199932820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:49:42.200326 containerd[1512]: time="2025-09-08T23:49:42.200065813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:49:42.200326 containerd[1512]: time="2025-09-08T23:49:42.200088937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:49:42.204371 containerd[1512]: time="2025-09-08T23:49:42.200239274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:49:42.229571 systemd[1]: Started cri-containerd-292295f9aaf08f817901b0a749cc6aa9e64bfee88956710a09bef124332eb6cb.scope - libcontainer container 292295f9aaf08f817901b0a749cc6aa9e64bfee88956710a09bef124332eb6cb. Sep 8 23:49:42.241360 containerd[1512]: time="2025-09-08T23:49:42.241216864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:49:42.241360 containerd[1512]: time="2025-09-08T23:49:42.241283210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:49:42.241360 containerd[1512]: time="2025-09-08T23:49:42.241309591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:49:42.241611 containerd[1512]: time="2025-09-08T23:49:42.241392969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:49:42.244208 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 8 23:49:42.273567 systemd[1]: Started cri-containerd-6509d970df12e75ccd335f3d91b4dec1e51310f9f156d320a63c3f9ee98f2001.scope - libcontainer container 6509d970df12e75ccd335f3d91b4dec1e51310f9f156d320a63c3f9ee98f2001. Sep 8 23:49:42.275507 containerd[1512]: time="2025-09-08T23:49:42.275465693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cvznj,Uid:7f41949a-1540-4b72-972b-0b7f5957cac3,Namespace:kube-system,Attempt:0,} returns sandbox id \"292295f9aaf08f817901b0a749cc6aa9e64bfee88956710a09bef124332eb6cb\"" Sep 8 23:49:42.276207 kubelet[2711]: E0908 23:49:42.276176 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:42.290553 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 8 23:49:42.314213 containerd[1512]: time="2025-09-08T23:49:42.314167923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xvqgc,Uid:8da4532b-9e19-4697-93b7-9e10ad31abec,Namespace:kube-system,Attempt:0,} returns sandbox id \"6509d970df12e75ccd335f3d91b4dec1e51310f9f156d320a63c3f9ee98f2001\"" Sep 8 23:49:42.314984 kubelet[2711]: E0908 23:49:42.314960 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:42.348317 containerd[1512]: time="2025-09-08T23:49:42.348255905Z" level=info msg="CreateContainer within sandbox \"292295f9aaf08f817901b0a749cc6aa9e64bfee88956710a09bef124332eb6cb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 8 23:49:42.415739 systemd[1]: Started sshd@11-10.0.0.19:22-10.0.0.1:49690.service - OpenSSH per-connection server daemon (10.0.0.1:49690). Sep 8 23:49:42.433493 containerd[1512]: time="2025-09-08T23:49:42.433415585Z" level=info msg="CreateContainer within sandbox \"6509d970df12e75ccd335f3d91b4dec1e51310f9f156d320a63c3f9ee98f2001\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 8 23:49:42.480471 sshd[4075]: Accepted publickey for core from 10.0.0.1 port 49690 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:49:42.482505 sshd-session[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:49:42.487134 systemd-logind[1487]: New session 12 of user core. Sep 8 23:49:42.493445 systemd[1]: Started session-12.scope - Session 12 of User core. 
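The cilium_host/cilium_net "Link UP" and "Gained carrier" events a few entries back are systemd-networkd observing a veth pair that the Cilium agent creates for the host-side datapath. A hedged sketch of the equivalent setup using the vishvananda/netlink package (an illustration of the mechanism, not Cilium's actual code; requires root):

```go
package main

import "github.com/vishvananda/netlink"

func main() {
	// Create the veth pair cilium_host <-> cilium_net; the
	// "Gained carrier" log lines follow from bringing both ends up.
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: "cilium_host"},
		PeerName:  "cilium_net",
	}
	if err := netlink.LinkAdd(veth); err != nil {
		panic(err)
	}
	for _, name := range []string{"cilium_host", "cilium_net"} {
		link, err := netlink.LinkByName(name)
		if err != nil {
			panic(err)
		}
		if err := netlink.LinkSetUp(link); err != nil {
			panic(err)
		}
	}
}
```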
Sep 8 23:49:42.788110 sshd[4077]: Connection closed by 10.0.0.1 port 49690 Sep 8 23:49:42.788465 sshd-session[4075]: pam_unix(sshd:session): session closed for user core Sep 8 23:49:42.792715 systemd[1]: sshd@11-10.0.0.19:22-10.0.0.1:49690.service: Deactivated successfully. Sep 8 23:49:42.795422 systemd[1]: session-12.scope: Deactivated successfully. Sep 8 23:49:42.796165 systemd-logind[1487]: Session 12 logged out. Waiting for processes to exit. Sep 8 23:49:42.797094 systemd-logind[1487]: Removed session 12. Sep 8 23:49:42.880103 containerd[1512]: time="2025-09-08T23:49:42.880049726Z" level=info msg="CreateContainer within sandbox \"292295f9aaf08f817901b0a749cc6aa9e64bfee88956710a09bef124332eb6cb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"27227eaf059fb897c0db8b32f220893c29b5f8368065463a98cb64c5290b1c0a\"" Sep 8 23:49:42.881598 containerd[1512]: time="2025-09-08T23:49:42.880646404Z" level=info msg="StartContainer for \"27227eaf059fb897c0db8b32f220893c29b5f8368065463a98cb64c5290b1c0a\"" Sep 8 23:49:42.882009 containerd[1512]: time="2025-09-08T23:49:42.881978955Z" level=info msg="CreateContainer within sandbox \"6509d970df12e75ccd335f3d91b4dec1e51310f9f156d320a63c3f9ee98f2001\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ce7635704aec97cb6c00285e99281262d99d85e5eadab5693688238d831ef760\"" Sep 8 23:49:42.882542 containerd[1512]: time="2025-09-08T23:49:42.882503003Z" level=info msg="StartContainer for \"ce7635704aec97cb6c00285e99281262d99d85e5eadab5693688238d831ef760\"" Sep 8 23:49:42.913553 systemd[1]: Started cri-containerd-27227eaf059fb897c0db8b32f220893c29b5f8368065463a98cb64c5290b1c0a.scope - libcontainer container 27227eaf059fb897c0db8b32f220893c29b5f8368065463a98cb64c5290b1c0a. Sep 8 23:49:42.917479 systemd[1]: Started cri-containerd-ce7635704aec97cb6c00285e99281262d99d85e5eadab5693688238d831ef760.scope - libcontainer container ce7635704aec97cb6c00285e99281262d99d85e5eadab5693688238d831ef760. 
Sep 8 23:49:42.958705 containerd[1512]: time="2025-09-08T23:49:42.958648946Z" level=info msg="StartContainer for \"ce7635704aec97cb6c00285e99281262d99d85e5eadab5693688238d831ef760\" returns successfully" Sep 8 23:49:42.958871 containerd[1512]: time="2025-09-08T23:49:42.958649006Z" level=info msg="StartContainer for \"27227eaf059fb897c0db8b32f220893c29b5f8368065463a98cb64c5290b1c0a\" returns successfully" Sep 8 23:49:42.964150 kubelet[2711]: E0908 23:49:42.964105 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:42.966444 kubelet[2711]: E0908 23:49:42.966411 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:42.976750 kubelet[2711]: I0908 23:49:42.976394 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xvqgc" podStartSLOduration=42.976378654 podStartE2EDuration="42.976378654s" podCreationTimestamp="2025-09-08 23:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:49:42.976152102 +0000 UTC m=+48.319442297" watchObservedRunningTime="2025-09-08 23:49:42.976378654 +0000 UTC m=+48.319668829" Sep 8 23:49:42.994003 kubelet[2711]: I0908 23:49:42.993876 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cvznj" podStartSLOduration=43.993854048 podStartE2EDuration="43.993854048s" podCreationTimestamp="2025-09-08 23:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:49:42.993660348 +0000 UTC m=+48.336950523" watchObservedRunningTime="2025-09-08 23:49:42.993854048 +0000 UTC m=+48.337144223" Sep 8 23:49:43.212526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3613949828.mount: Deactivated successfully. Sep 8 23:49:43.968999 kubelet[2711]: E0908 23:49:43.968642 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:43.969836 kubelet[2711]: E0908 23:49:43.969285 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:44.970703 kubelet[2711]: E0908 23:49:44.970646 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:44.971276 kubelet[2711]: E0908 23:49:44.970720 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:47.802846 systemd[1]: Started sshd@12-10.0.0.19:22-10.0.0.1:49702.service - OpenSSH per-connection server daemon (10.0.0.1:49702). 
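The entries above trace containerd's CRI flow for the two coredns pods: RunPodSandbox returns a sandbox ID, CreateContainer within that sandbox returns a container ID, and StartContainer runs it. A minimal client-side sketch of the same sequence against the CRI gRPC API (k8s.io/cri-api), assuming containerd's default socket path; the image reference is a placeholder:

```go
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtime.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtime.PodSandboxConfig{
		Metadata: &runtime.PodSandboxMetadata{
			Name:      "coredns-674b8bbfcf-cvznj",
			Namespace: "kube-system",
			Uid:       "7f41949a-1540-4b72-972b-0b7f5957cac3",
		},
	}

	// 1. RunPodSandbox -> sandbox ID (292295f9... in the log)
	sb, err := rt.RunPodSandbox(ctx, &runtime.RunPodSandboxRequest{
		Config: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}

	// 2. CreateContainer within that sandbox -> container ID
	c, err := rt.CreateContainer(ctx, &runtime.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtime.ContainerConfig{
			Metadata: &runtime.ContainerMetadata{Name: "coredns"},
			Image:    &runtime.ImageSpec{Image: "coredns-image-placeholder"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}

	// 3. StartContainer -> "returns successfully" in the log
	if _, err := rt.StartContainer(ctx, &runtime.StartContainerRequest{
		ContainerId: c.ContainerId,
	}); err != nil {
		panic(err)
	}
}
```

The "0001-01-01 00:00:00 +0000 UTC" pull timestamps in the startup-latency entries above are Go's zero time.Time, logged when no image pull was needed for the pod.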
Sep 8 23:49:47.854537 sshd[4198]: Accepted publickey for core from 10.0.0.1 port 49702 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:49:47.856345 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:49:47.861771 systemd-logind[1487]: New session 13 of user core. Sep 8 23:49:47.872592 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 8 23:49:48.099328 sshd[4200]: Connection closed by 10.0.0.1 port 49702 Sep 8 23:49:48.099710 sshd-session[4198]: pam_unix(sshd:session): session closed for user core Sep 8 23:49:48.105544 systemd[1]: sshd@12-10.0.0.19:22-10.0.0.1:49702.service: Deactivated successfully. Sep 8 23:49:48.108812 systemd[1]: session-13.scope: Deactivated successfully. Sep 8 23:49:48.109777 systemd-logind[1487]: Session 13 logged out. Waiting for processes to exit. Sep 8 23:49:48.111026 systemd-logind[1487]: Removed session 13. Sep 8 23:49:53.132804 systemd[1]: Started sshd@13-10.0.0.19:22-10.0.0.1:50324.service - OpenSSH per-connection server daemon (10.0.0.1:50324). Sep 8 23:49:53.177426 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 50324 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:49:53.179419 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:49:53.184621 systemd-logind[1487]: New session 14 of user core. Sep 8 23:49:53.197490 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 8 23:49:53.316736 sshd[4216]: Connection closed by 10.0.0.1 port 50324 Sep 8 23:49:53.317189 sshd-session[4214]: pam_unix(sshd:session): session closed for user core Sep 8 23:49:53.336175 systemd[1]: sshd@13-10.0.0.19:22-10.0.0.1:50324.service: Deactivated successfully. Sep 8 23:49:53.339580 systemd[1]: session-14.scope: Deactivated successfully. Sep 8 23:49:53.341905 systemd-logind[1487]: Session 14 logged out. Waiting for processes to exit. Sep 8 23:49:53.352629 systemd[1]: Started sshd@14-10.0.0.19:22-10.0.0.1:50336.service - OpenSSH per-connection server daemon (10.0.0.1:50336). Sep 8 23:49:53.353721 systemd-logind[1487]: Removed session 14. Sep 8 23:49:53.393555 sshd[4229]: Accepted publickey for core from 10.0.0.1 port 50336 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:49:53.395095 sshd-session[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:49:53.401270 systemd-logind[1487]: New session 15 of user core. Sep 8 23:49:53.415547 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 8 23:49:53.752557 sshd[4232]: Connection closed by 10.0.0.1 port 50336 Sep 8 23:49:53.753126 sshd-session[4229]: pam_unix(sshd:session): session closed for user core Sep 8 23:49:53.762837 systemd[1]: sshd@14-10.0.0.19:22-10.0.0.1:50336.service: Deactivated successfully. Sep 8 23:49:53.765716 systemd[1]: session-15.scope: Deactivated successfully. Sep 8 23:49:53.768255 systemd-logind[1487]: Session 15 logged out. Waiting for processes to exit. Sep 8 23:49:53.779748 systemd[1]: Started sshd@15-10.0.0.19:22-10.0.0.1:50350.service - OpenSSH per-connection server daemon (10.0.0.1:50350). Sep 8 23:49:53.781792 systemd-logind[1487]: Removed session 15. 
Sep 8 23:49:53.822019 sshd[4242]: Accepted publickey for core from 10.0.0.1 port 50350 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:49:53.823817 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:49:53.829076 systemd-logind[1487]: New session 16 of user core. Sep 8 23:49:53.839446 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 8 23:49:54.044431 sshd[4245]: Connection closed by 10.0.0.1 port 50350 Sep 8 23:49:54.044869 sshd-session[4242]: pam_unix(sshd:session): session closed for user core Sep 8 23:49:54.048906 systemd[1]: sshd@15-10.0.0.19:22-10.0.0.1:50350.service: Deactivated successfully. Sep 8 23:49:54.051272 systemd[1]: session-16.scope: Deactivated successfully. Sep 8 23:49:54.051967 systemd-logind[1487]: Session 16 logged out. Waiting for processes to exit. Sep 8 23:49:54.053038 systemd-logind[1487]: Removed session 16. Sep 8 23:49:59.060148 systemd[1]: Started sshd@16-10.0.0.19:22-10.0.0.1:50352.service - OpenSSH per-connection server daemon (10.0.0.1:50352). Sep 8 23:49:59.105513 sshd[4261]: Accepted publickey for core from 10.0.0.1 port 50352 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:49:59.107551 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:49:59.112525 systemd-logind[1487]: New session 17 of user core. Sep 8 23:49:59.120531 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 8 23:49:59.247852 sshd[4263]: Connection closed by 10.0.0.1 port 50352 Sep 8 23:49:59.248321 sshd-session[4261]: pam_unix(sshd:session): session closed for user core Sep 8 23:49:59.253540 systemd[1]: sshd@16-10.0.0.19:22-10.0.0.1:50352.service: Deactivated successfully. Sep 8 23:49:59.257090 systemd[1]: session-17.scope: Deactivated successfully. Sep 8 23:49:59.258018 systemd-logind[1487]: Session 17 logged out. Waiting for processes to exit. Sep 8 23:49:59.259357 systemd-logind[1487]: Removed session 17. Sep 8 23:50:04.268586 systemd[1]: Started sshd@17-10.0.0.19:22-10.0.0.1:44318.service - OpenSSH per-connection server daemon (10.0.0.1:44318). Sep 8 23:50:04.307465 sshd[4278]: Accepted publickey for core from 10.0.0.1 port 44318 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:50:04.309494 sshd-session[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:04.314619 systemd-logind[1487]: New session 18 of user core. Sep 8 23:50:04.324444 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 8 23:50:04.499231 sshd[4280]: Connection closed by 10.0.0.1 port 44318 Sep 8 23:50:04.499676 sshd-session[4278]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:04.504253 systemd[1]: sshd@17-10.0.0.19:22-10.0.0.1:44318.service: Deactivated successfully. Sep 8 23:50:04.506624 systemd[1]: session-18.scope: Deactivated successfully. Sep 8 23:50:04.507592 systemd-logind[1487]: Session 18 logged out. Waiting for processes to exit. Sep 8 23:50:04.508744 systemd-logind[1487]: Removed session 18. Sep 8 23:50:07.775212 kubelet[2711]: E0908 23:50:07.775121 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:09.513817 systemd[1]: Started sshd@18-10.0.0.19:22-10.0.0.1:44334.service - OpenSSH per-connection server daemon (10.0.0.1:44334). 
Sep 8 23:50:09.555650 sshd[4293]: Accepted publickey for core from 10.0.0.1 port 44334 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:50:09.557496 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:09.562443 systemd-logind[1487]: New session 19 of user core. Sep 8 23:50:09.568587 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 8 23:50:09.739603 sshd[4295]: Connection closed by 10.0.0.1 port 44334 Sep 8 23:50:09.739997 sshd-session[4293]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:09.744198 systemd[1]: sshd@18-10.0.0.19:22-10.0.0.1:44334.service: Deactivated successfully. Sep 8 23:50:09.746556 systemd[1]: session-19.scope: Deactivated successfully. Sep 8 23:50:09.747328 systemd-logind[1487]: Session 19 logged out. Waiting for processes to exit. Sep 8 23:50:09.748443 systemd-logind[1487]: Removed session 19. Sep 8 23:50:14.763235 systemd[1]: Started sshd@19-10.0.0.19:22-10.0.0.1:40034.service - OpenSSH per-connection server daemon (10.0.0.1:40034). Sep 8 23:50:14.775157 kubelet[2711]: E0908 23:50:14.775023 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:14.808912 sshd[4309]: Accepted publickey for core from 10.0.0.1 port 40034 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:50:14.810916 sshd-session[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:14.816325 systemd-logind[1487]: New session 20 of user core. Sep 8 23:50:14.832637 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 8 23:50:14.959244 sshd[4311]: Connection closed by 10.0.0.1 port 40034 Sep 8 23:50:14.959833 sshd-session[4309]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:14.969951 systemd[1]: sshd@19-10.0.0.19:22-10.0.0.1:40034.service: Deactivated successfully. Sep 8 23:50:14.972335 systemd[1]: session-20.scope: Deactivated successfully. Sep 8 23:50:14.973987 systemd-logind[1487]: Session 20 logged out. Waiting for processes to exit. Sep 8 23:50:14.986929 systemd[1]: Started sshd@20-10.0.0.19:22-10.0.0.1:40048.service - OpenSSH per-connection server daemon (10.0.0.1:40048). Sep 8 23:50:14.988741 systemd-logind[1487]: Removed session 20. Sep 8 23:50:15.027076 sshd[4323]: Accepted publickey for core from 10.0.0.1 port 40048 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:50:15.028777 sshd-session[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:15.034865 systemd-logind[1487]: New session 21 of user core. Sep 8 23:50:15.051575 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 8 23:50:15.975345 sshd[4327]: Connection closed by 10.0.0.1 port 40048 Sep 8 23:50:15.975950 sshd-session[4323]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:15.990162 systemd[1]: sshd@20-10.0.0.19:22-10.0.0.1:40048.service: Deactivated successfully. Sep 8 23:50:15.993387 systemd[1]: session-21.scope: Deactivated successfully. Sep 8 23:50:15.995823 systemd-logind[1487]: Session 21 logged out. Waiting for processes to exit. Sep 8 23:50:16.006738 systemd[1]: Started sshd@21-10.0.0.19:22-10.0.0.1:40054.service - OpenSSH per-connection server daemon (10.0.0.1:40054). Sep 8 23:50:16.008131 systemd-logind[1487]: Removed session 21. 
Sep 8 23:50:16.052556 sshd[4338]: Accepted publickey for core from 10.0.0.1 port 40054 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:50:16.054869 sshd-session[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:16.060135 systemd-logind[1487]: New session 22 of user core. Sep 8 23:50:16.070450 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 8 23:50:17.772517 sshd[4341]: Connection closed by 10.0.0.1 port 40054 Sep 8 23:50:17.773060 sshd-session[4338]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:17.789256 systemd[1]: sshd@21-10.0.0.19:22-10.0.0.1:40054.service: Deactivated successfully. Sep 8 23:50:17.792520 systemd[1]: session-22.scope: Deactivated successfully. Sep 8 23:50:17.796604 systemd-logind[1487]: Session 22 logged out. Waiting for processes to exit. Sep 8 23:50:17.803914 systemd[1]: Started sshd@22-10.0.0.19:22-10.0.0.1:40068.service - OpenSSH per-connection server daemon (10.0.0.1:40068). Sep 8 23:50:17.805418 systemd-logind[1487]: Removed session 22. Sep 8 23:50:17.842223 sshd[4362]: Accepted publickey for core from 10.0.0.1 port 40068 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:50:17.844194 sshd-session[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:17.849922 systemd-logind[1487]: New session 23 of user core. Sep 8 23:50:17.859465 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 8 23:50:18.610194 sshd[4365]: Connection closed by 10.0.0.1 port 40068 Sep 8 23:50:18.610831 sshd-session[4362]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:18.625599 systemd[1]: sshd@22-10.0.0.19:22-10.0.0.1:40068.service: Deactivated successfully. Sep 8 23:50:18.627966 systemd[1]: session-23.scope: Deactivated successfully. Sep 8 23:50:18.630159 systemd-logind[1487]: Session 23 logged out. Waiting for processes to exit. Sep 8 23:50:18.640652 systemd[1]: Started sshd@23-10.0.0.19:22-10.0.0.1:40080.service - OpenSSH per-connection server daemon (10.0.0.1:40080). Sep 8 23:50:18.642466 systemd-logind[1487]: Removed session 23. Sep 8 23:50:18.683564 sshd[4375]: Accepted publickey for core from 10.0.0.1 port 40080 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:50:18.685243 sshd-session[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:18.690162 systemd-logind[1487]: New session 24 of user core. Sep 8 23:50:18.698452 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 8 23:50:18.841798 sshd[4378]: Connection closed by 10.0.0.1 port 40080 Sep 8 23:50:18.842224 sshd-session[4375]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:18.846625 systemd[1]: sshd@23-10.0.0.19:22-10.0.0.1:40080.service: Deactivated successfully. Sep 8 23:50:18.849837 systemd[1]: session-24.scope: Deactivated successfully. Sep 8 23:50:18.852062 systemd-logind[1487]: Session 24 logged out. Waiting for processes to exit. Sep 8 23:50:18.853197 systemd-logind[1487]: Removed session 24. 
Sep 8 23:50:23.774632 kubelet[2711]: E0908 23:50:23.774568 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:23.774632 kubelet[2711]: E0908 23:50:23.774609 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:23.856243 systemd[1]: Started sshd@24-10.0.0.19:22-10.0.0.1:37404.service - OpenSSH per-connection server daemon (10.0.0.1:37404). Sep 8 23:50:23.900962 sshd[4392]: Accepted publickey for core from 10.0.0.1 port 37404 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:50:23.903120 sshd-session[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:23.908503 systemd-logind[1487]: New session 25 of user core. Sep 8 23:50:23.918477 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 8 23:50:24.051099 sshd[4394]: Connection closed by 10.0.0.1 port 37404 Sep 8 23:50:24.051552 sshd-session[4392]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:24.056261 systemd[1]: sshd@24-10.0.0.19:22-10.0.0.1:37404.service: Deactivated successfully. Sep 8 23:50:24.059107 systemd[1]: session-25.scope: Deactivated successfully. Sep 8 23:50:24.060148 systemd-logind[1487]: Session 25 logged out. Waiting for processes to exit. Sep 8 23:50:24.061364 systemd-logind[1487]: Removed session 25. Sep 8 23:50:29.065682 systemd[1]: Started sshd@25-10.0.0.19:22-10.0.0.1:37418.service - OpenSSH per-connection server daemon (10.0.0.1:37418). Sep 8 23:50:29.108667 sshd[4408]: Accepted publickey for core from 10.0.0.1 port 37418 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:50:29.110357 sshd-session[4408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:29.114639 systemd-logind[1487]: New session 26 of user core. Sep 8 23:50:29.125467 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 8 23:50:29.264933 sshd[4410]: Connection closed by 10.0.0.1 port 37418 Sep 8 23:50:29.265323 sshd-session[4408]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:29.269318 systemd[1]: sshd@25-10.0.0.19:22-10.0.0.1:37418.service: Deactivated successfully. Sep 8 23:50:29.271589 systemd[1]: session-26.scope: Deactivated successfully. Sep 8 23:50:29.272435 systemd-logind[1487]: Session 26 logged out. Waiting for processes to exit. Sep 8 23:50:29.273531 systemd-logind[1487]: Removed session 26. Sep 8 23:50:29.731151 update_engine[1489]: I20250908 23:50:29.731060 1489 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 8 23:50:29.731151 update_engine[1489]: I20250908 23:50:29.731127 1489 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 8 23:50:29.731756 update_engine[1489]: I20250908 23:50:29.731431 1489 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 8 23:50:29.732058 update_engine[1489]: I20250908 23:50:29.732025 1489 omaha_request_params.cc:62] Current group set to stable Sep 8 23:50:29.732584 update_engine[1489]: I20250908 23:50:29.732546 1489 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 8 23:50:29.732584 update_engine[1489]: I20250908 23:50:29.732569 1489 update_attempter.cc:643] Scheduling an action processor start. 
Sep 8 23:50:29.732676 update_engine[1489]: I20250908 23:50:29.732590 1489 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 8 23:50:29.732676 update_engine[1489]: I20250908 23:50:29.732645 1489 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 8 23:50:29.732763 update_engine[1489]: I20250908 23:50:29.732734 1489 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 8 23:50:29.732763 update_engine[1489]: I20250908 23:50:29.732748 1489 omaha_request_action.cc:272] Request: Sep 8 23:50:29.732763 update_engine[1489]: [Omaha request XML body not captured] Sep 8 23:50:29.733030 update_engine[1489]: I20250908 23:50:29.732761 1489 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 8 23:50:29.733276 locksmithd[1530]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 8 23:50:29.736314 update_engine[1489]: I20250908 23:50:29.736269 1489 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 8 23:50:29.736644 update_engine[1489]: I20250908 23:50:29.736613 1489 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 8 23:50:29.750075 update_engine[1489]: E20250908 23:50:29.749987 1489 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 8 23:50:29.750317 update_engine[1489]: I20250908 23:50:29.750099 1489 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 8 23:50:34.281540 systemd[1]: Started sshd@26-10.0.0.19:22-10.0.0.1:50610.service - OpenSSH per-connection server daemon (10.0.0.1:50610). Sep 8 23:50:34.325435 sshd[4427]: Accepted publickey for core from 10.0.0.1 port 50610 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:50:34.326941 sshd-session[4427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:34.331258 systemd-logind[1487]: New session 27 of user core. Sep 8 23:50:34.341453 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 8 23:50:34.548393 sshd[4429]: Connection closed by 10.0.0.1 port 50610 Sep 8 23:50:34.549125 sshd-session[4427]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:34.557034 systemd[1]: sshd@26-10.0.0.19:22-10.0.0.1:50610.service: Deactivated successfully. Sep 8 23:50:34.561864 systemd[1]: session-27.scope: Deactivated successfully. Sep 8 23:50:34.563890 systemd-logind[1487]: Session 27 logged out. Waiting for processes to exit. Sep 8 23:50:34.565442 systemd-logind[1487]: Removed session 27. Sep 8 23:50:39.604731 systemd[1]: Started sshd@27-10.0.0.19:22-10.0.0.1:50616.service - OpenSSH per-connection server daemon (10.0.0.1:50616). Sep 8 23:50:39.656316 sshd[4444]: Accepted publickey for core from 10.0.0.1 port 50616 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:50:39.659566 sshd-session[4444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:39.666806 systemd-logind[1487]: New session 28 of user core. Sep 8 23:50:39.681498 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 8 23:50:39.731000 update_engine[1489]: I20250908 23:50:39.730903 1489 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 8 23:50:39.731525 update_engine[1489]: I20250908 23:50:39.731315 1489 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 8 23:50:39.731651 update_engine[1489]: I20250908 23:50:39.731602 1489 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 8 23:50:39.744471 update_engine[1489]: E20250908 23:50:39.744393 1489 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 8 23:50:39.744471 update_engine[1489]: I20250908 23:50:39.744481 1489 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 8 23:50:39.774944 kubelet[2711]: E0908 23:50:39.774277 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:39.811543 sshd[4448]: Connection closed by 10.0.0.1 port 50616 Sep 8 23:50:39.811945 sshd-session[4444]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:39.816884 systemd[1]: sshd@27-10.0.0.19:22-10.0.0.1:50616.service: Deactivated successfully. Sep 8 23:50:39.819560 systemd[1]: session-28.scope: Deactivated successfully. Sep 8 23:50:39.821167 systemd-logind[1487]: Session 28 logged out. Waiting for processes to exit. Sep 8 23:50:39.822219 systemd-logind[1487]: Removed session 28. Sep 8 23:50:44.836905 systemd[1]: Started sshd@28-10.0.0.19:22-10.0.0.1:42400.service - OpenSSH per-connection server daemon (10.0.0.1:42400). Sep 8 23:50:44.887395 sshd[4462]: Accepted publickey for core from 10.0.0.1 port 42400 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:50:44.889957 sshd-session[4462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:44.897459 systemd-logind[1487]: New session 29 of user core. Sep 8 23:50:44.913501 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 8 23:50:45.139820 sshd[4464]: Connection closed by 10.0.0.1 port 42400 Sep 8 23:50:45.140312 sshd-session[4462]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:45.157352 systemd[1]: sshd@28-10.0.0.19:22-10.0.0.1:42400.service: Deactivated successfully. Sep 8 23:50:45.161140 systemd[1]: session-29.scope: Deactivated successfully. Sep 8 23:50:45.165010 systemd-logind[1487]: Session 29 logged out. Waiting for processes to exit. Sep 8 23:50:45.172739 systemd[1]: Started sshd@29-10.0.0.19:22-10.0.0.1:42410.service - OpenSSH per-connection server daemon (10.0.0.1:42410). Sep 8 23:50:45.174699 systemd-logind[1487]: Removed session 29. Sep 8 23:50:45.218103 sshd[4476]: Accepted publickey for core from 10.0.0.1 port 42410 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:50:45.220072 sshd-session[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:45.225605 systemd-logind[1487]: New session 30 of user core. Sep 8 23:50:45.237549 systemd[1]: Started session-30.scope - Session 30 of User core. 
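The "Could not resolve host: disabled" errors above occur because the Omaha update server on this image is configured as the literal string "disabled", so the libcurl fetcher can never resolve it and keeps retrying on a schedule (retry 1 at 23:50:29, retry 2 at 23:50:39, roughly 10 s apart). A hedged Go sketch of that fixed-interval retry pattern (an illustration, not the actual C++ update_engine code):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	const (
		url        = "http://disabled" // the configured host is literally "disabled"
		maxRetries = 3
		interval   = 10 * time.Second // observed spacing between retries in the log
	)
	for attempt := 1; attempt <= maxRetries; attempt++ {
		resp, err := http.Post(url, "text/xml", nil)
		if err == nil {
			resp.Body.Close()
			fmt.Println("Omaha request delivered")
			return
		}
		// Mirrors "No HTTP response, retry N" in the update_engine log.
		fmt.Printf("No HTTP response, retry %d: %v\n", attempt, err)
		time.Sleep(interval)
	}
}
```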
Sep 8 23:50:45.775306 kubelet[2711]: E0908 23:50:45.775190 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:46.757446 containerd[1512]: time="2025-09-08T23:50:46.757367410Z" level=info msg="StopContainer for \"ec3e4c3c953e8db75de71365a7c30940a32c405b1d1a99ac104d2d2917612c40\" with timeout 30 (s)" Sep 8 23:50:46.766469 containerd[1512]: time="2025-09-08T23:50:46.766333357Z" level=info msg="Stop container \"ec3e4c3c953e8db75de71365a7c30940a32c405b1d1a99ac104d2d2917612c40\" with signal terminated" Sep 8 23:50:46.796815 systemd[1]: cri-containerd-ec3e4c3c953e8db75de71365a7c30940a32c405b1d1a99ac104d2d2917612c40.scope: Deactivated successfully. Sep 8 23:50:46.819466 containerd[1512]: time="2025-09-08T23:50:46.819378245Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 8 23:50:46.830205 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec3e4c3c953e8db75de71365a7c30940a32c405b1d1a99ac104d2d2917612c40-rootfs.mount: Deactivated successfully. Sep 8 23:50:46.836883 containerd[1512]: time="2025-09-08T23:50:46.836829994Z" level=info msg="StopContainer for \"09fca61e3bfc4920575bf36e30001c7359b6b7d122a3683264abca48739eabf8\" with timeout 2 (s)" Sep 8 23:50:46.837156 containerd[1512]: time="2025-09-08T23:50:46.837123147Z" level=info msg="Stop container \"09fca61e3bfc4920575bf36e30001c7359b6b7d122a3683264abca48739eabf8\" with signal terminated" Sep 8 23:50:46.839939 containerd[1512]: time="2025-09-08T23:50:46.839863343Z" level=info msg="shim disconnected" id=ec3e4c3c953e8db75de71365a7c30940a32c405b1d1a99ac104d2d2917612c40 namespace=k8s.io Sep 8 23:50:46.839994 containerd[1512]: time="2025-09-08T23:50:46.839941180Z" level=warning msg="cleaning up after shim disconnected" id=ec3e4c3c953e8db75de71365a7c30940a32c405b1d1a99ac104d2d2917612c40 namespace=k8s.io Sep 8 23:50:46.839994 containerd[1512]: time="2025-09-08T23:50:46.839954265Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:50:46.848246 systemd-networkd[1418]: lxc_health: Link DOWN Sep 8 23:50:46.848260 systemd-networkd[1418]: lxc_health: Lost carrier Sep 8 23:50:46.868084 containerd[1512]: time="2025-09-08T23:50:46.867983059Z" level=info msg="StopContainer for \"ec3e4c3c953e8db75de71365a7c30940a32c405b1d1a99ac104d2d2917612c40\" returns successfully" Sep 8 23:50:46.872100 containerd[1512]: time="2025-09-08T23:50:46.872074295Z" level=info msg="StopPodSandbox for \"6c4c87e41c23bd4334e355ca9591ee64344fdac2c8e925c5945b640e11f1ad52\"" Sep 8 23:50:46.874699 systemd[1]: cri-containerd-09fca61e3bfc4920575bf36e30001c7359b6b7d122a3683264abca48739eabf8.scope: Deactivated successfully. Sep 8 23:50:46.875687 systemd[1]: cri-containerd-09fca61e3bfc4920575bf36e30001c7359b6b7d122a3683264abca48739eabf8.scope: Consumed 7.535s CPU time, 124.7M memory peak, 220K read from disk, 13.3M written to disk. 
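The "StopContainer ... with timeout 30 (s)" and "with timeout 2 (s)" entries above show the CRI grace period in action: the runtime delivers the stop signal (SIGTERM by default), waits up to the timeout for the process to exit, then escalates to SIGKILL; the cri-containerd scope "Deactivated successfully" lines are systemd reaping the container's cgroup afterwards. A minimal sketch of the same stop-then-kill pattern for a raw process (an illustration of the contract, not containerd's code; the PID is hypothetical):

```go
package main

import (
	"syscall"
	"time"
)

// stopWithTimeout mirrors the CRI StopContainer contract:
// SIGTERM first, then SIGKILL once the grace period expires.
func stopWithTimeout(pid int, timeout time.Duration) error {
	if err := syscall.Kill(pid, syscall.SIGTERM); err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Signal 0 probes whether the process still exists.
		if err := syscall.Kill(pid, 0); err != nil {
			return nil // process already exited
		}
		time.Sleep(100 * time.Millisecond)
	}
	return syscall.Kill(pid, syscall.SIGKILL)
}

func main() {
	// Hypothetical PID; in containerd this is the shim-managed init process.
	if err := stopWithTimeout(12345, 30*time.Second); err != nil {
		panic(err)
	}
}
```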
Sep 8 23:50:46.893410 containerd[1512]: time="2025-09-08T23:50:46.872112597Z" level=info msg="Container to stop \"ec3e4c3c953e8db75de71365a7c30940a32c405b1d1a99ac104d2d2917612c40\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:50:46.896530 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c4c87e41c23bd4334e355ca9591ee64344fdac2c8e925c5945b640e11f1ad52-shm.mount: Deactivated successfully. Sep 8 23:50:46.903135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09fca61e3bfc4920575bf36e30001c7359b6b7d122a3683264abca48739eabf8-rootfs.mount: Deactivated successfully. Sep 8 23:50:46.907689 systemd[1]: cri-containerd-6c4c87e41c23bd4334e355ca9591ee64344fdac2c8e925c5945b640e11f1ad52.scope: Deactivated successfully. Sep 8 23:50:46.931935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c4c87e41c23bd4334e355ca9591ee64344fdac2c8e925c5945b640e11f1ad52-rootfs.mount: Deactivated successfully. Sep 8 23:50:46.936314 containerd[1512]: time="2025-09-08T23:50:46.936225215Z" level=info msg="shim disconnected" id=09fca61e3bfc4920575bf36e30001c7359b6b7d122a3683264abca48739eabf8 namespace=k8s.io Sep 8 23:50:46.936447 containerd[1512]: time="2025-09-08T23:50:46.936319183Z" level=warning msg="cleaning up after shim disconnected" id=09fca61e3bfc4920575bf36e30001c7359b6b7d122a3683264abca48739eabf8 namespace=k8s.io Sep 8 23:50:46.936447 containerd[1512]: time="2025-09-08T23:50:46.936333970Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:50:46.938473 containerd[1512]: time="2025-09-08T23:50:46.938258680Z" level=info msg="shim disconnected" id=6c4c87e41c23bd4334e355ca9591ee64344fdac2c8e925c5945b640e11f1ad52 namespace=k8s.io Sep 8 23:50:46.938473 containerd[1512]: time="2025-09-08T23:50:46.938318493Z" level=warning msg="cleaning up after shim disconnected" id=6c4c87e41c23bd4334e355ca9591ee64344fdac2c8e925c5945b640e11f1ad52 namespace=k8s.io Sep 8 23:50:46.938473 containerd[1512]: time="2025-09-08T23:50:46.938331798Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:50:46.956650 containerd[1512]: time="2025-09-08T23:50:46.956606299Z" level=info msg="TearDown network for sandbox \"6c4c87e41c23bd4334e355ca9591ee64344fdac2c8e925c5945b640e11f1ad52\" successfully" Sep 8 23:50:46.956650 containerd[1512]: time="2025-09-08T23:50:46.956643218Z" level=info msg="StopPodSandbox for \"6c4c87e41c23bd4334e355ca9591ee64344fdac2c8e925c5945b640e11f1ad52\" returns successfully" Sep 8 23:50:46.957511 containerd[1512]: time="2025-09-08T23:50:46.957478363Z" level=info msg="StopContainer for \"09fca61e3bfc4920575bf36e30001c7359b6b7d122a3683264abca48739eabf8\" returns successfully" Sep 8 23:50:46.957880 containerd[1512]: time="2025-09-08T23:50:46.957849794Z" level=info msg="StopPodSandbox for \"7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce\"" Sep 8 23:50:46.957958 containerd[1512]: time="2025-09-08T23:50:46.957876504Z" level=info msg="Container to stop \"2dfafce4ea117bc17dd84641228d963f94dd8eadc66f81c2027a2ee80ea380d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:50:46.957958 containerd[1512]: time="2025-09-08T23:50:46.957909325Z" level=info msg="Container to stop \"87f75b01f242db99e9bb5102acf9ff9229f57126911876fd9d96618af6e5c7b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:50:46.957958 containerd[1512]: time="2025-09-08T23:50:46.957917462Z" level=info msg="Container to stop 
\"2f6a63ad48119e057cdfa8cde4350a0b52f1e8a66532716e3429b0ce7b8c3e02\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:50:46.957958 containerd[1512]: time="2025-09-08T23:50:46.957926318Z" level=info msg="Container to stop \"e406fa8e6d5e7de795f1f32815ae837e92e179039384d39e3c17c3dc4e8f3cee\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:50:46.957958 containerd[1512]: time="2025-09-08T23:50:46.957935315Z" level=info msg="Container to stop \"09fca61e3bfc4920575bf36e30001c7359b6b7d122a3683264abca48739eabf8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:50:46.966587 systemd[1]: cri-containerd-7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce.scope: Deactivated successfully. Sep 8 23:50:47.006288 containerd[1512]: time="2025-09-08T23:50:47.006183988Z" level=info msg="shim disconnected" id=7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce namespace=k8s.io Sep 8 23:50:47.006288 containerd[1512]: time="2025-09-08T23:50:47.006260041Z" level=warning msg="cleaning up after shim disconnected" id=7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce namespace=k8s.io Sep 8 23:50:47.006288 containerd[1512]: time="2025-09-08T23:50:47.006269690Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:50:47.021457 containerd[1512]: time="2025-09-08T23:50:47.021322073Z" level=info msg="TearDown network for sandbox \"7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce\" successfully" Sep 8 23:50:47.021457 containerd[1512]: time="2025-09-08T23:50:47.021390442Z" level=info msg="StopPodSandbox for \"7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce\" returns successfully" Sep 8 23:50:47.025816 kubelet[2711]: I0908 23:50:47.025782 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b473829b-4236-4ab3-99ad-eaadd6471914-cilium-config-path\") pod \"b473829b-4236-4ab3-99ad-eaadd6471914\" (UID: \"b473829b-4236-4ab3-99ad-eaadd6471914\") " Sep 8 23:50:47.026372 kubelet[2711]: I0908 23:50:47.025847 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxxz5\" (UniqueName: \"kubernetes.io/projected/b473829b-4236-4ab3-99ad-eaadd6471914-kube-api-access-cxxz5\") pod \"b473829b-4236-4ab3-99ad-eaadd6471914\" (UID: \"b473829b-4236-4ab3-99ad-eaadd6471914\") " Sep 8 23:50:47.026996 kubelet[2711]: I0908 23:50:47.026837 2711 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b473829b-4236-4ab3-99ad-eaadd6471914-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b473829b-4236-4ab3-99ad-eaadd6471914" (UID: "b473829b-4236-4ab3-99ad-eaadd6471914"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 8 23:50:47.031677 kubelet[2711]: I0908 23:50:47.031611 2711 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b473829b-4236-4ab3-99ad-eaadd6471914-kube-api-access-cxxz5" (OuterVolumeSpecName: "kube-api-access-cxxz5") pod "b473829b-4236-4ab3-99ad-eaadd6471914" (UID: "b473829b-4236-4ab3-99ad-eaadd6471914"). InnerVolumeSpecName "kube-api-access-cxxz5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 8 23:50:47.117400 kubelet[2711]: I0908 23:50:47.117360 2711 scope.go:117] "RemoveContainer" containerID="ec3e4c3c953e8db75de71365a7c30940a32c405b1d1a99ac104d2d2917612c40" Sep 8 23:50:47.125218 systemd[1]: Removed slice kubepods-besteffort-podb473829b_4236_4ab3_99ad_eaadd6471914.slice - libcontainer container kubepods-besteffort-podb473829b_4236_4ab3_99ad_eaadd6471914.slice. Sep 8 23:50:47.125500 containerd[1512]: time="2025-09-08T23:50:47.125230806Z" level=info msg="RemoveContainer for \"ec3e4c3c953e8db75de71365a7c30940a32c405b1d1a99ac104d2d2917612c40\"" Sep 8 23:50:47.126194 kubelet[2711]: I0908 23:50:47.126153 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-etc-cni-netd\") pod \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " Sep 8 23:50:47.126287 kubelet[2711]: I0908 23:50:47.126199 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79bjv\" (UniqueName: \"kubernetes.io/projected/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-kube-api-access-79bjv\") pod \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " Sep 8 23:50:47.126287 kubelet[2711]: I0908 23:50:47.126233 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-cni-path\") pod \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " Sep 8 23:50:47.126287 kubelet[2711]: I0908 23:50:47.126262 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-cilium-config-path\") pod \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " Sep 8 23:50:47.126287 kubelet[2711]: I0908 23:50:47.126282 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-hostproc\") pod \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " Sep 8 23:50:47.126472 kubelet[2711]: I0908 23:50:47.126327 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-host-proc-sys-kernel\") pod \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " Sep 8 23:50:47.126472 kubelet[2711]: I0908 23:50:47.126377 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-host-proc-sys-net\") pod \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " Sep 8 23:50:47.126472 kubelet[2711]: I0908 23:50:47.126398 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-bpf-maps\") pod \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " Sep 8 23:50:47.126472 kubelet[2711]: I0908 23:50:47.126421 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-clustermesh-secrets\") pod \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " Sep 8 23:50:47.126472 kubelet[2711]: I0908 23:50:47.126442 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-xtables-lock\") pod \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " Sep 8 23:50:47.126780 kubelet[2711]: I0908 23:50:47.126741 2711 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-hostproc" (OuterVolumeSpecName: "hostproc") pod "7cb5ce2e-2283-4545-9a13-59b22f04c5ea" (UID: "7cb5ce2e-2283-4545-9a13-59b22f04c5ea"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:50:47.126858 kubelet[2711]: I0908 23:50:47.126807 2711 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7cb5ce2e-2283-4545-9a13-59b22f04c5ea" (UID: "7cb5ce2e-2283-4545-9a13-59b22f04c5ea"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:50:47.126858 kubelet[2711]: I0908 23:50:47.126846 2711 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-cni-path" (OuterVolumeSpecName: "cni-path") pod "7cb5ce2e-2283-4545-9a13-59b22f04c5ea" (UID: "7cb5ce2e-2283-4545-9a13-59b22f04c5ea"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:50:47.126915 kubelet[2711]: I0908 23:50:47.126878 2711 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7cb5ce2e-2283-4545-9a13-59b22f04c5ea" (UID: "7cb5ce2e-2283-4545-9a13-59b22f04c5ea"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:50:47.126952 kubelet[2711]: I0908 23:50:47.126909 2711 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7cb5ce2e-2283-4545-9a13-59b22f04c5ea" (UID: "7cb5ce2e-2283-4545-9a13-59b22f04c5ea"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:50:47.127040 kubelet[2711]: I0908 23:50:47.127022 2711 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7cb5ce2e-2283-4545-9a13-59b22f04c5ea" (UID: "7cb5ce2e-2283-4545-9a13-59b22f04c5ea"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:50:47.127040 kubelet[2711]: I0908 23:50:47.127027 2711 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7cb5ce2e-2283-4545-9a13-59b22f04c5ea" (UID: "7cb5ce2e-2283-4545-9a13-59b22f04c5ea"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:50:47.127212 kubelet[2711]: I0908 23:50:47.127193 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-lib-modules\") pod \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " Sep 8 23:50:47.127247 kubelet[2711]: I0908 23:50:47.127223 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-cilium-cgroup\") pod \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " Sep 8 23:50:47.127284 kubelet[2711]: I0908 23:50:47.127245 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-cilium-run\") pod \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " Sep 8 23:50:47.127284 kubelet[2711]: I0908 23:50:47.127267 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-hubble-tls\") pod \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\" (UID: \"7cb5ce2e-2283-4545-9a13-59b22f04c5ea\") " Sep 8 23:50:47.127499 kubelet[2711]: I0908 23:50:47.127351 2711 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7cb5ce2e-2283-4545-9a13-59b22f04c5ea" (UID: "7cb5ce2e-2283-4545-9a13-59b22f04c5ea"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:50:47.127878 kubelet[2711]: I0908 23:50:47.127595 2711 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:47.127878 kubelet[2711]: I0908 23:50:47.127612 2711 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7cb5ce2e-2283-4545-9a13-59b22f04c5ea" (UID: "7cb5ce2e-2283-4545-9a13-59b22f04c5ea"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:50:47.127878 kubelet[2711]: I0908 23:50:47.127627 2711 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:47.127878 kubelet[2711]: I0908 23:50:47.127641 2711 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:47.127878 kubelet[2711]: I0908 23:50:47.127652 2711 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:47.127878 kubelet[2711]: I0908 23:50:47.127665 2711 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cxxz5\" (UniqueName: \"kubernetes.io/projected/b473829b-4236-4ab3-99ad-eaadd6471914-kube-api-access-cxxz5\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:47.127878 kubelet[2711]: I0908 23:50:47.127677 2711 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:47.128348 kubelet[2711]: I0908 23:50:47.127689 2711 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b473829b-4236-4ab3-99ad-eaadd6471914-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:47.128348 kubelet[2711]: I0908 23:50:47.127701 2711 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:47.128348 kubelet[2711]: I0908 23:50:47.127712 2711 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:47.128348 kubelet[2711]: I0908 23:50:47.127740 2711 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7cb5ce2e-2283-4545-9a13-59b22f04c5ea" (UID: "7cb5ce2e-2283-4545-9a13-59b22f04c5ea"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:50:47.132032 kubelet[2711]: I0908 23:50:47.131992 2711 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-kube-api-access-79bjv" (OuterVolumeSpecName: "kube-api-access-79bjv") pod "7cb5ce2e-2283-4545-9a13-59b22f04c5ea" (UID: "7cb5ce2e-2283-4545-9a13-59b22f04c5ea"). InnerVolumeSpecName "kube-api-access-79bjv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 8 23:50:47.133169 kubelet[2711]: I0908 23:50:47.133137 2711 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7cb5ce2e-2283-4545-9a13-59b22f04c5ea" (UID: "7cb5ce2e-2283-4545-9a13-59b22f04c5ea"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 8 23:50:47.137286 containerd[1512]: time="2025-09-08T23:50:47.137228709Z" level=info msg="RemoveContainer for \"ec3e4c3c953e8db75de71365a7c30940a32c405b1d1a99ac104d2d2917612c40\" returns successfully" Sep 8 23:50:47.137627 kubelet[2711]: I0908 23:50:47.137598 2711 scope.go:117] "RemoveContainer" containerID="ec3e4c3c953e8db75de71365a7c30940a32c405b1d1a99ac104d2d2917612c40" Sep 8 23:50:47.138038 containerd[1512]: time="2025-09-08T23:50:47.137974095Z" level=error msg="ContainerStatus for \"ec3e4c3c953e8db75de71365a7c30940a32c405b1d1a99ac104d2d2917612c40\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec3e4c3c953e8db75de71365a7c30940a32c405b1d1a99ac104d2d2917612c40\": not found" Sep 8 23:50:47.138252 kubelet[2711]: E0908 23:50:47.138224 2711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec3e4c3c953e8db75de71365a7c30940a32c405b1d1a99ac104d2d2917612c40\": not found" containerID="ec3e4c3c953e8db75de71365a7c30940a32c405b1d1a99ac104d2d2917612c40" Sep 8 23:50:47.138348 kubelet[2711]: I0908 23:50:47.138267 2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ec3e4c3c953e8db75de71365a7c30940a32c405b1d1a99ac104d2d2917612c40"} err="failed to get container status \"ec3e4c3c953e8db75de71365a7c30940a32c405b1d1a99ac104d2d2917612c40\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec3e4c3c953e8db75de71365a7c30940a32c405b1d1a99ac104d2d2917612c40\": not found" Sep 8 23:50:47.138401 kubelet[2711]: I0908 23:50:47.138353 2711 scope.go:117] "RemoveContainer" containerID="09fca61e3bfc4920575bf36e30001c7359b6b7d122a3683264abca48739eabf8" Sep 8 23:50:47.139129 kubelet[2711]: I0908 23:50:47.138970 2711 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7cb5ce2e-2283-4545-9a13-59b22f04c5ea" (UID: "7cb5ce2e-2283-4545-9a13-59b22f04c5ea"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 8 23:50:47.139505 containerd[1512]: time="2025-09-08T23:50:47.139479473Z" level=info msg="RemoveContainer for \"09fca61e3bfc4920575bf36e30001c7359b6b7d122a3683264abca48739eabf8\"" Sep 8 23:50:47.146759 containerd[1512]: time="2025-09-08T23:50:47.146716800Z" level=info msg="RemoveContainer for \"09fca61e3bfc4920575bf36e30001c7359b6b7d122a3683264abca48739eabf8\" returns successfully" Sep 8 23:50:47.146961 kubelet[2711]: I0908 23:50:47.146928 2711 scope.go:117] "RemoveContainer" containerID="87f75b01f242db99e9bb5102acf9ff9229f57126911876fd9d96618af6e5c7b4" Sep 8 23:50:47.148108 containerd[1512]: time="2025-09-08T23:50:47.148068770Z" level=info msg="RemoveContainer for \"87f75b01f242db99e9bb5102acf9ff9229f57126911876fd9d96618af6e5c7b4\"" Sep 8 23:50:47.151185 kubelet[2711]: I0908 23:50:47.151141 2711 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7cb5ce2e-2283-4545-9a13-59b22f04c5ea" (UID: "7cb5ce2e-2283-4545-9a13-59b22f04c5ea"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 8 23:50:47.154357 containerd[1512]: time="2025-09-08T23:50:47.154308796Z" level=info msg="RemoveContainer for \"87f75b01f242db99e9bb5102acf9ff9229f57126911876fd9d96618af6e5c7b4\" returns successfully" Sep 8 23:50:47.154541 kubelet[2711]: I0908 23:50:47.154504 2711 scope.go:117] "RemoveContainer" containerID="e406fa8e6d5e7de795f1f32815ae837e92e179039384d39e3c17c3dc4e8f3cee" Sep 8 23:50:47.155590 containerd[1512]: time="2025-09-08T23:50:47.155561057Z" level=info msg="RemoveContainer for \"e406fa8e6d5e7de795f1f32815ae837e92e179039384d39e3c17c3dc4e8f3cee\"" Sep 8 23:50:47.162021 containerd[1512]: time="2025-09-08T23:50:47.161972397Z" level=info msg="RemoveContainer for \"e406fa8e6d5e7de795f1f32815ae837e92e179039384d39e3c17c3dc4e8f3cee\" returns successfully" Sep 8 23:50:47.162280 kubelet[2711]: I0908 23:50:47.162252 2711 scope.go:117] "RemoveContainer" containerID="2f6a63ad48119e057cdfa8cde4350a0b52f1e8a66532716e3429b0ce7b8c3e02" Sep 8 23:50:47.163425 containerd[1512]: time="2025-09-08T23:50:47.163401702Z" level=info msg="RemoveContainer for \"2f6a63ad48119e057cdfa8cde4350a0b52f1e8a66532716e3429b0ce7b8c3e02\"" Sep 8 23:50:47.170663 containerd[1512]: time="2025-09-08T23:50:47.170615094Z" level=info msg="RemoveContainer for \"2f6a63ad48119e057cdfa8cde4350a0b52f1e8a66532716e3429b0ce7b8c3e02\" returns successfully" Sep 8 23:50:47.171048 kubelet[2711]: I0908 23:50:47.170905 2711 scope.go:117] "RemoveContainer" containerID="2dfafce4ea117bc17dd84641228d963f94dd8eadc66f81c2027a2ee80ea380d6" Sep 8 23:50:47.172385 containerd[1512]: time="2025-09-08T23:50:47.172345547Z" level=info msg="RemoveContainer for \"2dfafce4ea117bc17dd84641228d963f94dd8eadc66f81c2027a2ee80ea380d6\"" Sep 8 23:50:47.177453 containerd[1512]: time="2025-09-08T23:50:47.177408724Z" level=info msg="RemoveContainer for \"2dfafce4ea117bc17dd84641228d963f94dd8eadc66f81c2027a2ee80ea380d6\" returns successfully" Sep 8 23:50:47.177625 kubelet[2711]: I0908 23:50:47.177597 2711 scope.go:117] "RemoveContainer" containerID="09fca61e3bfc4920575bf36e30001c7359b6b7d122a3683264abca48739eabf8" Sep 8 23:50:47.177849 containerd[1512]: time="2025-09-08T23:50:47.177790604Z" level=error msg="ContainerStatus for \"09fca61e3bfc4920575bf36e30001c7359b6b7d122a3683264abca48739eabf8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"09fca61e3bfc4920575bf36e30001c7359b6b7d122a3683264abca48739eabf8\": not found" Sep 8 23:50:47.177979 kubelet[2711]: E0908 23:50:47.177954 2711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"09fca61e3bfc4920575bf36e30001c7359b6b7d122a3683264abca48739eabf8\": not found" containerID="09fca61e3bfc4920575bf36e30001c7359b6b7d122a3683264abca48739eabf8" Sep 8 23:50:47.178031 kubelet[2711]: I0908 23:50:47.177984 2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"09fca61e3bfc4920575bf36e30001c7359b6b7d122a3683264abca48739eabf8"} err="failed to get container status \"09fca61e3bfc4920575bf36e30001c7359b6b7d122a3683264abca48739eabf8\": rpc error: code = NotFound desc = an error occurred when try to find container \"09fca61e3bfc4920575bf36e30001c7359b6b7d122a3683264abca48739eabf8\": not found" Sep 8 23:50:47.178031 kubelet[2711]: I0908 23:50:47.178014 2711 scope.go:117] "RemoveContainer" containerID="87f75b01f242db99e9bb5102acf9ff9229f57126911876fd9d96618af6e5c7b4" Sep 8 23:50:47.178195 
containerd[1512]: time="2025-09-08T23:50:47.178155662Z" level=error msg="ContainerStatus for \"87f75b01f242db99e9bb5102acf9ff9229f57126911876fd9d96618af6e5c7b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"87f75b01f242db99e9bb5102acf9ff9229f57126911876fd9d96618af6e5c7b4\": not found" Sep 8 23:50:47.178363 kubelet[2711]: E0908 23:50:47.178327 2711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"87f75b01f242db99e9bb5102acf9ff9229f57126911876fd9d96618af6e5c7b4\": not found" containerID="87f75b01f242db99e9bb5102acf9ff9229f57126911876fd9d96618af6e5c7b4" Sep 8 23:50:47.178435 kubelet[2711]: I0908 23:50:47.178377 2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"87f75b01f242db99e9bb5102acf9ff9229f57126911876fd9d96618af6e5c7b4"} err="failed to get container status \"87f75b01f242db99e9bb5102acf9ff9229f57126911876fd9d96618af6e5c7b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"87f75b01f242db99e9bb5102acf9ff9229f57126911876fd9d96618af6e5c7b4\": not found" Sep 8 23:50:47.178435 kubelet[2711]: I0908 23:50:47.178409 2711 scope.go:117] "RemoveContainer" containerID="e406fa8e6d5e7de795f1f32815ae837e92e179039384d39e3c17c3dc4e8f3cee" Sep 8 23:50:47.178610 containerd[1512]: time="2025-09-08T23:50:47.178582948Z" level=error msg="ContainerStatus for \"e406fa8e6d5e7de795f1f32815ae837e92e179039384d39e3c17c3dc4e8f3cee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e406fa8e6d5e7de795f1f32815ae837e92e179039384d39e3c17c3dc4e8f3cee\": not found" Sep 8 23:50:47.178717 kubelet[2711]: E0908 23:50:47.178690 2711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e406fa8e6d5e7de795f1f32815ae837e92e179039384d39e3c17c3dc4e8f3cee\": not found" containerID="e406fa8e6d5e7de795f1f32815ae837e92e179039384d39e3c17c3dc4e8f3cee" Sep 8 23:50:47.178766 kubelet[2711]: I0908 23:50:47.178727 2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e406fa8e6d5e7de795f1f32815ae837e92e179039384d39e3c17c3dc4e8f3cee"} err="failed to get container status \"e406fa8e6d5e7de795f1f32815ae837e92e179039384d39e3c17c3dc4e8f3cee\": rpc error: code = NotFound desc = an error occurred when try to find container \"e406fa8e6d5e7de795f1f32815ae837e92e179039384d39e3c17c3dc4e8f3cee\": not found" Sep 8 23:50:47.178766 kubelet[2711]: I0908 23:50:47.178746 2711 scope.go:117] "RemoveContainer" containerID="2f6a63ad48119e057cdfa8cde4350a0b52f1e8a66532716e3429b0ce7b8c3e02" Sep 8 23:50:47.178929 containerd[1512]: time="2025-09-08T23:50:47.178905025Z" level=error msg="ContainerStatus for \"2f6a63ad48119e057cdfa8cde4350a0b52f1e8a66532716e3429b0ce7b8c3e02\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f6a63ad48119e057cdfa8cde4350a0b52f1e8a66532716e3429b0ce7b8c3e02\": not found" Sep 8 23:50:47.179021 kubelet[2711]: E0908 23:50:47.179002 2711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f6a63ad48119e057cdfa8cde4350a0b52f1e8a66532716e3429b0ce7b8c3e02\": not found" containerID="2f6a63ad48119e057cdfa8cde4350a0b52f1e8a66532716e3429b0ce7b8c3e02" Sep 8 23:50:47.179057 kubelet[2711]: I0908 23:50:47.179024 
2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f6a63ad48119e057cdfa8cde4350a0b52f1e8a66532716e3429b0ce7b8c3e02"} err="failed to get container status \"2f6a63ad48119e057cdfa8cde4350a0b52f1e8a66532716e3429b0ce7b8c3e02\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f6a63ad48119e057cdfa8cde4350a0b52f1e8a66532716e3429b0ce7b8c3e02\": not found" Sep 8 23:50:47.179057 kubelet[2711]: I0908 23:50:47.179038 2711 scope.go:117] "RemoveContainer" containerID="2dfafce4ea117bc17dd84641228d963f94dd8eadc66f81c2027a2ee80ea380d6" Sep 8 23:50:47.179255 containerd[1512]: time="2025-09-08T23:50:47.179218416Z" level=error msg="ContainerStatus for \"2dfafce4ea117bc17dd84641228d963f94dd8eadc66f81c2027a2ee80ea380d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2dfafce4ea117bc17dd84641228d963f94dd8eadc66f81c2027a2ee80ea380d6\": not found" Sep 8 23:50:47.179407 kubelet[2711]: E0908 23:50:47.179371 2711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2dfafce4ea117bc17dd84641228d963f94dd8eadc66f81c2027a2ee80ea380d6\": not found" containerID="2dfafce4ea117bc17dd84641228d963f94dd8eadc66f81c2027a2ee80ea380d6" Sep 8 23:50:47.179407 kubelet[2711]: I0908 23:50:47.179405 2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2dfafce4ea117bc17dd84641228d963f94dd8eadc66f81c2027a2ee80ea380d6"} err="failed to get container status \"2dfafce4ea117bc17dd84641228d963f94dd8eadc66f81c2027a2ee80ea380d6\": rpc error: code = NotFound desc = an error occurred when try to find container \"2dfafce4ea117bc17dd84641228d963f94dd8eadc66f81c2027a2ee80ea380d6\": not found" Sep 8 23:50:47.228063 kubelet[2711]: I0908 23:50:47.227992 2711 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:47.228063 kubelet[2711]: I0908 23:50:47.228045 2711 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:47.228063 kubelet[2711]: I0908 23:50:47.228059 2711 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:47.228063 kubelet[2711]: I0908 23:50:47.228072 2711 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:47.228063 kubelet[2711]: I0908 23:50:47.228084 2711 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:47.228409 kubelet[2711]: I0908 23:50:47.228098 2711 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-79bjv\" (UniqueName: \"kubernetes.io/projected/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-kube-api-access-79bjv\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:47.228409 kubelet[2711]: I0908 23:50:47.228117 2711 reconciler_common.go:299] "Volume 
detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7cb5ce2e-2283-4545-9a13-59b22f04c5ea-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:47.430567 systemd[1]: Removed slice kubepods-burstable-pod7cb5ce2e_2283_4545_9a13_59b22f04c5ea.slice - libcontainer container kubepods-burstable-pod7cb5ce2e_2283_4545_9a13_59b22f04c5ea.slice. Sep 8 23:50:47.430665 systemd[1]: kubepods-burstable-pod7cb5ce2e_2283_4545_9a13_59b22f04c5ea.slice: Consumed 7.659s CPU time, 125M memory peak, 240K read from disk, 13.3M written to disk. Sep 8 23:50:47.772401 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce-rootfs.mount: Deactivated successfully. Sep 8 23:50:47.772568 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce-shm.mount: Deactivated successfully. Sep 8 23:50:47.772669 systemd[1]: var-lib-kubelet-pods-b473829b\x2d4236\x2d4ab3\x2d99ad\x2deaadd6471914-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcxxz5.mount: Deactivated successfully. Sep 8 23:50:47.772770 systemd[1]: var-lib-kubelet-pods-7cb5ce2e\x2d2283\x2d4545\x2d9a13\x2d59b22f04c5ea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d79bjv.mount: Deactivated successfully. Sep 8 23:50:47.772863 systemd[1]: var-lib-kubelet-pods-7cb5ce2e\x2d2283\x2d4545\x2d9a13\x2d59b22f04c5ea-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 8 23:50:47.772949 systemd[1]: var-lib-kubelet-pods-7cb5ce2e\x2d2283\x2d4545\x2d9a13\x2d59b22f04c5ea-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 8 23:50:47.774966 kubelet[2711]: E0908 23:50:47.774920 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:48.776998 kubelet[2711]: I0908 23:50:48.776925 2711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cb5ce2e-2283-4545-9a13-59b22f04c5ea" path="/var/lib/kubelet/pods/7cb5ce2e-2283-4545-9a13-59b22f04c5ea/volumes" Sep 8 23:50:48.778125 kubelet[2711]: I0908 23:50:48.778091 2711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b473829b-4236-4ab3-99ad-eaadd6471914" path="/var/lib/kubelet/pods/b473829b-4236-4ab3-99ad-eaadd6471914/volumes" Sep 8 23:50:48.788308 sshd[4479]: Connection closed by 10.0.0.1 port 42410 Sep 8 23:50:48.789116 sshd-session[4476]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:48.805422 systemd[1]: sshd@29-10.0.0.19:22-10.0.0.1:42410.service: Deactivated successfully. Sep 8 23:50:48.808670 systemd[1]: session-30.scope: Deactivated successfully. Sep 8 23:50:48.810785 systemd-logind[1487]: Session 30 logged out. Waiting for processes to exit. Sep 8 23:50:48.816643 systemd[1]: Started sshd@30-10.0.0.19:22-10.0.0.1:42412.service - OpenSSH per-connection server daemon (10.0.0.1:42412). Sep 8 23:50:48.817869 systemd-logind[1487]: Removed session 30. Sep 8 23:50:48.861576 sshd[4643]: Accepted publickey for core from 10.0.0.1 port 42412 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:50:48.863388 sshd-session[4643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:48.868640 systemd-logind[1487]: New session 31 of user core. Sep 8 23:50:48.888660 systemd[1]: Started session-31.scope - Session 31 of User core. 
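
The ContainerStatus exchanges just above illustrate an idempotent-delete pattern: after RemoveContainer succeeds, the follow-up status probe gets NotFound from the runtime, and the kubelet records the error but treats the container as already gone. A minimal Go sketch of that pattern, with stand-in functions rather than the real CRI client:

    package main

    import (
    	"errors"
    	"fmt"
    )

    // errNotFound stands in for a gRPC NotFound from the runtime.
    var errNotFound = errors.New("not found")

    // removeContainer deletes a container and treats "not found" as
    // success: if the runtime has already pruned it, absence is the
    // desired end state and there is nothing left to do.
    func removeContainer(id string, remove func(string) error) error {
    	if err := remove(id); err != nil {
    		if errors.Is(err, errNotFound) {
    			fmt.Printf("container %q already gone, ignoring\n", id)
    			return nil
    		}
    		return err
    	}
    	return nil
    }

    func main() {
    	store := map[string]bool{"ec3e4c…": true}
    	remove := func(id string) error {
    		if !store[id] {
    			return errNotFound
    		}
    		delete(store, id)
    		return nil
    	}
    	fmt.Println(removeContainer("ec3e4c…", remove)) // first call deletes
    	fmt.Println(removeContainer("ec3e4c…", remove)) // second call is a no-op
    }
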
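
The deactivated .mount unit names above are systemd path escapes: '/' separators become '-', while bytes outside [A-Za-z0-9:_.] — including the literal '-' and '~' inside the kubelet paths — become \xNN (hence \x2d and \x7e). A rough sketch approximating systemd-escape --path semantics (the leading-dot rule is simplified here to per-component):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // escapeComponent escapes one path component the way the unit names
    // above show: alphanumerics, ':', '_' and a non-leading '.' pass
    // through; everything else becomes a \xNN byte escape.
    func escapeComponent(s string) string {
    	var b strings.Builder
    	for i := 0; i < len(s); i++ {
    		c := s[i]
    		switch {
    		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
    			c >= '0' && c <= '9', c == ':', c == '_':
    			b.WriteByte(c)
    		case c == '.' && i > 0:
    			b.WriteByte(c)
    		default:
    			fmt.Fprintf(&b, `\x%02x`, c)
    		}
    	}
    	return b.String()
    }

    // escapePath joins the escaped components with '-', mirroring how
    // "/var/lib/kubelet/..." becomes "var-lib-kubelet-...".
    func escapePath(p string) string {
    	parts := strings.Split(strings.Trim(p, "/"), "/")
    	for i, part := range parts {
    		parts[i] = escapeComponent(part)
    	}
    	return strings.Join(parts, "-")
    }

    func main() {
    	p := "/var/lib/kubelet/pods/7cb5ce2e-2283-4545-9a13-59b22f04c5ea/volumes/kubernetes.io~projected/hubble-tls"
    	fmt.Println(escapePath(p) + ".mount") // matches the unit name in the log
    }
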
Sep 8 23:50:49.521943 sshd[4646]: Connection closed by 10.0.0.1 port 42412 Sep 8 23:50:49.525125 sshd-session[4643]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:49.550768 systemd[1]: sshd@30-10.0.0.19:22-10.0.0.1:42412.service: Deactivated successfully. Sep 8 23:50:49.555232 systemd[1]: session-31.scope: Deactivated successfully. Sep 8 23:50:49.556956 systemd-logind[1487]: Session 31 logged out. Waiting for processes to exit. Sep 8 23:50:49.567610 systemd[1]: Started sshd@31-10.0.0.19:22-10.0.0.1:42422.service - OpenSSH per-connection server daemon (10.0.0.1:42422). Sep 8 23:50:49.572188 systemd-logind[1487]: Removed session 31. Sep 8 23:50:49.610372 systemd[1]: Created slice kubepods-burstable-poda7c6cf84_5b81_421c_b808_5e84dfb723cd.slice - libcontainer container kubepods-burstable-poda7c6cf84_5b81_421c_b808_5e84dfb723cd.slice. Sep 8 23:50:49.644581 kubelet[2711]: I0908 23:50:49.644469 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a7c6cf84-5b81-421c-b808-5e84dfb723cd-hostproc\") pod \"cilium-msbft\" (UID: \"a7c6cf84-5b81-421c-b808-5e84dfb723cd\") " pod="kube-system/cilium-msbft" Sep 8 23:50:49.644581 kubelet[2711]: I0908 23:50:49.644583 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a7c6cf84-5b81-421c-b808-5e84dfb723cd-cilium-cgroup\") pod \"cilium-msbft\" (UID: \"a7c6cf84-5b81-421c-b808-5e84dfb723cd\") " pod="kube-system/cilium-msbft" Sep 8 23:50:49.644801 kubelet[2711]: I0908 23:50:49.644608 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a7c6cf84-5b81-421c-b808-5e84dfb723cd-host-proc-sys-kernel\") pod \"cilium-msbft\" (UID: \"a7c6cf84-5b81-421c-b808-5e84dfb723cd\") " pod="kube-system/cilium-msbft" Sep 8 23:50:49.644801 kubelet[2711]: I0908 23:50:49.644624 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a7c6cf84-5b81-421c-b808-5e84dfb723cd-bpf-maps\") pod \"cilium-msbft\" (UID: \"a7c6cf84-5b81-421c-b808-5e84dfb723cd\") " pod="kube-system/cilium-msbft" Sep 8 23:50:49.644801 kubelet[2711]: I0908 23:50:49.644639 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7c6cf84-5b81-421c-b808-5e84dfb723cd-lib-modules\") pod \"cilium-msbft\" (UID: \"a7c6cf84-5b81-421c-b808-5e84dfb723cd\") " pod="kube-system/cilium-msbft" Sep 8 23:50:49.644801 kubelet[2711]: I0908 23:50:49.644655 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a7c6cf84-5b81-421c-b808-5e84dfb723cd-clustermesh-secrets\") pod \"cilium-msbft\" (UID: \"a7c6cf84-5b81-421c-b808-5e84dfb723cd\") " pod="kube-system/cilium-msbft" Sep 8 23:50:49.644801 kubelet[2711]: I0908 23:50:49.644669 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a7c6cf84-5b81-421c-b808-5e84dfb723cd-host-proc-sys-net\") pod \"cilium-msbft\" (UID: \"a7c6cf84-5b81-421c-b808-5e84dfb723cd\") " pod="kube-system/cilium-msbft" Sep 8 23:50:49.644801 kubelet[2711]: I0908 23:50:49.644688 2711 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a7c6cf84-5b81-421c-b808-5e84dfb723cd-hubble-tls\") pod \"cilium-msbft\" (UID: \"a7c6cf84-5b81-421c-b808-5e84dfb723cd\") " pod="kube-system/cilium-msbft" Sep 8 23:50:49.644939 kubelet[2711]: I0908 23:50:49.644702 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a7c6cf84-5b81-421c-b808-5e84dfb723cd-cilium-run\") pod \"cilium-msbft\" (UID: \"a7c6cf84-5b81-421c-b808-5e84dfb723cd\") " pod="kube-system/cilium-msbft" Sep 8 23:50:49.644939 kubelet[2711]: I0908 23:50:49.644719 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a7c6cf84-5b81-421c-b808-5e84dfb723cd-cilium-ipsec-secrets\") pod \"cilium-msbft\" (UID: \"a7c6cf84-5b81-421c-b808-5e84dfb723cd\") " pod="kube-system/cilium-msbft" Sep 8 23:50:49.644939 kubelet[2711]: I0908 23:50:49.644735 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a7c6cf84-5b81-421c-b808-5e84dfb723cd-cni-path\") pod \"cilium-msbft\" (UID: \"a7c6cf84-5b81-421c-b808-5e84dfb723cd\") " pod="kube-system/cilium-msbft" Sep 8 23:50:49.644939 kubelet[2711]: I0908 23:50:49.644752 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a7c6cf84-5b81-421c-b808-5e84dfb723cd-cilium-config-path\") pod \"cilium-msbft\" (UID: \"a7c6cf84-5b81-421c-b808-5e84dfb723cd\") " pod="kube-system/cilium-msbft" Sep 8 23:50:49.644939 kubelet[2711]: I0908 23:50:49.644768 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxwzt\" (UniqueName: \"kubernetes.io/projected/a7c6cf84-5b81-421c-b808-5e84dfb723cd-kube-api-access-xxwzt\") pod \"cilium-msbft\" (UID: \"a7c6cf84-5b81-421c-b808-5e84dfb723cd\") " pod="kube-system/cilium-msbft" Sep 8 23:50:49.644939 kubelet[2711]: I0908 23:50:49.644795 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a7c6cf84-5b81-421c-b808-5e84dfb723cd-etc-cni-netd\") pod \"cilium-msbft\" (UID: \"a7c6cf84-5b81-421c-b808-5e84dfb723cd\") " pod="kube-system/cilium-msbft" Sep 8 23:50:49.645090 kubelet[2711]: I0908 23:50:49.644815 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7c6cf84-5b81-421c-b808-5e84dfb723cd-xtables-lock\") pod \"cilium-msbft\" (UID: \"a7c6cf84-5b81-421c-b808-5e84dfb723cd\") " pod="kube-system/cilium-msbft" Sep 8 23:50:49.646946 sshd[4657]: Accepted publickey for core from 10.0.0.1 port 42422 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:50:49.649114 sshd-session[4657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:49.656187 systemd-logind[1487]: New session 32 of user core. Sep 8 23:50:49.663534 systemd[1]: Started session-32.scope - Session 32 of User core. 
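
The UnmountVolume entries earlier and the VerifyControllerAttachedVolume entries here are two sides of one desired-state loop: the reconciler diffs what should be attached and mounted for the pods on the node against what actually is. A minimal sketch of that diff, with illustrative types rather than kubelet's:

    package main

    import "fmt"

    // reconcile compares the desired volume set against what is actually
    // mounted, unmounting extras and mounting what is missing -- the same
    // shape as the reconciler_common loop logged above.
    func reconcile(desired, actual map[string]bool) (mount, unmount []string) {
    	for name := range actual {
    		if !desired[name] {
    			unmount = append(unmount, name) // e.g. the old pod's hostproc
    		}
    	}
    	for name := range desired {
    		if !actual[name] {
    			mount = append(mount, name) // e.g. the new pod's cilium-run
    		}
    	}
    	return mount, unmount
    }

    func main() {
    	actual := map[string]bool{"hostproc": true, "xtables-lock": true}
    	desired := map[string]bool{"xtables-lock": true, "cilium-run": true}
    	m, u := reconcile(desired, actual)
    	fmt.Println("mount:", m, "unmount:", u)
    }
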
Sep 8 23:50:49.718188 sshd[4660]: Connection closed by 10.0.0.1 port 42422 Sep 8 23:50:49.718600 sshd-session[4657]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:49.731399 update_engine[1489]: I20250908 23:50:49.731303 1489 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 8 23:50:49.731969 update_engine[1489]: I20250908 23:50:49.731674 1489 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 8 23:50:49.731468 systemd[1]: sshd@31-10.0.0.19:22-10.0.0.1:42422.service: Deactivated successfully. Sep 8 23:50:49.732074 update_engine[1489]: I20250908 23:50:49.732039 1489 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 8 23:50:49.734269 systemd[1]: session-32.scope: Deactivated successfully. Sep 8 23:50:49.736356 systemd-logind[1487]: Session 32 logged out. Waiting for processes to exit. Sep 8 23:50:49.738531 update_engine[1489]: E20250908 23:50:49.738471 1489 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 8 23:50:49.738586 update_engine[1489]: I20250908 23:50:49.738559 1489 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 8 23:50:49.745774 systemd[1]: Started sshd@32-10.0.0.19:22-10.0.0.1:42438.service - OpenSSH per-connection server daemon (10.0.0.1:42438). Sep 8 23:50:49.748417 systemd-logind[1487]: Removed session 32. Sep 8 23:50:49.797180 sshd[4666]: Accepted publickey for core from 10.0.0.1 port 42438 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:50:49.799001 sshd-session[4666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:49.804261 systemd-logind[1487]: New session 33 of user core. Sep 8 23:50:49.811552 systemd[1]: Started session-33.scope - Session 33 of User core. Sep 8 23:50:49.843011 kubelet[2711]: E0908 23:50:49.842956 2711 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 8 23:50:49.920841 kubelet[2711]: E0908 23:50:49.920782 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:49.922767 containerd[1512]: time="2025-09-08T23:50:49.922717042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-msbft,Uid:a7c6cf84-5b81-421c-b808-5e84dfb723cd,Namespace:kube-system,Attempt:0,}" Sep 8 23:50:50.029961 containerd[1512]: time="2025-09-08T23:50:50.029779227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:50:50.029961 containerd[1512]: time="2025-09-08T23:50:50.029878575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:50:50.029961 containerd[1512]: time="2025-09-08T23:50:50.029895067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:50:50.030198 containerd[1512]: time="2025-09-08T23:50:50.030008761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:50:50.052491 systemd[1]: Started cri-containerd-f2b31983849f538778717cf5daa1dad94ce60530984a8df4cefcdc50df5b99e3.scope - libcontainer container f2b31983849f538778717cf5daa1dad94ce60530984a8df4cefcdc50df5b99e3. 
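
Interleaved with the SSH traffic above, update_engine's fetcher arms a 1-second timeout source, fails to resolve its (intentionally disabled) host, and counts the attempt as "retry 3". A comparable bounded-retry fetch in Go; the URL mirrors the log's unresolvable "disabled" host, and the retry cap is illustrative:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // fetch tries a URL with a short per-attempt timeout and a bounded
    // number of retries, logging each failure like the fetcher above.
    func fetch(url string, retries int) (*http.Response, error) {
    	client := &http.Client{Timeout: 1 * time.Second} // "timeout source: 1 seconds"
    	var err error
    	for attempt := 1; attempt <= retries; attempt++ {
    		var resp *http.Response
    		resp, err = client.Get(url)
    		if err == nil {
    			return resp, nil
    		}
    		fmt.Printf("no HTTP response, retry %d: %v\n", attempt, err)
    		time.Sleep(1 * time.Second)
    	}
    	return nil, fmt.Errorf("giving up after %d retries: %w", retries, err)
    }

    func main() {
    	// "disabled" is not a resolvable host, exactly as in the log,
    	// so every attempt fails with a name-resolution error.
    	if _, err := fetch("http://disabled/", 3); err != nil {
    		fmt.Println(err)
    	}
    }
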
Sep 8 23:50:50.080024 containerd[1512]: time="2025-09-08T23:50:50.079932373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-msbft,Uid:a7c6cf84-5b81-421c-b808-5e84dfb723cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2b31983849f538778717cf5daa1dad94ce60530984a8df4cefcdc50df5b99e3\"" Sep 8 23:50:50.081167 kubelet[2711]: E0908 23:50:50.081109 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:50.094285 containerd[1512]: time="2025-09-08T23:50:50.094215883Z" level=info msg="CreateContainer within sandbox \"f2b31983849f538778717cf5daa1dad94ce60530984a8df4cefcdc50df5b99e3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 8 23:50:50.157637 containerd[1512]: time="2025-09-08T23:50:50.157567513Z" level=info msg="CreateContainer within sandbox \"f2b31983849f538778717cf5daa1dad94ce60530984a8df4cefcdc50df5b99e3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"93047dfc405eb28441bb3759c1d341bc9082c65275ee610c61766399b9dcfb48\"" Sep 8 23:50:50.158202 containerd[1512]: time="2025-09-08T23:50:50.158165250Z" level=info msg="StartContainer for \"93047dfc405eb28441bb3759c1d341bc9082c65275ee610c61766399b9dcfb48\"" Sep 8 23:50:50.191533 systemd[1]: Started cri-containerd-93047dfc405eb28441bb3759c1d341bc9082c65275ee610c61766399b9dcfb48.scope - libcontainer container 93047dfc405eb28441bb3759c1d341bc9082c65275ee610c61766399b9dcfb48. Sep 8 23:50:50.241413 systemd[1]: cri-containerd-93047dfc405eb28441bb3759c1d341bc9082c65275ee610c61766399b9dcfb48.scope: Deactivated successfully. Sep 8 23:50:50.251845 containerd[1512]: time="2025-09-08T23:50:50.251740320Z" level=info msg="StartContainer for \"93047dfc405eb28441bb3759c1d341bc9082c65275ee610c61766399b9dcfb48\" returns successfully" Sep 8 23:50:50.344699 containerd[1512]: time="2025-09-08T23:50:50.344509632Z" level=info msg="shim disconnected" id=93047dfc405eb28441bb3759c1d341bc9082c65275ee610c61766399b9dcfb48 namespace=k8s.io Sep 8 23:50:50.344699 containerd[1512]: time="2025-09-08T23:50:50.344589754Z" level=warning msg="cleaning up after shim disconnected" id=93047dfc405eb28441bb3759c1d341bc9082c65275ee610c61766399b9dcfb48 namespace=k8s.io Sep 8 23:50:50.344699 containerd[1512]: time="2025-09-08T23:50:50.344600063Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:50:51.138090 kubelet[2711]: E0908 23:50:51.138040 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:51.152629 containerd[1512]: time="2025-09-08T23:50:51.152566274Z" level=info msg="CreateContainer within sandbox \"f2b31983849f538778717cf5daa1dad94ce60530984a8df4cefcdc50df5b99e3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 8 23:50:51.197621 containerd[1512]: time="2025-09-08T23:50:51.197540321Z" level=info msg="CreateContainer within sandbox \"f2b31983849f538778717cf5daa1dad94ce60530984a8df4cefcdc50df5b99e3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9fcb08bef4a806a7310e42680ed46a413efd707e4319ea2edb9088d392a940c8\"" Sep 8 23:50:51.203323 containerd[1512]: time="2025-09-08T23:50:51.203238664Z" level=info msg="StartContainer for \"9fcb08bef4a806a7310e42680ed46a413efd707e4319ea2edb9088d392a940c8\"" Sep 8 23:50:51.240671 systemd[1]: Started 
cri-containerd-9fcb08bef4a806a7310e42680ed46a413efd707e4319ea2edb9088d392a940c8.scope - libcontainer container 9fcb08bef4a806a7310e42680ed46a413efd707e4319ea2edb9088d392a940c8. Sep 8 23:50:51.284825 systemd[1]: cri-containerd-9fcb08bef4a806a7310e42680ed46a413efd707e4319ea2edb9088d392a940c8.scope: Deactivated successfully. Sep 8 23:50:51.285218 containerd[1512]: time="2025-09-08T23:50:51.285130841Z" level=info msg="StartContainer for \"9fcb08bef4a806a7310e42680ed46a413efd707e4319ea2edb9088d392a940c8\" returns successfully" Sep 8 23:50:51.331178 containerd[1512]: time="2025-09-08T23:50:51.331066632Z" level=info msg="shim disconnected" id=9fcb08bef4a806a7310e42680ed46a413efd707e4319ea2edb9088d392a940c8 namespace=k8s.io Sep 8 23:50:51.331178 containerd[1512]: time="2025-09-08T23:50:51.331153967Z" level=warning msg="cleaning up after shim disconnected" id=9fcb08bef4a806a7310e42680ed46a413efd707e4319ea2edb9088d392a940c8 namespace=k8s.io Sep 8 23:50:51.331178 containerd[1512]: time="2025-09-08T23:50:51.331167623Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:50:51.756110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fcb08bef4a806a7310e42680ed46a413efd707e4319ea2edb9088d392a940c8-rootfs.mount: Deactivated successfully. Sep 8 23:50:52.142122 kubelet[2711]: E0908 23:50:52.142076 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:52.159495 containerd[1512]: time="2025-09-08T23:50:52.159412312Z" level=info msg="CreateContainer within sandbox \"f2b31983849f538778717cf5daa1dad94ce60530984a8df4cefcdc50df5b99e3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 8 23:50:52.219698 containerd[1512]: time="2025-09-08T23:50:52.219623841Z" level=info msg="CreateContainer within sandbox \"f2b31983849f538778717cf5daa1dad94ce60530984a8df4cefcdc50df5b99e3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"046322d55c8cf7d2b9943bd6e99cb4be46e8e0349fd654130cead9c432103cfc\"" Sep 8 23:50:52.220493 containerd[1512]: time="2025-09-08T23:50:52.220230084Z" level=info msg="StartContainer for \"046322d55c8cf7d2b9943bd6e99cb4be46e8e0349fd654130cead9c432103cfc\"" Sep 8 23:50:52.258600 systemd[1]: Started cri-containerd-046322d55c8cf7d2b9943bd6e99cb4be46e8e0349fd654130cead9c432103cfc.scope - libcontainer container 046322d55c8cf7d2b9943bd6e99cb4be46e8e0349fd654130cead9c432103cfc. Sep 8 23:50:52.299272 systemd[1]: cri-containerd-046322d55c8cf7d2b9943bd6e99cb4be46e8e0349fd654130cead9c432103cfc.scope: Deactivated successfully. 
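
mount-cgroup and apply-sysctl-overwrites above are the first of the Cilium pod's init containers: each scope is deactivated as soon as its entrypoint exits, and only then does the next container start. A sketch of that run-to-completion sequencing, using placeholder commands instead of the real Cilium entrypoints:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runInitSteps executes each step to completion, in order, stopping
    // at the first failure -- the init-container contract visible above,
    // where mount-cgroup must exit 0 before apply-sysctl-overwrites runs.
    func runInitSteps(steps map[string][]string, order []string) error {
    	for _, name := range order {
    		argv := steps[name]
    		cmd := exec.Command(argv[0], argv[1:]...)
    		if out, err := cmd.CombinedOutput(); err != nil {
    			return fmt.Errorf("step %s failed: %v: %s", name, err, out)
    		}
    		fmt.Printf("StartContainer for %q returns successfully\n", name)
    	}
    	return nil
    }

    func main() {
    	order := []string{"mount-cgroup", "apply-sysctl-overwrites"}
    	steps := map[string][]string{
    		// placeholder commands; the real containers run Cilium tooling
    		"mount-cgroup":            {"true"},
    		"apply-sysctl-overwrites": {"true"},
    	}
    	if err := runInitSteps(steps, order); err != nil {
    		fmt.Println(err)
    	}
    }
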
Sep 8 23:50:52.376250 containerd[1512]: time="2025-09-08T23:50:52.376187420Z" level=info msg="StartContainer for \"046322d55c8cf7d2b9943bd6e99cb4be46e8e0349fd654130cead9c432103cfc\" returns successfully" Sep 8 23:50:52.446615 containerd[1512]: time="2025-09-08T23:50:52.446411034Z" level=info msg="shim disconnected" id=046322d55c8cf7d2b9943bd6e99cb4be46e8e0349fd654130cead9c432103cfc namespace=k8s.io Sep 8 23:50:52.446615 containerd[1512]: time="2025-09-08T23:50:52.446495612Z" level=warning msg="cleaning up after shim disconnected" id=046322d55c8cf7d2b9943bd6e99cb4be46e8e0349fd654130cead9c432103cfc namespace=k8s.io Sep 8 23:50:52.446615 containerd[1512]: time="2025-09-08T23:50:52.446511814Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:50:52.756653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-046322d55c8cf7d2b9943bd6e99cb4be46e8e0349fd654130cead9c432103cfc-rootfs.mount: Deactivated successfully. Sep 8 23:50:53.145227 kubelet[2711]: E0908 23:50:53.145194 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:53.375932 containerd[1512]: time="2025-09-08T23:50:53.375873527Z" level=info msg="CreateContainer within sandbox \"f2b31983849f538778717cf5daa1dad94ce60530984a8df4cefcdc50df5b99e3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 8 23:50:53.931716 containerd[1512]: time="2025-09-08T23:50:53.931631490Z" level=info msg="CreateContainer within sandbox \"f2b31983849f538778717cf5daa1dad94ce60530984a8df4cefcdc50df5b99e3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"10ec4cc1a0340983d4cb630ca6bde20ad86baa0eec3b3fc02132117e11d5740a\"" Sep 8 23:50:53.932421 containerd[1512]: time="2025-09-08T23:50:53.932357418Z" level=info msg="StartContainer for \"10ec4cc1a0340983d4cb630ca6bde20ad86baa0eec3b3fc02132117e11d5740a\"" Sep 8 23:50:53.970515 systemd[1]: Started cri-containerd-10ec4cc1a0340983d4cb630ca6bde20ad86baa0eec3b3fc02132117e11d5740a.scope - libcontainer container 10ec4cc1a0340983d4cb630ca6bde20ad86baa0eec3b3fc02132117e11d5740a. Sep 8 23:50:53.995952 systemd[1]: cri-containerd-10ec4cc1a0340983d4cb630ca6bde20ad86baa0eec3b3fc02132117e11d5740a.scope: Deactivated successfully. Sep 8 23:50:54.113715 containerd[1512]: time="2025-09-08T23:50:54.113634383Z" level=info msg="StartContainer for \"10ec4cc1a0340983d4cb630ca6bde20ad86baa0eec3b3fc02132117e11d5740a\" returns successfully" Sep 8 23:50:54.137558 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10ec4cc1a0340983d4cb630ca6bde20ad86baa0eec3b3fc02132117e11d5740a-rootfs.mount: Deactivated successfully. 
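
The recurring dns.go:153 error is the classic resolv.conf ceiling: the resolver honors only the first three nameserver entries, so the kubelet warns and applies "1.1.1.1 1.0.0.1 8.8.8.8", dropping the rest. A sketch of that truncation, assuming the conventional limit of three:

    package main

    import "fmt"

    const maxNameservers = 3 // glibc's MAXNS; extra entries are ignored

    // applyNameserverLimit keeps the first maxNameservers entries and
    // reports whether anything was dropped, like the warning above.
    func applyNameserverLimit(servers []string) (applied []string, omitted bool) {
    	if len(servers) <= maxNameservers {
    		return servers, false
    	}
    	return servers[:maxNameservers], true
    }

    func main() {
    	servers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
    	applied, omitted := applyNameserverLimit(servers)
    	if omitted {
    		fmt.Printf("Nameserver limits were exceeded, the applied nameserver line is: %v\n", applied)
    	}
    }
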
Sep 8 23:50:54.148002 containerd[1512]: time="2025-09-08T23:50:54.147941872Z" level=info msg="shim disconnected" id=10ec4cc1a0340983d4cb630ca6bde20ad86baa0eec3b3fc02132117e11d5740a namespace=k8s.io Sep 8 23:50:54.148002 containerd[1512]: time="2025-09-08T23:50:54.147989362Z" level=warning msg="cleaning up after shim disconnected" id=10ec4cc1a0340983d4cb630ca6bde20ad86baa0eec3b3fc02132117e11d5740a namespace=k8s.io Sep 8 23:50:54.148002 containerd[1512]: time="2025-09-08T23:50:54.147997497Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:50:54.149603 kubelet[2711]: E0908 23:50:54.149425 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:54.753499 containerd[1512]: time="2025-09-08T23:50:54.753428650Z" level=info msg="StopPodSandbox for \"6c4c87e41c23bd4334e355ca9591ee64344fdac2c8e925c5945b640e11f1ad52\"" Sep 8 23:50:54.754043 containerd[1512]: time="2025-09-08T23:50:54.753578692Z" level=info msg="TearDown network for sandbox \"6c4c87e41c23bd4334e355ca9591ee64344fdac2c8e925c5945b640e11f1ad52\" successfully" Sep 8 23:50:54.754043 containerd[1512]: time="2025-09-08T23:50:54.753595274Z" level=info msg="StopPodSandbox for \"6c4c87e41c23bd4334e355ca9591ee64344fdac2c8e925c5945b640e11f1ad52\" returns successfully" Sep 8 23:50:54.754442 containerd[1512]: time="2025-09-08T23:50:54.754387828Z" level=info msg="RemovePodSandbox for \"6c4c87e41c23bd4334e355ca9591ee64344fdac2c8e925c5945b640e11f1ad52\"" Sep 8 23:50:54.754516 containerd[1512]: time="2025-09-08T23:50:54.754451017Z" level=info msg="Forcibly stopping sandbox \"6c4c87e41c23bd4334e355ca9591ee64344fdac2c8e925c5945b640e11f1ad52\"" Sep 8 23:50:54.754609 containerd[1512]: time="2025-09-08T23:50:54.754551196Z" level=info msg="TearDown network for sandbox \"6c4c87e41c23bd4334e355ca9591ee64344fdac2c8e925c5945b640e11f1ad52\" successfully" Sep 8 23:50:54.759808 containerd[1512]: time="2025-09-08T23:50:54.759742632Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c4c87e41c23bd4334e355ca9591ee64344fdac2c8e925c5945b640e11f1ad52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 8 23:50:54.759885 containerd[1512]: time="2025-09-08T23:50:54.759830387Z" level=info msg="RemovePodSandbox \"6c4c87e41c23bd4334e355ca9591ee64344fdac2c8e925c5945b640e11f1ad52\" returns successfully" Sep 8 23:50:54.760284 containerd[1512]: time="2025-09-08T23:50:54.760251241Z" level=info msg="StopPodSandbox for \"7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce\"" Sep 8 23:50:54.760505 containerd[1512]: time="2025-09-08T23:50:54.760405351Z" level=info msg="TearDown network for sandbox \"7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce\" successfully" Sep 8 23:50:54.760505 containerd[1512]: time="2025-09-08T23:50:54.760433164Z" level=info msg="StopPodSandbox for \"7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce\" returns successfully" Sep 8 23:50:54.760836 containerd[1512]: time="2025-09-08T23:50:54.760807950Z" level=info msg="RemovePodSandbox for \"7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce\"" Sep 8 23:50:54.760836 containerd[1512]: time="2025-09-08T23:50:54.760832697Z" level=info msg="Forcibly stopping sandbox \"7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce\"" Sep 8 23:50:54.761021 containerd[1512]: time="2025-09-08T23:50:54.760906175Z" level=info msg="TearDown network for sandbox \"7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce\" successfully" Sep 8 23:50:54.764562 containerd[1512]: time="2025-09-08T23:50:54.764514897Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 8 23:50:54.764562 containerd[1512]: time="2025-09-08T23:50:54.764559252Z" level=info msg="RemovePodSandbox \"7f54bb257edfddaeaff3d6e06d56c5e5b3e7089af7842a69bc13ea19b859ccce\" returns successfully" Sep 8 23:50:54.844364 kubelet[2711]: E0908 23:50:54.844315 2711 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 8 23:50:55.153191 kubelet[2711]: E0908 23:50:55.153148 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:55.159625 containerd[1512]: time="2025-09-08T23:50:55.159573995Z" level=info msg="CreateContainer within sandbox \"f2b31983849f538778717cf5daa1dad94ce60530984a8df4cefcdc50df5b99e3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 8 23:50:55.179613 containerd[1512]: time="2025-09-08T23:50:55.179562492Z" level=info msg="CreateContainer within sandbox \"f2b31983849f538778717cf5daa1dad94ce60530984a8df4cefcdc50df5b99e3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7513047cfcc63f5274cf52ce73ea23fcad287bf8d95f4fff33b7d84bb9ef17ac\"" Sep 8 23:50:55.180095 containerd[1512]: time="2025-09-08T23:50:55.180054118Z" level=info msg="StartContainer for \"7513047cfcc63f5274cf52ce73ea23fcad287bf8d95f4fff33b7d84bb9ef17ac\"" Sep 8 23:50:55.214509 systemd[1]: Started cri-containerd-7513047cfcc63f5274cf52ce73ea23fcad287bf8d95f4fff33b7d84bb9ef17ac.scope - libcontainer container 7513047cfcc63f5274cf52ce73ea23fcad287bf8d95f4fff33b7d84bb9ef17ac. 
Sep 8 23:50:55.247355 containerd[1512]: time="2025-09-08T23:50:55.247305623Z" level=info msg="StartContainer for \"7513047cfcc63f5274cf52ce73ea23fcad287bf8d95f4fff33b7d84bb9ef17ac\" returns successfully" Sep 8 23:50:55.695368 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 8 23:50:56.157875 kubelet[2711]: E0908 23:50:56.157836 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:56.173159 systemd[1]: run-containerd-runc-k8s.io-7513047cfcc63f5274cf52ce73ea23fcad287bf8d95f4fff33b7d84bb9ef17ac-runc.kPOmmE.mount: Deactivated successfully. Sep 8 23:50:56.176538 kubelet[2711]: I0908 23:50:56.176175 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-msbft" podStartSLOduration=7.176150398 podStartE2EDuration="7.176150398s" podCreationTimestamp="2025-09-08 23:50:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:50:56.175710639 +0000 UTC m=+121.519000814" watchObservedRunningTime="2025-09-08 23:50:56.176150398 +0000 UTC m=+121.519440574" Sep 8 23:50:56.312033 systemd[1]: run-containerd-runc-k8s.io-7513047cfcc63f5274cf52ce73ea23fcad287bf8d95f4fff33b7d84bb9ef17ac-runc.nMue5b.mount: Deactivated successfully. Sep 8 23:50:57.162140 kubelet[2711]: E0908 23:50:57.162087 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:57.961979 kubelet[2711]: I0908 23:50:57.961907 2711 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-08T23:50:57Z","lastTransitionTime":"2025-09-08T23:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 8 23:50:58.163677 kubelet[2711]: E0908 23:50:58.163611 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:59.224483 systemd-networkd[1418]: lxc_health: Link UP Sep 8 23:50:59.224922 systemd-networkd[1418]: lxc_health: Gained carrier Sep 8 23:50:59.730776 update_engine[1489]: I20250908 23:50:59.730664 1489 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 8 23:50:59.731370 update_engine[1489]: I20250908 23:50:59.730990 1489 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 8 23:50:59.731370 update_engine[1489]: I20250908 23:50:59.731331 1489 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 8 23:50:59.742792 update_engine[1489]: E20250908 23:50:59.742664 1489 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 8 23:50:59.742792 update_engine[1489]: I20250908 23:50:59.742806 1489 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 8 23:50:59.743072 update_engine[1489]: I20250908 23:50:59.742825 1489 omaha_request_action.cc:617] Omaha request response: Sep 8 23:50:59.743072 update_engine[1489]: E20250908 23:50:59.742980 1489 omaha_request_action.cc:636] Omaha request network transfer failed. 
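
The pod_startup_latency_tracker line above is plain arithmetic: podStartSLOduration is the gap between podCreationTimestamp (23:50:49) and watchObservedRunningTime (23:50:56.176150398), i.e. 7.176150398s; the pull timestamps are zero because the images were already present on the node. The same computation in Go:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Go's default time format, which the kubelet log uses verbatim.
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, _ := time.Parse(layout, "2025-09-08 23:50:49 +0000 UTC")
    	observed, _ := time.Parse(layout, "2025-09-08 23:50:56.176150398 +0000 UTC")
    	// podStartSLOduration is simply the gap between creation and the
    	// watch-observed running time: 7.176150398s, as in the log above.
    	fmt.Println(observed.Sub(created))
    }
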
Sep 8 23:50:59.743149 update_engine[1489]: I20250908 23:50:59.743117 1489 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 8 23:50:59.743149 update_engine[1489]: I20250908 23:50:59.743132 1489 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 8 23:50:59.743149 update_engine[1489]: I20250908 23:50:59.743144 1489 update_attempter.cc:306] Processing Done. Sep 8 23:50:59.743275 update_engine[1489]: E20250908 23:50:59.743187 1489 update_attempter.cc:619] Update failed. Sep 8 23:50:59.743275 update_engine[1489]: I20250908 23:50:59.743204 1489 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 8 23:50:59.743275 update_engine[1489]: I20250908 23:50:59.743214 1489 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 8 23:50:59.743275 update_engine[1489]: I20250908 23:50:59.743224 1489 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 8 23:50:59.743518 update_engine[1489]: I20250908 23:50:59.743438 1489 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 8 23:50:59.743518 update_engine[1489]: I20250908 23:50:59.743490 1489 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 8 23:50:59.743518 update_engine[1489]: I20250908 23:50:59.743498 1489 omaha_request_action.cc:272] Request: Sep 8 23:50:59.743518 update_engine[1489]: Sep 8 23:50:59.743518 update_engine[1489]: Sep 8 23:50:59.743518 update_engine[1489]: Sep 8 23:50:59.743518 update_engine[1489]: Sep 8 23:50:59.743518 update_engine[1489]: Sep 8 23:50:59.743518 update_engine[1489]: Sep 8 23:50:59.743518 update_engine[1489]: I20250908 23:50:59.743506 1489 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 8 23:50:59.744193 update_engine[1489]: I20250908 23:50:59.743727 1489 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 8 23:50:59.744193 update_engine[1489]: I20250908 23:50:59.744005 1489 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 8 23:50:59.744748 locksmithd[1530]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 8 23:50:59.757963 update_engine[1489]: E20250908 23:50:59.757864 1489 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 8 23:50:59.758027 update_engine[1489]: I20250908 23:50:59.758001 1489 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 8 23:50:59.758085 update_engine[1489]: I20250908 23:50:59.758019 1489 omaha_request_action.cc:617] Omaha request response: Sep 8 23:50:59.758085 update_engine[1489]: I20250908 23:50:59.758035 1489 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 8 23:50:59.758085 update_engine[1489]: I20250908 23:50:59.758046 1489 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 8 23:50:59.758085 update_engine[1489]: I20250908 23:50:59.758059 1489 update_attempter.cc:306] Processing Done. Sep 8 23:50:59.758085 update_engine[1489]: I20250908 23:50:59.758072 1489 update_attempter.cc:310] Error event sent. 
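
The failure path above converts the transport error into code 37 (kActionCodeOmahaErrorInHTTPResponse) but then declines to count it: "Ignoring failures until we get a valid Omaha response", so an unreachable update server cannot exhaust the payload's retry budget. A sketch of that guard, with a deliberately reduced state struct:

    package main

    import "fmt"

    // payloadState mirrors the rule logged above: failures are ignored
    // until at least one valid Omaha response has been seen.
    type payloadState struct {
    	sawValidResponse bool
    	failureCount     int
    }

    func (p *payloadState) recordError(code int) {
    	if !p.sawValidResponse {
    		fmt.Printf("Ignoring failures until we get a valid Omaha response. (code %d)\n", code)
    		return
    	}
    	p.failureCount++
    }

    func main() {
    	var p payloadState
    	p.recordError(37) // kActionCodeOmahaErrorInHTTPResponse, as in the log
    	fmt.Println("failures counted:", p.failureCount)
    }
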
Sep 8 23:50:59.758225 update_engine[1489]: I20250908 23:50:59.758101 1489 update_check_scheduler.cc:74] Next update check in 41m29s Sep 8 23:50:59.759005 locksmithd[1530]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 8 23:50:59.924103 kubelet[2711]: E0908 23:50:59.924055 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:51:00.169530 kubelet[2711]: E0908 23:51:00.169476 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:51:00.542498 systemd-networkd[1418]: lxc_health: Gained IPv6LL Sep 8 23:51:00.549928 systemd[1]: run-containerd-runc-k8s.io-7513047cfcc63f5274cf52ce73ea23fcad287bf8d95f4fff33b7d84bb9ef17ac-runc.a4bx6X.mount: Deactivated successfully. Sep 8 23:51:01.171474 kubelet[2711]: E0908 23:51:01.171427 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:51:04.897891 sshd[4673]: Connection closed by 10.0.0.1 port 42438 Sep 8 23:51:04.898562 sshd-session[4666]: pam_unix(sshd:session): session closed for user core Sep 8 23:51:04.903695 systemd[1]: sshd@32-10.0.0.19:22-10.0.0.1:42438.service: Deactivated successfully. Sep 8 23:51:04.906355 systemd[1]: session-33.scope: Deactivated successfully. Sep 8 23:51:04.907159 systemd-logind[1487]: Session 33 logged out. Waiting for processes to exit. Sep 8 23:51:04.908326 systemd-logind[1487]: Removed session 33.
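
"Next update check in 41m29s" is a fuzzed periodic interval, which keeps a fleet of machines from polling the update server in lockstep. A sketch of such jittered scheduling; the 45-minute base and ±5-minute window are assumptions here, not update_engine's actual constants:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // nextCheckDelay fuzzes a base interval by a uniform offset in
    // [-fuzz/2, +fuzz/2); the log's 41m29s is one draw from such a window.
    func nextCheckDelay(base, fuzz time.Duration) time.Duration {
    	offset := time.Duration(rand.Int63n(int64(fuzz))) - fuzz/2
    	return base + offset
    }

    func main() {
    	delay := nextCheckDelay(45*time.Minute, 10*time.Minute)
    	fmt.Printf("Next update check in %s\n", delay.Round(time.Second))
    	// time.AfterFunc(delay, runUpdateCheck) would arm the timer;
    	// runUpdateCheck is a hypothetical callback, not a real API.
    }
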