Sep 9 00:02:19.917370 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:08:00 -00 2025 Sep 9 00:02:19.917394 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=614c4ef85422d1b24559f161a4ad89cb626bb862dd1c761ed2d77c8a0665a1ae Sep 9 00:02:19.917406 kernel: BIOS-provided physical RAM map: Sep 9 00:02:19.917413 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 9 00:02:19.917420 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 9 00:02:19.917427 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 9 00:02:19.917435 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Sep 9 00:02:19.917442 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Sep 9 00:02:19.917448 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 9 00:02:19.917458 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Sep 9 00:02:19.917465 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 9 00:02:19.917472 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 9 00:02:19.917482 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 9 00:02:19.917489 kernel: NX (Execute Disable) protection: active Sep 9 00:02:19.917497 kernel: APIC: Static calls initialized Sep 9 00:02:19.917510 kernel: SMBIOS 2.8 present. 
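
The firmware RAM map and full kernel command line recorded above can be re-read from the booted node; a minimal sketch, assuming standard procfs paths and a journal that captures kernel messages:

    $ cat /proc/cmdline                   # the live command line, as logged above
    $ sudo dmesg | grep 'BIOS-e820'       # replay the firmware-provided RAM map
    $ journalctl -k -b | grep 'Command line'
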
Sep 9 00:02:19.917518 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Sep 9 00:02:19.917525 kernel: Hypervisor detected: KVM Sep 9 00:02:19.917533 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 9 00:02:19.917540 kernel: kvm-clock: using sched offset of 3814428215 cycles Sep 9 00:02:19.917548 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 9 00:02:19.917556 kernel: tsc: Detected 2794.748 MHz processor Sep 9 00:02:19.917563 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 9 00:02:19.917571 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 9 00:02:19.917579 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Sep 9 00:02:19.917590 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Sep 9 00:02:19.917597 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 9 00:02:19.917605 kernel: Using GB pages for direct mapping Sep 9 00:02:19.917613 kernel: ACPI: Early table checksum verification disabled Sep 9 00:02:19.917620 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Sep 9 00:02:19.917628 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:02:19.917635 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:02:19.917643 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:02:19.917650 kernel: ACPI: FACS 0x000000009CFE0000 000040 Sep 9 00:02:19.917661 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:02:19.917677 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:02:19.917685 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:02:19.917692 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:02:19.917700 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Sep 9 00:02:19.917707 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Sep 9 00:02:19.917720 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Sep 9 00:02:19.917730 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Sep 9 00:02:19.917738 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Sep 9 00:02:19.917746 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Sep 9 00:02:19.917754 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Sep 9 00:02:19.917764 kernel: No NUMA configuration found Sep 9 00:02:19.917771 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Sep 9 00:02:19.917779 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Sep 9 00:02:19.917790 kernel: Zone ranges: Sep 9 00:02:19.917798 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 9 00:02:19.917806 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Sep 9 00:02:19.917813 kernel: Normal empty Sep 9 00:02:19.917822 kernel: Movable zone start for each node Sep 9 00:02:19.917831 kernel: Early memory node ranges Sep 9 00:02:19.917838 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 9 00:02:19.917846 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Sep 9 00:02:19.917853 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 9 00:02:19.917863 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 9 00:02:19.917873 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 9 00:02:19.917881 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Sep 9 00:02:19.917889 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 9 00:02:19.917896 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 9 00:02:19.917904 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 9 00:02:19.917911 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 9 00:02:19.917919 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 9 00:02:19.917927 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 9 00:02:19.917934 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 9 00:02:19.917945 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 9 00:02:19.917952 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 9 00:02:19.917960 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 9 00:02:19.917967 kernel: TSC deadline timer available Sep 9 00:02:19.917975 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 9 00:02:19.917982 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 9 00:02:19.917990 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 9 00:02:19.918000 kernel: kvm-guest: setup PV sched yield Sep 9 00:02:19.918007 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Sep 9 00:02:19.918018 kernel: Booting paravirtualized kernel on KVM Sep 9 00:02:19.918026 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 9 00:02:19.918033 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 9 00:02:19.918041 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288 Sep 9 00:02:19.918049 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152 Sep 9 00:02:19.918056 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 9 00:02:19.918063 kernel: kvm-guest: PV spinlocks enabled Sep 9 00:02:19.918071 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 9 00:02:19.918122 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=614c4ef85422d1b24559f161a4ad89cb626bb862dd1c761ed2d77c8a0665a1ae Sep 9 00:02:19.918154 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 9 00:02:19.918163 kernel: random: crng init done Sep 9 00:02:19.918171 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 9 00:02:19.918178 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 9 00:02:19.918186 kernel: Fallback order for Node 0: 0
Sep 9 00:02:19.918194 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Sep 9 00:02:19.918201 kernel: Policy zone: DMA32 Sep 9 00:02:19.918209 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 9 00:02:19.918221 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2293K rwdata, 22868K rodata, 43504K init, 1572K bss, 138948K reserved, 0K cma-reserved) Sep 9 00:02:19.918228 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 9 00:02:19.918236 kernel: ftrace: allocating 37943 entries in 149 pages Sep 9 00:02:19.918244 kernel: ftrace: allocated 149 pages with 4 groups Sep 9 00:02:19.918252 kernel: Dynamic Preempt: voluntary Sep 9 00:02:19.918259 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 9 00:02:19.918267 kernel: rcu: RCU event tracing is enabled. Sep 9 00:02:19.918275 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 9 00:02:19.918283 kernel: Trampoline variant of Tasks RCU enabled. Sep 9 00:02:19.918294 kernel: Rude variant of Tasks RCU enabled. Sep 9 00:02:19.918302 kernel: Tracing variant of Tasks RCU enabled. Sep 9 00:02:19.918309 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 9 00:02:19.918320 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 9 00:02:19.918328 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 9 00:02:19.918335 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 9 00:02:19.918343 kernel: Console: colour VGA+ 80x25 Sep 9 00:02:19.918351 kernel: printk: console [ttyS0] enabled Sep 9 00:02:19.918358 kernel: ACPI: Core revision 20230628 Sep 9 00:02:19.918369 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 9 00:02:19.918377 kernel: APIC: Switch to symmetric I/O mode setup Sep 9 00:02:19.918384 kernel: x2apic enabled Sep 9 00:02:19.918392 kernel: APIC: Switched APIC routing to: physical x2apic Sep 9 00:02:19.918400 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 9 00:02:19.918407 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 9 00:02:19.918415 kernel: kvm-guest: setup PV IPIs Sep 9 00:02:19.918433 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 9 00:02:19.918441 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 9 00:02:19.918449 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Sep 9 00:02:19.918457 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 9 00:02:19.918465 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 9 00:02:19.918475 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 9 00:02:19.918483 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 9 00:02:19.918491 kernel: Spectre V2 : Mitigation: Retpolines Sep 9 00:02:19.918499 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 9 00:02:19.918507 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 9 00:02:19.918518 kernel: active return thunk: retbleed_return_thunk Sep 9 00:02:19.918528 kernel: RETBleed: Mitigation: untrained return thunk Sep 9 00:02:19.918537 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 9 00:02:19.918545 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 9 00:02:19.918553 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 9 00:02:19.918561 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 9 00:02:19.918569 kernel: active return thunk: srso_return_thunk Sep 9 00:02:19.918577 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 9 00:02:19.918588 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 9 00:02:19.918596 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 9 00:02:19.918604 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 9 00:02:19.918612 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 9 00:02:19.918620 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 9 00:02:19.918628 kernel: Freeing SMP alternatives memory: 32K Sep 9 00:02:19.918636 kernel: pid_max: default: 32768 minimum: 301 Sep 9 00:02:19.918644 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 9 00:02:19.918651 kernel: landlock: Up and running. Sep 9 00:02:19.918662 kernel: SELinux: Initializing. Sep 9 00:02:19.918679 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 00:02:19.918687 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 00:02:19.918695 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 9 00:02:19.918703 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 00:02:19.918711 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 00:02:19.918719 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 00:02:19.918730 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 9 00:02:19.918738 kernel: ... version: 0 Sep 9 00:02:19.918749 kernel: ... bit width: 48 Sep 9 00:02:19.918757 kernel: ... generic registers: 6 Sep 9 00:02:19.918765 kernel: ... value mask: 0000ffffffffffff Sep 9 00:02:19.918772 kernel: ... max period: 00007fffffffffff Sep 9 00:02:19.918780 kernel: ... fixed-purpose events: 0
Sep 9 00:02:19.918788 kernel: ... event mask: 000000000000003f Sep 9 00:02:19.918796 kernel: signal: max sigframe size: 1776 Sep 9 00:02:19.918804 kernel: rcu: Hierarchical SRCU implementation. Sep 9 00:02:19.918812 kernel: rcu: Max phase no-delay instances is 400. Sep 9 00:02:19.918823 kernel: smp: Bringing up secondary CPUs ... Sep 9 00:02:19.918831 kernel: smpboot: x86: Booting SMP configuration: Sep 9 00:02:19.918839 kernel: .... node #0, CPUs: #1 #2 #3 Sep 9 00:02:19.918846 kernel: smp: Brought up 1 node, 4 CPUs Sep 9 00:02:19.918854 kernel: smpboot: Max logical packages: 1 Sep 9 00:02:19.918862 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 9 00:02:19.918870 kernel: devtmpfs: initialized Sep 9 00:02:19.918878 kernel: x86/mm: Memory block size: 128MB Sep 9 00:02:19.918886 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 9 00:02:19.918896 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 9 00:02:19.918904 kernel: pinctrl core: initialized pinctrl subsystem Sep 9 00:02:19.918913 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 9 00:02:19.918922 kernel: audit: initializing netlink subsys (disabled) Sep 9 00:02:19.918931 kernel: audit: type=2000 audit(1757376139.873:1): state=initialized audit_enabled=0 res=1 Sep 9 00:02:19.918939 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 9 00:02:19.918949 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 9 00:02:19.918957 kernel: cpuidle: using governor menu Sep 9 00:02:19.918967 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 9 00:02:19.918978 kernel: dca service started, version 1.12.1 Sep 9 00:02:19.918986 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 9 00:02:19.918994 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Sep 9 00:02:19.919002 kernel: PCI: Using configuration type 1 for base access Sep 9 00:02:19.919010 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
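
The BogoMIPS figure above is consistent with the printed lpj: with the usual HZ=1000, 2794748 / (500000/1000) = 5589.496 per CPU, and 4 CPUs × 5589.496 ≈ 22357.98, matching the SMP bring-up total. The mitigation state logged here (Spectre V1/V2, RETBleed, SRSO) is also exported at runtime through the standard sysfs interface:

    $ grep . /sys/devices/system/cpu/vulnerabilities/*
    /sys/devices/system/cpu/vulnerabilities/retbleed:Mitigation: untrained return thunk
    (one line per vulnerability, matching the dmesg output above)
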
Sep 9 00:02:19.919018 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 9 00:02:19.919026 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 9 00:02:19.919034 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 9 00:02:19.919042 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 9 00:02:19.919053 kernel: ACPI: Added _OSI(Module Device) Sep 9 00:02:19.919061 kernel: ACPI: Added _OSI(Processor Device) Sep 9 00:02:19.919069 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 9 00:02:19.919076 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 9 00:02:19.919097 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 9 00:02:19.919104 kernel: ACPI: Interpreter enabled Sep 9 00:02:19.919112 kernel: ACPI: PM: (supports S0 S3 S5) Sep 9 00:02:19.919120 kernel: ACPI: Using IOAPIC for interrupt routing Sep 9 00:02:19.919128 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 9 00:02:19.919140 kernel: PCI: Using E820 reservations for host bridge windows Sep 9 00:02:19.919148 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 9 00:02:19.919156 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 9 00:02:19.919396 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 9 00:02:19.919551 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 9 00:02:19.919697 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 9 00:02:19.919708 kernel: PCI host bridge to bus 0000:00 Sep 9 00:02:19.919868 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 9 00:02:19.919992 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 9 00:02:19.920130 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 9 00:02:19.920254 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Sep 9 00:02:19.920376 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 9 00:02:19.920496 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Sep 9 00:02:19.920617 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 9 00:02:19.920793 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 9 00:02:19.920945 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 9 00:02:19.921095 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Sep 9 00:02:19.921232 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Sep 9 00:02:19.921364 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Sep 9 00:02:19.921494 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 9 00:02:19.921660 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 9 00:02:19.921810 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Sep 9 00:02:19.921946 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Sep 9 00:02:19.922077 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Sep 9 00:02:19.922251 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 9 00:02:19.922383 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Sep 9 00:02:19.922516 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Sep 9 00:02:19.922655 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 9 00:02:19.922818 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 9 00:02:19.922958 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Sep 9 00:02:19.923116 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Sep 9 00:02:19.923252 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Sep 9 00:02:19.923385 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Sep 9 00:02:19.923539 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 9 00:02:19.923690 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 9 00:02:19.923843 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 9 00:02:19.923976 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Sep 9 00:02:19.924136 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Sep 9 00:02:19.924290 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 9 00:02:19.924423 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Sep 9 00:02:19.924434 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 9 00:02:19.924448 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 9 00:02:19.924456 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 9 00:02:19.924464 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 9 00:02:19.924472 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 9 00:02:19.924480 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 9 00:02:19.924488 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 9 00:02:19.924496 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 9 00:02:19.924504 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 9 00:02:19.924512 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 9 00:02:19.924522 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 9 00:02:19.924530 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 9 00:02:19.924539 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 9 00:02:19.924547 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 9 00:02:19.924554 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 9 00:02:19.924562 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 9 00:02:19.924570 kernel: iommu: Default domain type: Translated Sep 9 00:02:19.924578 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 9 00:02:19.924586 kernel: PCI: Using ACPI for IRQ routing Sep 9 00:02:19.924597 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 9 00:02:19.924605 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 9 00:02:19.924613 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Sep 9 00:02:19.924757 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 9 00:02:19.924889 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 9 00:02:19.925021 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 9 00:02:19.925031 kernel: vgaarb: loaded Sep 9 00:02:19.925039 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 9 00:02:19.925051 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 9 00:02:19.925060 kernel: clocksource: Switched to clocksource kvm-clock Sep 9 00:02:19.925067 kernel: VFS: Disk quotas dquot_6.6.0 Sep 9 00:02:19.925076 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 9 00:02:19.925098 kernel: pnp: PnP ACPI init
Sep 9 00:02:19.925274 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 9 00:02:19.925287 kernel: pnp: PnP ACPI: found 6 devices Sep 9 00:02:19.925295 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 9 00:02:19.925308 kernel: NET: Registered PF_INET protocol family Sep 9 00:02:19.925316 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 9 00:02:19.925324 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 9 00:02:19.925332 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 9 00:02:19.925340 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 9 00:02:19.925348 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 9 00:02:19.925356 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 9 00:02:19.925364 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 00:02:19.925372 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 00:02:19.925383 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 9 00:02:19.925391 kernel: NET: Registered PF_XDP protocol family Sep 9 00:02:19.925514 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 9 00:02:19.925635 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 9 00:02:19.925768 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 9 00:02:19.925890 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Sep 9 00:02:19.926019 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 9 00:02:19.926225 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Sep 9 00:02:19.926242 kernel: PCI: CLS 0 bytes, default 64 Sep 9 00:02:19.926251 kernel: Initialise system trusted keyrings Sep 9 00:02:19.926259 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 9 00:02:19.926267 kernel: Key type asymmetric registered Sep 9 00:02:19.926275 kernel: Asymmetric key parser 'x509' registered Sep 9 00:02:19.926283 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 9 00:02:19.926291 kernel: io scheduler mq-deadline registered Sep 9 00:02:19.926299 kernel: io scheduler kyber registered Sep 9 00:02:19.926307 kernel: io scheduler bfq registered Sep 9 00:02:19.926318 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 9 00:02:19.926326 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 9 00:02:19.926334 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 9 00:02:19.926342 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 9 00:02:19.926350 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 00:02:19.926358 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 9 00:02:19.926366 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 9 00:02:19.926374 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 9 00:02:19.926382 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 9 00:02:19.926539 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 9 00:02:19.926556 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 9 00:02:19.926689 kernel: rtc_cmos 00:04: registered as rtc0 Sep 9 00:02:19.926701 kernel: hpet: Lost 1 RTC interrupts
Sep 9 00:02:19.926823 kernel: rtc_cmos 00:04: setting system clock to 2025-09-09T00:02:19 UTC (1757376139) Sep 9 00:02:19.926947 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 9 00:02:19.926958 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 9 00:02:19.926966 kernel: NET: Registered PF_INET6 protocol family Sep 9 00:02:19.926978 kernel: Segment Routing with IPv6 Sep 9 00:02:19.926986 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 00:02:19.926994 kernel: NET: Registered PF_PACKET protocol family Sep 9 00:02:19.927002 kernel: Key type dns_resolver registered Sep 9 00:02:19.927010 kernel: IPI shorthand broadcast: enabled Sep 9 00:02:19.927018 kernel: sched_clock: Marking stable (924002883, 124105475)->(1068663315, -20554957) Sep 9 00:02:19.927026 kernel: registered taskstats version 1 Sep 9 00:02:19.927034 kernel: Loading compiled-in X.509 certificates Sep 9 00:02:19.927042 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: c16a276a56169aed770943c7e14b6e7e5f4f7133' Sep 9 00:02:19.927052 kernel: Key type .fscrypt registered Sep 9 00:02:19.927060 kernel: Key type fscrypt-provisioning registered Sep 9 00:02:19.927068 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 9 00:02:19.927076 kernel: ima: Allocated hash algorithm: sha1 Sep 9 00:02:19.927097 kernel: ima: No architecture policies found Sep 9 00:02:19.927105 kernel: clk: Disabling unused clocks Sep 9 00:02:19.927113 kernel: Freeing unused kernel image (initmem) memory: 43504K Sep 9 00:02:19.927121 kernel: Write protecting the kernel read-only data: 38912k Sep 9 00:02:19.927129 kernel: Freeing unused kernel image (rodata/data gap) memory: 1708K Sep 9 00:02:19.927140 kernel: Run /init as init process Sep 9 00:02:19.927148 kernel: with arguments: Sep 9 00:02:19.927156 kernel: /init Sep 9 00:02:19.927163 kernel: with environment: Sep 9 00:02:19.927171 kernel: HOME=/ Sep 9 00:02:19.927179 kernel: TERM=linux Sep 9 00:02:19.927187 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 00:02:19.927196 systemd[1]: Successfully made /usr/ read-only. Sep 9 00:02:19.927207 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 00:02:19.927219 systemd[1]: Detected virtualization kvm. Sep 9 00:02:19.927228 systemd[1]: Detected architecture x86-64. Sep 9 00:02:19.927236 systemd[1]: Running in initrd. Sep 9 00:02:19.927244 systemd[1]: No hostname configured, using default hostname. Sep 9 00:02:19.927253 systemd[1]: Hostname set to . Sep 9 00:02:19.927261 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:02:19.927270 systemd[1]: Queued start job for default target initrd.target. Sep 9 00:02:19.927282 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:02:19.927303 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:02:19.927315 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 00:02:19.927324 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:02:19.927333 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
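
The escaped device unit names above (dev-disk-by\x2dlabel-...) are systemd's standard path escaping, where "/" becomes "-" and a literal "-" becomes "\x2d"; the mapping can be reproduced with systemd-escape:

    $ systemd-escape -p --suffix=device /dev/disk/by-label/EFI-SYSTEM
    dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
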
Sep 9 00:02:19.927345 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 00:02:19.927356 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 00:02:19.927364 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 00:02:19.927373 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:02:19.927382 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:02:19.927391 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:02:19.927400 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:02:19.927408 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:02:19.927420 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:02:19.927429 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:02:19.927438 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:02:19.927447 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 00:02:19.927455 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 9 00:02:19.927464 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:02:19.927473 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 00:02:19.927482 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:02:19.927490 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:02:19.927502 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 00:02:19.927511 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:02:19.927519 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 00:02:19.927528 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 00:02:19.927537 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:02:19.927545 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:02:19.927554 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:02:19.927563 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 00:02:19.927575 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:02:19.927585 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 00:02:19.927624 systemd-journald[194]: Collecting audit messages is disabled. Sep 9 00:02:19.927649 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 00:02:19.927658 systemd-journald[194]: Journal started Sep 9 00:02:19.927689 systemd-journald[194]: Runtime Journal (/run/log/journal/b4f9c58053ac40b492ab3b8153139afe) is 6M, max 48.4M, 42.3M free. Sep 9 00:02:19.909351 systemd-modules-load[195]: Inserted module 'overlay' Sep 9 00:02:19.946374 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 00:02:19.946401 kernel: Bridge firewalling registered Sep 9 00:02:19.936977 systemd-modules-load[195]: Inserted module 'br_netfilter' Sep 9 00:02:19.949023 systemd[1]: Started systemd-journald.service - Journal Service. 
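
The bridge-filtering notice above means iptables/ip6tables no longer see bridged traffic unless br_netfilter is loaded explicitly. A sketch of the usual remedy (module name and sysctls are standard; the modules-load.d path for persistence is the common convention):

    $ sudo modprobe br_netfilter
    $ sysctl net.bridge.bridge-nf-call-iptables                        # now exists, defaults to 1
    $ echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf   # persist across reboots
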
Sep 9 00:02:19.949518 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:02:19.951852 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:02:19.954203 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 00:02:19.968253 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 00:02:19.969192 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:02:19.970391 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:02:19.974578 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:02:19.982681 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:02:19.986790 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:02:19.993360 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:02:20.000214 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 00:02:20.000507 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:02:20.004451 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 00:02:20.016594 dracut-cmdline[228]: dracut-dracut-053 Sep 9 00:02:20.019894 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=614c4ef85422d1b24559f161a4ad89cb626bb862dd1c761ed2d77c8a0665a1ae Sep 9 00:02:20.046300 systemd-resolved[230]: Positive Trust Anchors: Sep 9 00:02:20.046316 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:02:20.046348 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:02:20.048981 systemd-resolved[230]: Defaulting to hostname 'linux'. Sep 9 00:02:20.050385 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:02:20.056496 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:02:20.124121 kernel: SCSI subsystem initialized Sep 9 00:02:20.134114 kernel: Loading iSCSI transport class v2.0-870. Sep 9 00:02:20.144109 kernel: iscsi: registered transport (tcp) Sep 9 00:02:20.166676 kernel: iscsi: registered transport (qla4xxx) Sep 9 00:02:20.166710 kernel: QLogic iSCSI HBA Driver Sep 9 00:02:20.226723 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 00:02:20.236235 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 00:02:20.263490 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 00:02:20.263526 kernel: device-mapper: uevent: version 1.0.3 Sep 9 00:02:20.264515 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 9 00:02:20.347130 kernel: raid6: avx2x4 gen() 30149 MB/s Sep 9 00:02:20.371148 kernel: raid6: avx2x2 gen() 25857 MB/s Sep 9 00:02:20.388404 kernel: raid6: avx2x1 gen() 22920 MB/s Sep 9 00:02:20.388481 kernel: raid6: using algorithm avx2x4 gen() 30149 MB/s Sep 9 00:02:20.406473 kernel: raid6: .... xor() 7198 MB/s, rmw enabled Sep 9 00:02:20.406568 kernel: raid6: using avx2x2 recovery algorithm Sep 9 00:02:20.434138 kernel: xor: automatically using best checksumming function avx Sep 9 00:02:20.601137 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 00:02:20.614759 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:02:20.626235 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:02:20.642321 systemd-udevd[414]: Using default interface naming scheme 'v255'. Sep 9 00:02:20.648198 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:02:20.656312 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 00:02:20.670848 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Sep 9 00:02:20.706184 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:02:20.720345 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:02:20.802217 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:02:20.815224 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 9 00:02:20.828767 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 00:02:20.832072 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:02:20.834734 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:02:20.837133 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:02:20.848252 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 00:02:20.856116 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 9 00:02:20.858706 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:02:20.864103 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 9 00:02:20.866452 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 00:02:20.866480 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 00:02:20.869110 kernel: GPT:9289727 != 19775487 Sep 9 00:02:20.869142 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 00:02:20.869154 kernel: GPT:9289727 != 19775487 Sep 9 00:02:20.869164 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 00:02:20.869175 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:02:20.886124 kernel: AVX2 version of gcm_enc/dec engaged. Sep 9 00:02:20.886208 kernel: libata version 3.00 loaded. Sep 9 00:02:20.890147 kernel: AES CTR mode by8 optimization enabled Sep 9 00:02:20.894288 kernel: ahci 0000:00:1f.2: version 3.0 Sep 9 00:02:20.894571 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 9 00:02:20.900279 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
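
The GPT complaints above (9289727 != 19775487) are the classic signature of an image written to a larger disk: the backup GPT header still sits where the old end of the disk was. Flatcar normally reconciles this itself when it grows the ROOT partition on first boot; done by hand, a sketch with either standard tool (inspect first, then fix):

    $ sudo parted /dev/vda print      # interactive parted offers to fix the backup header
    $ sudo sgdisk -e /dev/vda         # non-interactive: relocate backup GPT structures to the disk's true end
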
Sep 9 00:02:20.904412 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 9 00:02:20.904612 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 9 00:02:20.900595 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:02:20.907536 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 00:02:20.910716 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:02:20.915492 kernel: BTRFS: device fsid 49c9ae6f-f48b-4b7d-8773-9ddfd8ce7dbf devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (465) Sep 9 00:02:20.910858 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:02:20.915660 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:02:20.930422 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (475) Sep 9 00:02:20.930443 kernel: scsi host0: ahci Sep 9 00:02:20.932396 kernel: scsi host1: ahci Sep 9 00:02:20.934104 kernel: scsi host2: ahci Sep 9 00:02:20.935098 kernel: scsi host3: ahci Sep 9 00:02:20.935279 kernel: scsi host4: ahci Sep 9 00:02:21.007108 kernel: scsi host5: ahci Sep 9 00:02:21.007383 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Sep 9 00:02:21.007467 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:02:21.014850 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Sep 9 00:02:21.014874 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Sep 9 00:02:21.014886 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Sep 9 00:02:21.014897 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Sep 9 00:02:21.014907 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Sep 9 00:02:21.038892 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 9 00:02:21.059733 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:02:21.074242 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 9 00:02:21.074456 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 9 00:02:21.088155 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 9 00:02:21.106550 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 00:02:21.121573 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 00:02:21.125191 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 00:02:21.134132 disk-uuid[556]: Primary Header is updated. Sep 9 00:02:21.134132 disk-uuid[556]: Secondary Entries is updated. Sep 9 00:02:21.134132 disk-uuid[556]: Secondary Header is updated. Sep 9 00:02:21.139133 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:02:21.144003 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 9 00:02:21.146666 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:02:21.437116 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 9 00:02:21.437178 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 9 00:02:21.437190 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 9 00:02:21.438128 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 9 00:02:21.439127 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 9 00:02:21.440112 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 9 00:02:21.440141 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 9 00:02:21.441257 kernel: ata3.00: applying bridge limits Sep 9 00:02:21.441328 kernel: ata3.00: configured for UDMA/100 Sep 9 00:02:21.443114 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 9 00:02:21.487649 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 9 00:02:21.487920 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 9 00:02:21.502106 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 9 00:02:22.152422 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:02:22.152488 disk-uuid[561]: The operation has completed successfully. Sep 9 00:02:22.186852 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 00:02:22.186979 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 00:02:22.234314 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 00:02:22.244799 sh[593]: Success Sep 9 00:02:22.257107 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 9 00:02:22.297919 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 00:02:22.317120 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 00:02:22.322719 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 9 00:02:22.335507 kernel: BTRFS info (device dm-0): first mount of filesystem 49c9ae6f-f48b-4b7d-8773-9ddfd8ce7dbf Sep 9 00:02:22.335559 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:02:22.335585 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 9 00:02:22.336792 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 00:02:22.337732 kernel: BTRFS info (device dm-0): using free space tree Sep 9 00:02:22.343866 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 00:02:22.345970 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 00:02:22.351384 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 00:02:22.352581 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 00:02:22.376134 kernel: BTRFS info (device vda6): first mount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 9 00:02:22.376200 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:02:22.376215 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:02:22.380146 kernel: BTRFS info (device vda6): auto enabling async discard Sep 9 00:02:22.386122 kernel: BTRFS info (device vda6): last unmount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 9 00:02:22.493423 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
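
verity-setup.service assembled /dev/mapper/usr from the verity.usr=PARTUUID=... data partition and the verity.usrhash root hash on the kernel command line (the sha256-ni implementation noted above). Its state can be inspected afterwards; a sketch, assuming the device-mapper name "usr" as in this log:

    $ sudo veritysetup status usr     # type, status, data/hash devices, root hash
    $ sudo dmsetup table usr          # the raw dm-verity mapping line
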
Sep 9 00:02:22.518425 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:02:22.549632 systemd-networkd[769]: lo: Link UP Sep 9 00:02:22.549646 systemd-networkd[769]: lo: Gained carrier Sep 9 00:02:22.552039 systemd-networkd[769]: Enumeration completed Sep 9 00:02:22.552206 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:02:22.552547 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:02:22.552553 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:02:22.553852 systemd-networkd[769]: eth0: Link UP Sep 9 00:02:22.553857 systemd-networkd[769]: eth0: Gained carrier Sep 9 00:02:22.553867 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:02:22.555967 systemd[1]: Reached target network.target - Network. Sep 9 00:02:22.571157 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.143/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:02:22.662030 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 00:02:22.702395 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 9 00:02:22.896530 ignition[774]: Ignition 2.20.0 Sep 9 00:02:22.896548 ignition[774]: Stage: fetch-offline Sep 9 00:02:22.896607 ignition[774]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:02:22.896618 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:02:22.896779 ignition[774]: parsed url from cmdline: "" Sep 9 00:02:22.896785 ignition[774]: no config URL provided Sep 9 00:02:22.896793 ignition[774]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 00:02:22.896807 ignition[774]: no config at "/usr/lib/ignition/user.ign" Sep 9 00:02:22.896850 ignition[774]: op(1): [started] loading QEMU firmware config module Sep 9 00:02:22.896858 ignition[774]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 00:02:22.907122 ignition[774]: op(1): [finished] loading QEMU firmware config module Sep 9 00:02:22.947054 ignition[774]: parsing config with SHA512: 405c4cda0b985f5e8a7f8873e2302bee56fb2e69c476be668648564db7067446798bfbe54832e9c17285b6b59e9f859e7256af321ff4db2910f4d27cc2f9f33b Sep 9 00:02:22.953752 unknown[774]: fetched base config from "system" Sep 9 00:02:22.954778 unknown[774]: fetched user config from "qemu" Sep 9 00:02:22.956305 ignition[774]: fetch-offline: fetch-offline passed Sep 9 00:02:22.956518 ignition[774]: Ignition finished successfully Sep 9 00:02:22.960709 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:02:22.961132 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 00:02:22.970412 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 00:02:23.003969 ignition[784]: Ignition 2.20.0 Sep 9 00:02:23.003999 ignition[784]: Stage: kargs Sep 9 00:02:23.004320 ignition[784]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:02:23.004336 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:02:23.006762 ignition[784]: kargs: kargs passed Sep 9 00:02:23.006853 ignition[784]: Ignition finished successfully Sep 9 00:02:23.012189 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
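
Stage fetch-offline found no config URL on the command line and instead pulled the Ignition config through QEMU's firmware config device (hence the "modprobe qemu_fw_cfg" above), then checked it by SHA512. A sketch of pre-validating and supplying such a config (fw_cfg key as documented for Flatcar and Fedora CoreOS; file names are illustrative):

    $ ignition-validate config.ign
    $ qemu-system-x86_64 -m 2048 -drive file=flatcar.img,if=virtio \
        -fw_cfg name=opt/com.coreos/config,file=config.ign
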
Sep 9 00:02:23.020422 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 9 00:02:23.036435 ignition[793]: Ignition 2.20.0 Sep 9 00:02:23.036450 ignition[793]: Stage: disks Sep 9 00:02:23.036682 ignition[793]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:02:23.036699 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:02:23.037855 ignition[793]: disks: disks passed Sep 9 00:02:23.040349 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 00:02:23.037919 ignition[793]: Ignition finished successfully Sep 9 00:02:23.041907 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 00:02:23.043650 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 00:02:23.045589 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:02:23.047589 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:02:23.049721 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:02:23.060385 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 00:02:23.119821 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 9 00:02:23.294250 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 00:02:23.312271 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 00:02:23.404106 kernel: EXT4-fs (vda9): mounted filesystem 4436772e-5166-41e3-9cb5-50bbb91cbcf6 r/w with ordered data mode. Quota mode: none. Sep 9 00:02:23.404383 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 00:02:23.405036 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 00:02:23.416188 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 00:02:23.418139 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 00:02:23.419693 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 9 00:02:23.425395 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (811) Sep 9 00:02:23.425420 kernel: BTRFS info (device vda6): first mount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 9 00:02:23.419737 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 00:02:23.431409 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:02:23.431433 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:02:23.419766 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:02:23.434646 kernel: BTRFS info (device vda6): auto enabling async discard Sep 9 00:02:23.428984 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 00:02:23.432504 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 00:02:23.436145 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
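
systemd-fsck reported ROOT clean (14/553520 files, 52654/553472 blocks) before /sysroot was mounted. The same layout can be checked from the running system; a sketch, assuming the virtio disk name vda from this log:

    $ lsblk -o NAME,LABEL,FSTYPE,SIZE /dev/vda
    $ sudo e2fsck -n /dev/disk/by-label/ROOT   # -n: read-only check; run a real repair only on an unmounted fs
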
Sep 9 00:02:23.474025 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 00:02:23.479801 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Sep 9 00:02:23.484030 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 00:02:23.488498 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 00:02:23.584669 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 00:02:23.597207 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 00:02:23.599120 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 00:02:23.605823 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 00:02:23.607944 kernel: BTRFS info (device vda6): last unmount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 9 00:02:23.630931 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 9 00:02:23.637701 ignition[927]: INFO : Ignition 2.20.0 Sep 9 00:02:23.637701 ignition[927]: INFO : Stage: mount Sep 9 00:02:23.639348 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:02:23.639348 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:02:23.639348 ignition[927]: INFO : mount: mount passed Sep 9 00:02:23.639348 ignition[927]: INFO : Ignition finished successfully Sep 9 00:02:23.644900 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 00:02:23.658228 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 00:02:24.304365 systemd-networkd[769]: eth0: Gained IPv6LL Sep 9 00:02:24.414243 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 00:02:24.422575 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (941) Sep 9 00:02:24.422607 kernel: BTRFS info (device vda6): first mount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 9 00:02:24.422619 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:02:24.423397 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:02:24.427109 kernel: BTRFS info (device vda6): auto enabling async discard Sep 9 00:02:24.428529 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
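
eth0 was configured by systemd-networkd from zz-default.network (DHCPv4 10.0.0.143/16 earlier, IPv6 link-local "Gained IPv6LL" above); the equivalent runtime view:

    $ networkctl status eth0              # addresses, gateway, lease details
    $ journalctl -b -u systemd-networkd   # the same events as captured in this log
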
Sep 9 00:02:24.452075 ignition[958]: INFO : Ignition 2.20.0
Sep 9 00:02:24.452075 ignition[958]: INFO : Stage: files
Sep 9 00:02:24.454123 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:02:24.454123 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:02:24.454123 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 00:02:24.458942 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 00:02:24.458942 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 00:02:24.462251 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 00:02:24.463827 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 00:02:24.465147 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 00:02:24.464481 unknown[958]: wrote ssh authorized keys file for user: core
Sep 9 00:02:24.467919 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 9 00:02:24.467919 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 9 00:02:24.513433 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 00:02:24.747786 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 9 00:02:24.747786 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 00:02:24.751927 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 9 00:02:25.001619 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 9 00:02:25.134418 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 00:02:25.134418 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 00:02:25.138597 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 00:02:25.138597 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 00:02:25.138597 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 00:02:25.138597 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 00:02:25.138597 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 00:02:25.138597 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 00:02:25.138597 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 00:02:25.138597 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 00:02:25.138597 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 00:02:25.138597 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 9 00:02:25.138597 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 9 00:02:25.138597 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 9 00:02:25.138597 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 9 00:02:25.566146 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 9 00:02:26.032424 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 9 00:02:26.032424 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 9 00:02:26.036867 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 00:02:26.036867 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 00:02:26.036867 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 9 00:02:26.036867 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 9 00:02:26.036867 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 00:02:26.036867 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 00:02:26.036867 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 9 00:02:26.036867 ignition[958]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 9 00:02:26.060381 ignition[958]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 00:02:26.065564 ignition[958]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 00:02:26.067324 ignition[958]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 9 00:02:26.067324 ignition[958]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 00:02:26.067324 ignition[958]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 00:02:26.067324 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 00:02:26.067324 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 00:02:26.067324 ignition[958]: INFO : files: files passed
Sep 9 00:02:26.067324 ignition[958]: INFO : Ignition finished successfully
Sep 9 00:02:26.069315 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 00:02:26.081297 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 00:02:26.083585 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 00:02:26.085382 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 00:02:26.085524 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 00:02:26.095000 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 9 00:02:26.098353 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:02:26.098353 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:02:26.103136 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:02:26.101278 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 00:02:26.103418 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 00:02:26.120437 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 00:02:26.146643 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 00:02:26.146823 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 00:02:26.149580 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 00:02:26.151136 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 00:02:26.188188 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 00:02:26.195244 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 00:02:26.210775 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 00:02:26.213721 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 00:02:26.228760 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:02:26.231579 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:02:26.233245 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 00:02:26.235475 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 00:02:26.235718 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 00:02:26.238226 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 00:02:26.239786 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 00:02:26.241822 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 00:02:26.243832 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 00:02:26.245873 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 00:02:26.248013 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 00:02:26.250128 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 00:02:26.252373 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 00:02:26.254298 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 00:02:26.256514 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 00:02:26.258235 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 00:02:26.258445 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 00:02:26.260579 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:02:26.261982 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:02:26.263999 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 00:02:26.264193 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:02:26.266216 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 00:02:26.266436 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 00:02:26.268583 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 00:02:26.268763 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 00:02:26.270530 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 00:02:26.272195 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 00:02:26.276169 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:02:26.277727 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 00:02:26.279669 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 00:02:26.281445 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 00:02:26.281603 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 00:02:26.283529 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 00:02:26.283636 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 00:02:26.285925 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 00:02:26.286064 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 00:02:26.288011 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 00:02:26.288239 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 00:02:26.296301 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 00:02:26.298854 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 00:02:26.300706 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 00:02:26.324585 ignition[1012]: INFO : Ignition 2.20.0
Sep 9 00:02:26.324585 ignition[1012]: INFO : Stage: umount
Sep 9 00:02:26.324585 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:02:26.324585 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:02:26.324585 ignition[1012]: INFO : umount: umount passed
Sep 9 00:02:26.324585 ignition[1012]: INFO : Ignition finished successfully
Sep 9 00:02:26.300838 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 00:02:26.322452 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 00:02:26.322600 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 00:02:26.327985 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 00:02:26.328169 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 00:02:26.333871 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 00:02:26.334562 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 00:02:26.336640 systemd[1]: Stopped target network.target - Network.
Sep 9 00:02:26.337724 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 00:02:26.337801 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 00:02:26.339682 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 00:02:26.339745 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 00:02:26.341618 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 00:02:26.341681 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 00:02:26.343868 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 00:02:26.343942 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 00:02:26.345831 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 00:02:26.347827 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 00:02:26.350976 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 00:02:26.352795 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 00:02:26.352942 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 00:02:26.357218 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 00:02:26.357519 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 00:02:26.357648 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 00:02:26.361010 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 9 00:02:26.361876 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 00:02:26.361956 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:02:26.376166 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 00:02:26.378003 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 00:02:26.378062 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 00:02:26.380254 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 00:02:26.380311 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:02:26.382786 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 00:02:26.382837 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 00:02:26.384739 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 00:02:26.384788 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 00:02:26.387029 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:02:26.390825 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 00:02:26.390893 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 00:02:26.404201 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 00:02:26.404356 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 00:02:26.407458 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 00:02:26.408500 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 00:02:26.412155 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 00:02:26.413128 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:02:26.415181 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 00:02:26.415228 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 00:02:26.418059 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 00:02:26.418990 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 00:02:26.421163 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 00:02:26.422031 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 00:02:26.424074 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 00:02:26.425014 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:02:26.441234 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 00:02:26.476988 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 00:02:26.477157 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 00:02:26.480329 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 9 00:02:26.480385 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 00:02:26.482800 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 00:02:26.482858 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:02:26.485038 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 00:02:26.485106 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:02:26.489373 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 9 00:02:26.489442 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 00:02:26.489926 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 00:02:26.490063 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 00:02:27.274376 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 00:02:27.274602 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 00:02:27.277855 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 00:02:27.278029 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 00:02:27.278148 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 00:02:27.292297 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 00:02:27.302903 systemd[1]: Switching root.
Sep 9 00:02:27.342392 systemd-journald[194]: Journal stopped
Sep 9 00:02:29.041611 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
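The files-stage operations logged before this switch root map one-to-one onto sections of an Ignition config. A hedged sketch assembling an equivalent, abridged config as JSON; the paths, URLs, unit names, and presets are taken from the op() entries above, while the spec version, the ssh key, file modes, and unit contents are placeholders, since the journal records none of them (install.sh, update.conf, and the nginx/nfs manifests are omitted for brevity):

    import json

    # Reconstruction sketch only -- not the actual config this host booted with.
    config = {
        "ignition": {"version": "3.4.0"},  # assumed spec version
        "passwd": {"users": [{
            "name": "core",
            "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder)"],
        }]},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
                {"path": "/opt/bin/cilium.tar.gz",
                 "contents": {"source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw",
                 "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"},
            ],
        },
        "systemd": {"units": [
            # enabled/disabled mirror the op(10)/op(12) preset entries above.
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=placeholder\n"},
            {"name": "coreos-metadata.service", "enabled": False,
             "contents": "[Unit]\nDescription=placeholder\n"},
        ]},
    }

    print(json.dumps(config, indent=2))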
Sep 9 00:02:29.041696 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 00:02:29.041717 kernel: SELinux: policy capability open_perms=1
Sep 9 00:02:29.041734 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 00:02:29.041746 kernel: SELinux: policy capability always_check_network=0
Sep 9 00:02:29.041758 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 00:02:29.041770 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 00:02:29.041783 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 00:02:29.041798 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 00:02:29.041810 kernel: audit: type=1403 audit(1757376147.982:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 00:02:29.041823 systemd[1]: Successfully loaded SELinux policy in 48.005ms.
Sep 9 00:02:29.041845 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.347ms.
Sep 9 00:02:29.041859 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 00:02:29.041872 systemd[1]: Detected virtualization kvm.
Sep 9 00:02:29.041885 systemd[1]: Detected architecture x86-64.
Sep 9 00:02:29.041897 systemd[1]: Detected first boot.
Sep 9 00:02:29.041910 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 00:02:29.041926 zram_generator::config[1058]: No configuration found.
Sep 9 00:02:29.041939 kernel: Guest personality initialized and is inactive
Sep 9 00:02:29.041951 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 9 00:02:29.041963 kernel: Initialized host personality
Sep 9 00:02:29.041975 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 00:02:29.041989 systemd[1]: Populated /etc with preset unit settings.
Sep 9 00:02:29.042003 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 9 00:02:29.042017 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 00:02:29.042032 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 00:02:29.042045 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 00:02:29.042058 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 00:02:29.042071 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 00:02:29.046273 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 00:02:29.046302 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 00:02:29.046322 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 00:02:29.046335 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 00:02:29.046348 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 00:02:29.046366 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 00:02:29.046391 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:02:29.046428 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
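In the systemd 256.8 banner above, each +NAME or -NAME token marks a build-time feature as compiled in or out. A small sketch splitting the token list (abridged here) into the two sets:

    # Abridged from the banner logged above; the full string has ~35 tokens.
    tokens = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
              "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2").split()
    enabled = {t[1:] for t in tokens if t.startswith("+")}
    disabled = {t[1:] for t in tokens if t.startswith("-")}
    print("enabled: ", sorted(enabled))
    print("disabled:", sorted(disabled))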
Sep 9 00:02:29.046633 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 00:02:29.046660 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 00:02:29.046674 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 00:02:29.046687 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 00:02:29.046704 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 9 00:02:29.046720 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:02:29.046759 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 00:02:29.046991 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 00:02:29.047021 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 00:02:29.047035 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 00:02:29.047048 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:02:29.047061 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 00:02:29.047073 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 00:02:29.047101 systemd[1]: Reached target swap.target - Swaps.
Sep 9 00:02:29.047124 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 00:02:29.047314 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 00:02:29.052242 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 9 00:02:29.052296 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:02:29.052311 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:02:29.052329 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 00:02:29.052343 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 00:02:29.052356 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 00:02:29.052370 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 00:02:29.052390 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 00:02:29.052412 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:02:29.052432 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 00:02:29.052447 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 00:02:29.052460 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 00:02:29.052474 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 00:02:29.052490 systemd[1]: Reached target machines.target - Containers.
Sep 9 00:02:29.052503 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 00:02:29.052520 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 00:02:29.052536 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 00:02:29.052553 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 00:02:29.052566 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 00:02:29.052579 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 00:02:29.052592 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 00:02:29.052605 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 00:02:29.052618 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 00:02:29.052631 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 00:02:29.052647 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 00:02:29.052660 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 00:02:29.052672 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 00:02:29.052685 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 00:02:29.052699 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 00:02:29.052712 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 00:02:29.052725 kernel: loop: module loaded
Sep 9 00:02:29.052739 kernel: fuse: init (API version 7.39)
Sep 9 00:02:29.052753 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 00:02:29.052767 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 00:02:29.052780 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 00:02:29.052800 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 9 00:02:29.052812 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 00:02:29.052826 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 00:02:29.052840 systemd[1]: Stopped verity-setup.service.
Sep 9 00:02:29.052853 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:02:29.052869 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 00:02:29.052882 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 00:02:29.052896 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 00:02:29.052908 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 00:02:29.052921 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 00:02:29.052936 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 00:02:29.052950 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:02:29.052965 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 00:02:29.052978 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 00:02:29.052990 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:02:29.053003 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 00:02:29.053018 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:02:29.053032 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 00:02:29.053044 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 00:02:29.053057 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 00:02:29.053069 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:02:29.053110 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 00:02:29.053125 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 00:02:29.053151 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 00:02:29.053168 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 00:02:29.053185 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 00:02:29.053209 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 00:02:29.053222 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 00:02:29.053235 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 00:02:29.053248 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 00:02:29.053261 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 00:02:29.053277 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 00:02:29.053290 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 00:02:29.053303 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 9 00:02:29.053315 kernel: ACPI: bus type drm_connector registered
Sep 9 00:02:29.053328 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 00:02:29.053341 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 00:02:29.053354 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:02:29.053370 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 00:02:29.053383 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 00:02:29.053445 systemd-journald[1122]: Collecting audit messages is disabled.
Sep 9 00:02:29.053472 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 00:02:29.053485 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 00:02:29.053498 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 00:02:29.053511 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 00:02:29.053529 systemd-journald[1122]: Journal started
Sep 9 00:02:29.053553 systemd-journald[1122]: Runtime Journal (/run/log/journal/b4f9c58053ac40b492ab3b8153139afe) is 6M, max 48.4M, 42.3M free.
Sep 9 00:02:28.613001 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 00:02:28.626724 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 9 00:02:28.627369 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 00:02:29.056115 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 00:02:29.060668 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 00:02:29.062611 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 00:02:29.063135 kernel: loop0: detected capacity change from 0 to 221472
Sep 9 00:02:29.066426 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 9 00:02:29.068679 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:02:29.070601 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 00:02:29.072453 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 00:02:29.078077 systemd-tmpfiles[1148]: ACLs are not supported, ignoring.
Sep 9 00:02:29.078108 systemd-tmpfiles[1148]: ACLs are not supported, ignoring.
Sep 9 00:02:29.087125 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 00:02:29.092801 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 00:02:29.098316 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 00:02:29.102665 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 00:02:29.103847 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 00:02:29.113254 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 00:02:29.115704 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 9 00:02:29.118265 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 00:02:29.123444 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 9 00:02:29.130522 kernel: loop1: detected capacity change from 0 to 138176
Sep 9 00:02:29.130626 systemd-journald[1122]: Time spent on flushing to /var/log/journal/b4f9c58053ac40b492ab3b8153139afe is 23.702ms for 987 entries.
Sep 9 00:02:29.130626 systemd-journald[1122]: System Journal (/var/log/journal/b4f9c58053ac40b492ab3b8153139afe) is 8M, max 195.6M, 187.6M free.
Sep 9 00:02:29.180945 systemd-journald[1122]: Received client request to flush runtime journal.
Sep 9 00:02:29.181004 kernel: loop2: detected capacity change from 0 to 147912
Sep 9 00:02:29.161044 udevadm[1194]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 9 00:02:29.165984 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 9 00:02:29.183550 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 00:02:29.200946 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 00:02:29.221362 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 00:02:29.236165 kernel: loop3: detected capacity change from 0 to 221472
Sep 9 00:02:29.244039 systemd-tmpfiles[1202]: ACLs are not supported, ignoring.
Sep 9 00:02:29.244068 systemd-tmpfiles[1202]: ACLs are not supported, ignoring.
Sep 9 00:02:29.321172 kernel: loop4: detected capacity change from 0 to 138176
Sep 9 00:02:29.326341 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 00:02:29.337124 kernel: loop5: detected capacity change from 0 to 147912
Sep 9 00:02:29.347718 (sd-merge)[1204]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 9 00:02:29.348470 (sd-merge)[1204]: Merged extensions into '/usr'.
Sep 9 00:02:29.353905 systemd[1]: Reload requested from client PID 1163 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 00:02:29.353923 systemd[1]: Reloading...
Sep 9 00:02:29.448196 zram_generator::config[1229]: No configuration found.
Sep 9 00:02:29.637727 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:02:29.718272 ldconfig[1159]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 00:02:29.720127 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 00:02:29.720633 systemd[1]: Reloading finished in 366 ms.
Sep 9 00:02:29.740571 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 00:02:29.742323 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 00:02:29.760193 systemd[1]: Starting ensure-sysext.service...
Sep 9 00:02:29.762484 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 00:02:29.782259 systemd[1]: Reload requested from client PID 1270 ('systemctl') (unit ensure-sysext.service)...
Sep 9 00:02:29.782277 systemd[1]: Reloading...
Sep 9 00:02:29.811173 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 00:02:29.811506 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 9 00:02:29.812547 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 00:02:29.813674 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Sep 9 00:02:29.813816 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Sep 9 00:02:29.825252 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 00:02:29.825356 systemd-tmpfiles[1271]: Skipping /boot
Sep 9 00:02:29.850134 zram_generator::config[1300]: No configuration found.
Sep 9 00:02:29.948599 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 00:02:29.948895 systemd-tmpfiles[1271]: Skipping /boot
Sep 9 00:02:30.085289 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:02:30.187324 systemd[1]: Reloading finished in 404 ms.
Sep 9 00:02:30.202265 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 00:02:30.222119 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 00:02:30.242410 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 00:02:30.245156 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 00:02:30.247706 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 00:02:30.251577 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
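The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr, after which systemd reloads to pick up the merged unit files; the kubernetes image is discovered through the /etc/extensions/kubernetes.raw symlink that Ignition wrote earlier. A small illustrative sketch, assuming a running Flatcar-style host, that lists such symlinks and where they point:

    import os

    # Illustrative only: enumerate extension images referenced from /etc/extensions.
    ext_dir = "/etc/extensions"
    if os.path.isdir(ext_dir):
        for name in sorted(os.listdir(ext_dir)):
            path = os.path.join(ext_dir, name)
            # realpath resolves symlinks like kubernetes.raw ->
            # /opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw
            print(f"{path} -> {os.path.realpath(path)}")
    else:
        print(f"{ext_dir} not present on this host")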
Sep 9 00:02:30.257349 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:02:30.261396 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 00:02:30.267714 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:02:30.267912 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 00:02:30.269903 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 00:02:30.273182 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 00:02:30.277344 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 00:02:30.278590 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:02:30.278716 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 00:02:30.281812 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 00:02:30.284091 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:02:30.285703 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:02:30.285969 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 00:02:30.289417 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:02:30.289785 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 00:02:30.291716 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:02:30.292026 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 00:02:30.302312 systemd-udevd[1349]: Using default interface naming scheme 'v255'.
Sep 9 00:02:30.302745 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 9 00:02:30.310766 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 9 00:02:30.316735 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:02:30.316970 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 00:02:30.321032 augenrules[1373]: No rules
Sep 9 00:02:30.328420 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 00:02:30.331322 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 00:02:30.337181 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 00:02:30.338556 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:02:30.338676 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 00:02:30.341496 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 9 00:02:30.342716 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:02:30.344490 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 00:02:30.346628 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 9 00:02:30.348631 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 00:02:30.350422 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 00:02:30.352941 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 9 00:02:30.355954 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:02:30.356372 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 00:02:30.358393 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:02:30.358725 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 00:02:30.361061 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:02:30.361350 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 00:02:30.372944 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 9 00:02:30.409286 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:02:30.503128 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 00:02:30.504492 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 00:02:30.524558 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 00:02:30.557794 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 00:02:30.563367 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 00:02:30.567690 systemd-resolved[1343]: Positive Trust Anchors:
Sep 9 00:02:30.568072 systemd-resolved[1343]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:02:30.568186 systemd-resolved[1343]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 00:02:30.569262 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 00:02:30.580115 systemd-resolved[1343]: Defaulting to hostname 'linux'.
Sep 9 00:02:30.584706 augenrules[1412]: /sbin/augenrules: No change
Sep 9 00:02:30.581701 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:02:30.581865 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 00:02:30.588138 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1400)
Sep 9 00:02:30.605489 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 00:02:30.607153 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 00:02:30.607297 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:02:30.609656 augenrules[1439]: No rules
Sep 9 00:02:30.634350 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 00:02:30.642167 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 00:02:30.642524 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 00:02:30.656483 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:02:30.656773 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 00:02:30.658116 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 9 00:02:30.663342 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 00:02:30.663670 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 00:02:30.666778 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:02:30.667117 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 00:02:30.669118 kernel: ACPI: button: Power Button [PWRF]
Sep 9 00:02:30.672341 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:02:30.672641 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 00:02:30.688707 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 9 00:02:30.692828 systemd[1]: Finished ensure-sysext.service.
Sep 9 00:02:30.760622 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 9 00:02:30.761666 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 9 00:02:30.761955 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 9 00:02:30.748781 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 00:02:30.762045 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:02:30.773598 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 00:02:30.776680 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 00:02:30.776821 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 00:02:30.782976 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 9 00:02:30.823118 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Sep 9 00:02:30.855123 kernel: mousedev: PS/2 mouse device common for all mice
Sep 9 00:02:30.863626 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:02:30.870173 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 00:02:31.009963 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 9 00:02:31.031737 systemd-networkd[1436]: lo: Link UP
Sep 9 00:02:31.031756 systemd-networkd[1436]: lo: Gained carrier
Sep 9 00:02:31.038137 systemd-networkd[1436]: Enumeration completed
Sep 9 00:02:31.041502 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:02:31.041513 systemd-networkd[1436]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 00:02:31.045754 systemd-networkd[1436]: eth0: Link UP
Sep 9 00:02:31.045763 systemd-networkd[1436]: eth0: Gained carrier
Sep 9 00:02:31.045785 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:02:31.061037 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 00:02:31.072250 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:02:31.077119 systemd[1]: Reached target network.target - Network.
Sep 9 00:02:31.078278 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 00:02:31.093832 kernel: kvm_amd: TSC scaling supported
Sep 9 00:02:31.093924 kernel: kvm_amd: Nested Virtualization enabled
Sep 9 00:02:31.093947 kernel: kvm_amd: Nested Paging enabled
Sep 9 00:02:31.093963 kernel: kvm_amd: LBR virtualization supported
Sep 9 00:02:31.093980 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 9 00:02:31.093995 kernel: kvm_amd: Virtual GIF supported
Sep 9 00:02:31.096623 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 9 00:02:31.100859 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 9 00:02:31.131199 systemd-networkd[1436]: eth0: DHCPv4 address 10.0.0.143/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 00:02:31.132823 systemd-timesyncd[1455]: Network configuration changed, trying to establish connection.
Sep 9 00:02:32.432920 systemd-resolved[1343]: Clock change detected. Flushing caches.
Sep 9 00:02:32.435157 systemd-timesyncd[1455]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 9 00:02:32.435247 systemd-timesyncd[1455]: Initial clock synchronization to Tue 2025-09-09 00:02:32.432862 UTC.
Sep 9 00:02:32.435969 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 9 00:02:32.687703 kernel: EDAC MC: Ver: 3.0.0
Sep 9 00:02:32.724627 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 9 00:02:32.749931 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 9 00:02:32.769864 lvm[1474]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 9 00:02:32.815743 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 9 00:02:32.825224 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:02:32.830735 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 00:02:32.835367 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 9 00:02:32.840610 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
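The DHCPv4 lease logged above (10.0.0.143/16 with gateway 10.0.0.1, handed out by 10.0.0.1) can be sanity-checked with the standard library; a minimal sketch:

    import ipaddress

    # Values taken from the systemd-networkd entry above.
    iface = ipaddress.ip_interface("10.0.0.143/16")
    print(iface.network)                                       # 10.0.0.0/16
    print(iface.network.num_addresses)                         # 65536
    print(ipaddress.ip_address("10.0.0.1") in iface.network)   # True: gateway is on-link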
Sep 9 00:02:32.844531 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 9 00:02:32.852553 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 9 00:02:32.856286 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 9 00:02:32.858179 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 00:02:32.858234 systemd[1]: Reached target paths.target - Path Units.
Sep 9 00:02:32.859692 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 00:02:32.866982 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 00:02:32.872604 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 00:02:32.877450 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 9 00:02:32.879774 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 9 00:02:32.881411 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 9 00:02:32.898072 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 00:02:32.906947 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 9 00:02:32.917832 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 9 00:02:32.925691 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 00:02:32.932045 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 00:02:32.943214 systemd[1]: Reached target basic.target - Basic System.
Sep 9 00:02:32.950918 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 00:02:32.950960 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 00:02:32.964772 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 00:02:32.980767 lvm[1478]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 9 00:02:32.982388 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 00:02:32.997134 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 00:02:33.019087 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 00:02:33.022331 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 00:02:33.027266 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 00:02:33.029120 jq[1481]: false
Sep 9 00:02:33.031854 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 00:02:33.038980 dbus-daemon[1480]: [system] SELinux support is enabled
Sep 9 00:02:33.043041 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 00:02:33.132017 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 00:02:33.141140 extend-filesystems[1482]: Found loop3
Sep 9 00:02:33.143210 extend-filesystems[1482]: Found loop4
Sep 9 00:02:33.143210 extend-filesystems[1482]: Found loop5
Sep 9 00:02:33.143210 extend-filesystems[1482]: Found sr0
Sep 9 00:02:33.143210 extend-filesystems[1482]: Found vda
Sep 9 00:02:33.143210 extend-filesystems[1482]: Found vda1
Sep 9 00:02:33.143210 extend-filesystems[1482]: Found vda2
Sep 9 00:02:33.143210 extend-filesystems[1482]: Found vda3
Sep 9 00:02:33.143210 extend-filesystems[1482]: Found usr
Sep 9 00:02:33.143210 extend-filesystems[1482]: Found vda4
Sep 9 00:02:33.143210 extend-filesystems[1482]: Found vda6
Sep 9 00:02:33.143210 extend-filesystems[1482]: Found vda7
Sep 9 00:02:33.143210 extend-filesystems[1482]: Found vda9
Sep 9 00:02:33.143210 extend-filesystems[1482]: Checking size of /dev/vda9
Sep 9 00:02:33.206043 extend-filesystems[1482]: Resized partition /dev/vda9
Sep 9 00:02:33.191698 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 00:02:33.216233 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 00:02:33.217464 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 00:02:33.219675 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1404)
Sep 9 00:02:33.232117 extend-filesystems[1502]: resize2fs 1.47.1 (20-May-2024)
Sep 9 00:02:33.285519 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 00:02:33.301670 update_engine[1501]: I20250909 00:02:33.298045 1501 main.cc:92] Flatcar Update Engine starting
Sep 9 00:02:33.301670 update_engine[1501]: I20250909 00:02:33.299960 1501 update_check_scheduler.cc:74] Next update check in 5m24s
Sep 9 00:02:33.303632 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 00:02:33.306264 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 00:02:33.388311 jq[1504]: true
Sep 9 00:02:33.391703 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 9 00:02:33.417767 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 00:02:33.418178 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 00:02:33.418633 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 00:02:33.418983 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 00:02:33.437681 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 9 00:02:33.440937 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 00:02:33.441332 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 00:02:33.461139 (ntainerd)[1508]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 00:02:33.467715 jq[1507]: true
Sep 9 00:02:33.503099 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 00:02:33.507503 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 00:02:33.507556 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 00:02:33.511120 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 00:02:33.511284 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 00:02:33.532049 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 00:02:33.541198 systemd-networkd[1436]: eth0: Gained IPv6LL Sep 9 00:02:33.548112 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 00:02:33.556362 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 00:02:33.570131 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 00:02:33.582321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:02:33.598855 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 00:02:33.620763 systemd-logind[1496]: Watching system buttons on /dev/input/event1 (Power Button) Sep 9 00:02:33.620802 systemd-logind[1496]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 9 00:02:33.622180 systemd-logind[1496]: New seat seat0. Sep 9 00:02:33.628179 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 00:02:33.661753 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 00:02:33.662226 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 00:02:33.670364 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 00:02:33.756461 locksmithd[1533]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 00:02:33.794920 tar[1506]: linux-amd64/helm Sep 9 00:02:33.811708 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 00:02:33.815682 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 00:02:33.874718 extend-filesystems[1502]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 00:02:33.874718 extend-filesystems[1502]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 00:02:33.874718 extend-filesystems[1502]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 00:02:33.980111 extend-filesystems[1482]: Resized filesystem in /dev/vda9 Sep 9 00:02:33.981440 bash[1532]: Updated "/home/core/.ssh/authorized_keys" Sep 9 00:02:33.981608 sshd_keygen[1498]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 00:02:33.887660 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 00:02:33.888062 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 00:02:33.984576 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 00:02:33.999823 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 00:02:34.056192 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 00:02:34.070256 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 00:02:34.076865 systemd[1]: Started sshd@0-10.0.0.143:22-10.0.0.1:48118.service - OpenSSH per-connection server daemon (10.0.0.1:48118). Sep 9 00:02:34.081974 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
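The resize story completes above: the kernel grew the ext4 filesystem on /dev/vda9 from 553472 to 1864699 blocks, and extend-filesystems confirms 4 KiB blocks. In bytes that is roughly a 2.1 GiB root enlarged to about 7.1 GiB, worked directly from the logged counts:

    BLOCK = 4096  # the log reports "(4k) blocks" for /dev/vda9

    old_blocks = 553_472    # "resizing filesystem from 553472 ..."
    new_blocks = 1_864_699  # "... to 1864699 blocks"

    for label, blocks in (("before", old_blocks), ("after", new_blocks)):
        print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
    # before: 2.11 GiB
    # after: 7.11 GiB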
Sep 9 00:02:34.115177 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 00:02:34.115603 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 00:02:34.158432 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 00:02:34.198618 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 00:02:34.239204 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 00:02:34.249175 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 9 00:02:34.250823 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 00:02:34.321573 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 48118 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:02:34.324034 sshd-session[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:02:34.334364 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 00:02:34.345993 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 00:02:34.360637 systemd-logind[1496]: New session 1 of user core. Sep 9 00:02:34.526015 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 00:02:34.540013 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 00:02:34.574470 (systemd)[1587]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:02:34.578279 systemd-logind[1496]: New session c1 of user core. Sep 9 00:02:34.650404 containerd[1508]: time="2025-09-09T00:02:34.649845850Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 9 00:02:34.689676 containerd[1508]: time="2025-09-09T00:02:34.689429804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:02:34.692677 containerd[1508]: time="2025-09-09T00:02:34.692625267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:02:34.692919 containerd[1508]: time="2025-09-09T00:02:34.692729853Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 9 00:02:34.692919 containerd[1508]: time="2025-09-09T00:02:34.692752235Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 9 00:02:34.693066 containerd[1508]: time="2025-09-09T00:02:34.693046837Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 9 00:02:34.693147 containerd[1508]: time="2025-09-09T00:02:34.693131987Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 9 00:02:34.693290 containerd[1508]: time="2025-09-09T00:02:34.693269615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:02:34.693346 containerd[1508]: time="2025-09-09T00:02:34.693334216Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Sep 9 00:02:34.693706 containerd[1508]: time="2025-09-09T00:02:34.693685455Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:02:34.694214 containerd[1508]: time="2025-09-09T00:02:34.693756568Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 9 00:02:34.694214 containerd[1508]: time="2025-09-09T00:02:34.693778038Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:02:34.694214 containerd[1508]: time="2025-09-09T00:02:34.693787646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 9 00:02:34.694214 containerd[1508]: time="2025-09-09T00:02:34.693921217Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:02:34.694214 containerd[1508]: time="2025-09-09T00:02:34.694182006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:02:34.694525 containerd[1508]: time="2025-09-09T00:02:34.694505893Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:02:34.694588 containerd[1508]: time="2025-09-09T00:02:34.694571647Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 9 00:02:34.694794 containerd[1508]: time="2025-09-09T00:02:34.694771872Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 9 00:02:34.694933 containerd[1508]: time="2025-09-09T00:02:34.694916493Z" level=info msg="metadata content store policy set" policy=shared Sep 9 00:02:34.809560 systemd[1587]: Queued start job for default target default.target. Sep 9 00:02:34.824708 systemd[1587]: Created slice app.slice - User Application Slice. Sep 9 00:02:34.824738 systemd[1587]: Reached target paths.target - Paths. Sep 9 00:02:34.824895 systemd[1587]: Reached target timers.target - Timers. Sep 9 00:02:34.827075 systemd[1587]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 00:02:34.884687 systemd[1587]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 00:02:34.884895 systemd[1587]: Reached target sockets.target - Sockets. Sep 9 00:02:34.884966 systemd[1587]: Reached target basic.target - Basic System. Sep 9 00:02:34.885027 systemd[1587]: Reached target default.target - Main User Target. Sep 9 00:02:34.885074 systemd[1587]: Startup finished in 249ms. Sep 9 00:02:34.885339 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 00:02:34.901961 tar[1506]: linux-amd64/LICENSE Sep 9 00:02:34.906875 tar[1506]: linux-amd64/README.md Sep 9 00:02:34.906924 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 00:02:34.922260 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 00:02:34.984989 systemd[1]: Started sshd@1-10.0.0.143:22-10.0.0.1:48134.service - OpenSSH per-connection server daemon (10.0.0.1:48134). 
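containerd skips the btrfs and zfs snapshotters in the records above because the directory each would manage has to live on a filesystem of the matching type, and here both paths sit on ext4. A simplified sketch of that probe using /proc/self/mounts (containerd does the real check through statfs, so treat this as an approximation):

    def fstype(path):
        # Filesystem type of the longest mount-point prefix of *path*.
        best_mnt, best_type = "", None
        with open("/proc/self/mounts") as mounts:
            for line in mounts:
                _dev, mnt, fs, *_ = line.split()
                if mnt == "/" or path == mnt or path.startswith(mnt + "/"):
                    if len(mnt) > len(best_mnt):
                        best_mnt, best_type = mnt, fs
        return best_type

    root = "/var/lib/containerd/io.containerd.snapshotter.v1.btrfs"
    if fstype(root) != "btrfs":
        print(f"skip btrfs snapshotter: {root} is on {fstype(root)}, not btrfs")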
Sep 9 00:02:35.034851 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 48134 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:02:35.036557 sshd-session[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:02:35.042110 systemd-logind[1496]: New session 2 of user core. Sep 9 00:02:35.054807 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 00:02:35.125804 sshd[1606]: Connection closed by 10.0.0.1 port 48134 Sep 9 00:02:35.126256 sshd-session[1604]: pam_unix(sshd:session): session closed for user core Sep 9 00:02:35.142858 systemd[1]: sshd@1-10.0.0.143:22-10.0.0.1:48134.service: Deactivated successfully. Sep 9 00:02:35.145232 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 00:02:35.147159 systemd-logind[1496]: Session 2 logged out. Waiting for processes to exit. Sep 9 00:02:35.161977 systemd[1]: Started sshd@2-10.0.0.143:22-10.0.0.1:48136.service - OpenSSH per-connection server daemon (10.0.0.1:48136). Sep 9 00:02:35.165008 containerd[1508]: time="2025-09-09T00:02:35.164569260Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 9 00:02:35.164928 systemd-logind[1496]: Removed session 2. Sep 9 00:02:35.167056 containerd[1508]: time="2025-09-09T00:02:35.167017471Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 9 00:02:35.167796 containerd[1508]: time="2025-09-09T00:02:35.167156091Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 9 00:02:35.167796 containerd[1508]: time="2025-09-09T00:02:35.167187771Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 9 00:02:35.167796 containerd[1508]: time="2025-09-09T00:02:35.167214952Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 9 00:02:35.167796 containerd[1508]: time="2025-09-09T00:02:35.167424645Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 9 00:02:35.168254 containerd[1508]: time="2025-09-09T00:02:35.168139285Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 9 00:02:35.168536 containerd[1508]: time="2025-09-09T00:02:35.168508247Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 9 00:02:35.168570 containerd[1508]: time="2025-09-09T00:02:35.168542621Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 9 00:02:35.168570 containerd[1508]: time="2025-09-09T00:02:35.168563831Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 9 00:02:35.168658 containerd[1508]: time="2025-09-09T00:02:35.168584590Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 9 00:02:35.168658 containerd[1508]: time="2025-09-09T00:02:35.168603305Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 9 00:02:35.168658 containerd[1508]: time="2025-09-09T00:02:35.168621750Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Sep 9 00:02:35.168980 containerd[1508]: time="2025-09-09T00:02:35.168944014Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 9 00:02:35.169091 containerd[1508]: time="2025-09-09T00:02:35.169066314Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 9 00:02:35.169161 containerd[1508]: time="2025-09-09T00:02:35.169093745Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 9 00:02:35.169161 containerd[1508]: time="2025-09-09T00:02:35.169117980Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 9 00:02:35.169161 containerd[1508]: time="2025-09-09T00:02:35.169135333Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 9 00:02:35.169228 containerd[1508]: time="2025-09-09T00:02:35.169162744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 9 00:02:35.169228 containerd[1508]: time="2025-09-09T00:02:35.169181480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 9 00:02:35.169228 containerd[1508]: time="2025-09-09T00:02:35.169197750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 9 00:02:35.169228 containerd[1508]: time="2025-09-09T00:02:35.169214301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 9 00:02:35.169318 containerd[1508]: time="2025-09-09T00:02:35.169230271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 9 00:02:35.169318 containerd[1508]: time="2025-09-09T00:02:35.169248696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 9 00:02:35.169318 containerd[1508]: time="2025-09-09T00:02:35.169264806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 9 00:02:35.169318 containerd[1508]: time="2025-09-09T00:02:35.169284974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 9 00:02:35.169318 containerd[1508]: time="2025-09-09T00:02:35.169302557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 9 00:02:35.169418 containerd[1508]: time="2025-09-09T00:02:35.169321312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 9 00:02:35.169418 containerd[1508]: time="2025-09-09T00:02:35.169337562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 9 00:02:35.169418 containerd[1508]: time="2025-09-09T00:02:35.169353382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 9 00:02:35.169418 containerd[1508]: time="2025-09-09T00:02:35.169369522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 9 00:02:35.169418 containerd[1508]: time="2025-09-09T00:02:35.169388438Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Sep 9 00:02:35.169513 containerd[1508]: time="2025-09-09T00:02:35.169419286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 9 00:02:35.169513 containerd[1508]: time="2025-09-09T00:02:35.169437279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 9 00:02:35.169513 containerd[1508]: time="2025-09-09T00:02:35.169451967Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 9 00:02:35.169572 containerd[1508]: time="2025-09-09T00:02:35.169536135Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 9 00:02:35.169697 containerd[1508]: time="2025-09-09T00:02:35.169562945Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 9 00:02:35.169738 containerd[1508]: time="2025-09-09T00:02:35.169700553Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 9 00:02:35.169738 containerd[1508]: time="2025-09-09T00:02:35.169722945Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 9 00:02:35.169778 containerd[1508]: time="2025-09-09T00:02:35.169752811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 9 00:02:35.169778 containerd[1508]: time="2025-09-09T00:02:35.169772227Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 9 00:02:35.169816 containerd[1508]: time="2025-09-09T00:02:35.169787887Z" level=info msg="NRI interface is disabled by configuration." Sep 9 00:02:35.169816 containerd[1508]: time="2025-09-09T00:02:35.169802975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 9 00:02:35.170276 containerd[1508]: time="2025-09-09T00:02:35.170202003Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 9 00:02:35.170409 containerd[1508]: time="2025-09-09T00:02:35.170275842Z" level=info msg="Connect containerd service" Sep 9 00:02:35.170409 containerd[1508]: time="2025-09-09T00:02:35.170371261Z" level=info msg="using legacy CRI server" Sep 9 00:02:35.170409 containerd[1508]: time="2025-09-09T00:02:35.170385518Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 00:02:35.170609 containerd[1508]: time="2025-09-09T00:02:35.170586064Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 9 00:02:35.171677 containerd[1508]: time="2025-09-09T00:02:35.171626886Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:02:35.171859 
containerd[1508]: time="2025-09-09T00:02:35.171797215Z" level=info msg="Start subscribing containerd event" Sep 9 00:02:35.171897 containerd[1508]: time="2025-09-09T00:02:35.171884238Z" level=info msg="Start recovering state" Sep 9 00:02:35.171980 containerd[1508]: time="2025-09-09T00:02:35.171960491Z" level=info msg="Start event monitor" Sep 9 00:02:35.172016 containerd[1508]: time="2025-09-09T00:02:35.171990007Z" level=info msg="Start snapshots syncer" Sep 9 00:02:35.172016 containerd[1508]: time="2025-09-09T00:02:35.172002871Z" level=info msg="Start cni network conf syncer for default" Sep 9 00:02:35.172016 containerd[1508]: time="2025-09-09T00:02:35.172014893Z" level=info msg="Start streaming server" Sep 9 00:02:35.172548 containerd[1508]: time="2025-09-09T00:02:35.172524168Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 00:02:35.172613 containerd[1508]: time="2025-09-09T00:02:35.172594009Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 00:02:35.172741 containerd[1508]: time="2025-09-09T00:02:35.172699778Z" level=info msg="containerd successfully booted in 0.524549s" Sep 9 00:02:35.172810 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 00:02:35.209296 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 48136 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:02:35.211204 sshd-session[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:02:35.215735 systemd-logind[1496]: New session 3 of user core. Sep 9 00:02:35.302903 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 00:02:35.358592 sshd[1615]: Connection closed by 10.0.0.1 port 48136 Sep 9 00:02:35.358923 sshd-session[1611]: pam_unix(sshd:session): session closed for user core Sep 9 00:02:35.363427 systemd[1]: sshd@2-10.0.0.143:22-10.0.0.1:48136.service: Deactivated successfully. Sep 9 00:02:35.365417 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 00:02:35.366094 systemd-logind[1496]: Session 3 logged out. Waiting for processes to exit. Sep 9 00:02:35.366951 systemd-logind[1496]: Removed session 3. Sep 9 00:02:35.852309 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:02:35.853996 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 00:02:35.855360 systemd[1]: Startup finished in 1.069s (kernel) + 8.257s (initrd) + 6.621s (userspace) = 15.947s. Sep 9 00:02:35.900063 (kubelet)[1625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:02:36.502844 kubelet[1625]: E0909 00:02:36.502755 1625 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:02:36.507156 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:02:36.507411 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:02:36.507895 systemd[1]: kubelet.service: Consumed 1.955s CPU time, 267.9M memory peak. Sep 9 00:02:45.375486 systemd[1]: Started sshd@3-10.0.0.143:22-10.0.0.1:44446.service - OpenSSH per-connection server daemon (10.0.0.1:44446). 
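The kubelet start above fails because /var/lib/kubelet/config.yaml does not exist yet: on a kubeadm-managed node that file is only written by kubeadm init or kubeadm join, so the restart loop that follows is expected until one of those runs. A preflight check along those lines (a sketch, not part of the actual boot flow):

    import pathlib
    import sys

    KUBELET_CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")

    # Mirrors the E0909 error in the log: the kubelet refuses to start
    # until kubeadm init/join has written its config file.
    if not KUBELET_CONFIG.is_file():
        sys.exit(f"{KUBELET_CONFIG} missing; run 'kubeadm init' or 'kubeadm join' first")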
Sep 9 00:02:45.419454 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 44446 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:02:45.421236 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:02:45.426386 systemd-logind[1496]: New session 4 of user core. Sep 9 00:02:45.435826 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 00:02:45.490769 sshd[1640]: Connection closed by 10.0.0.1 port 44446 Sep 9 00:02:45.491189 sshd-session[1638]: pam_unix(sshd:session): session closed for user core Sep 9 00:02:45.504933 systemd[1]: sshd@3-10.0.0.143:22-10.0.0.1:44446.service: Deactivated successfully. Sep 9 00:02:45.507579 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 00:02:45.509535 systemd-logind[1496]: Session 4 logged out. Waiting for processes to exit. Sep 9 00:02:45.523096 systemd[1]: Started sshd@4-10.0.0.143:22-10.0.0.1:44456.service - OpenSSH per-connection server daemon (10.0.0.1:44456). Sep 9 00:02:45.524259 systemd-logind[1496]: Removed session 4. Sep 9 00:02:45.564168 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 44456 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:02:45.566016 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:02:45.571274 systemd-logind[1496]: New session 5 of user core. Sep 9 00:02:45.581899 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 00:02:45.632895 sshd[1648]: Connection closed by 10.0.0.1 port 44456 Sep 9 00:02:45.633203 sshd-session[1645]: pam_unix(sshd:session): session closed for user core Sep 9 00:02:45.646791 systemd[1]: sshd@4-10.0.0.143:22-10.0.0.1:44456.service: Deactivated successfully. Sep 9 00:02:45.648844 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 00:02:45.650814 systemd-logind[1496]: Session 5 logged out. Waiting for processes to exit. Sep 9 00:02:45.661961 systemd[1]: Started sshd@5-10.0.0.143:22-10.0.0.1:44466.service - OpenSSH per-connection server daemon (10.0.0.1:44466). Sep 9 00:02:45.663140 systemd-logind[1496]: Removed session 5. Sep 9 00:02:45.701344 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 44466 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:02:45.703050 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:02:45.707977 systemd-logind[1496]: New session 6 of user core. Sep 9 00:02:45.718780 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 00:02:45.773855 sshd[1656]: Connection closed by 10.0.0.1 port 44466 Sep 9 00:02:45.774313 sshd-session[1653]: pam_unix(sshd:session): session closed for user core Sep 9 00:02:45.790600 systemd[1]: sshd@5-10.0.0.143:22-10.0.0.1:44466.service: Deactivated successfully. Sep 9 00:02:45.792817 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 00:02:45.794508 systemd-logind[1496]: Session 6 logged out. Waiting for processes to exit. Sep 9 00:02:45.806878 systemd[1]: Started sshd@6-10.0.0.143:22-10.0.0.1:44468.service - OpenSSH per-connection server daemon (10.0.0.1:44468). Sep 9 00:02:45.807911 systemd-logind[1496]: Removed session 6. 
Sep 9 00:02:45.846365 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 44468 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:02:45.848116 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:02:45.852984 systemd-logind[1496]: New session 7 of user core. Sep 9 00:02:45.862779 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 00:02:45.922329 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 00:02:45.922805 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:02:46.071970 sudo[1665]: pam_unix(sudo:session): session closed for user root Sep 9 00:02:46.073982 sshd[1664]: Connection closed by 10.0.0.1 port 44468 Sep 9 00:02:46.074469 sshd-session[1661]: pam_unix(sshd:session): session closed for user core Sep 9 00:02:46.087981 systemd[1]: sshd@6-10.0.0.143:22-10.0.0.1:44468.service: Deactivated successfully. Sep 9 00:02:46.089978 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 00:02:46.091578 systemd-logind[1496]: Session 7 logged out. Waiting for processes to exit. Sep 9 00:02:46.107984 systemd[1]: Started sshd@7-10.0.0.143:22-10.0.0.1:44482.service - OpenSSH per-connection server daemon (10.0.0.1:44482). Sep 9 00:02:46.108973 systemd-logind[1496]: Removed session 7. Sep 9 00:02:46.146862 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 44482 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:02:46.148516 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:02:46.152880 systemd-logind[1496]: New session 8 of user core. Sep 9 00:02:46.163766 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 00:02:46.217937 sudo[1675]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 00:02:46.218277 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:02:46.222128 sudo[1675]: pam_unix(sudo:session): session closed for user root Sep 9 00:02:46.228509 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 00:02:46.228918 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:02:46.253022 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 00:02:46.283365 augenrules[1697]: No rules Sep 9 00:02:46.285120 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 00:02:46.285413 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 00:02:46.286509 sudo[1674]: pam_unix(sudo:session): session closed for user root Sep 9 00:02:46.288020 sshd[1673]: Connection closed by 10.0.0.1 port 44482 Sep 9 00:02:46.288375 sshd-session[1670]: pam_unix(sshd:session): session closed for user core Sep 9 00:02:46.300773 systemd[1]: sshd@7-10.0.0.143:22-10.0.0.1:44482.service: Deactivated successfully. Sep 9 00:02:46.303413 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 00:02:46.305398 systemd-logind[1496]: Session 8 logged out. Waiting for processes to exit. Sep 9 00:02:46.318011 systemd[1]: Started sshd@8-10.0.0.143:22-10.0.0.1:44488.service - OpenSSH per-connection server daemon (10.0.0.1:44488). Sep 9 00:02:46.318940 systemd-logind[1496]: Removed session 8. 
Sep 9 00:02:46.356804 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 44488 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:02:46.358189 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:02:46.362717 systemd-logind[1496]: New session 9 of user core. Sep 9 00:02:46.372772 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 00:02:46.426003 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 00:02:46.426440 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:02:46.706699 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 00:02:46.722890 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 00:02:46.722998 (dockerd)[1728]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 00:02:46.724218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:02:46.937072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:02:46.942244 (kubelet)[1743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:02:46.994380 kubelet[1743]: E0909 00:02:46.994279 1743 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:02:47.001585 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:02:47.001815 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:02:47.002203 systemd[1]: kubelet.service: Consumed 269ms CPU time, 110.2M memory peak. Sep 9 00:02:47.013125 dockerd[1728]: time="2025-09-09T00:02:47.013048298Z" level=info msg="Starting up" Sep 9 00:02:48.832047 dockerd[1728]: time="2025-09-09T00:02:48.831860850Z" level=info msg="Loading containers: start." Sep 9 00:02:49.625913 kernel: Initializing XFRM netlink socket Sep 9 00:02:49.981547 systemd-networkd[1436]: docker0: Link UP Sep 9 00:02:50.041966 dockerd[1728]: time="2025-09-09T00:02:50.041377716Z" level=info msg="Loading containers: done." Sep 9 00:02:50.077726 dockerd[1728]: time="2025-09-09T00:02:50.077563878Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 00:02:50.077726 dockerd[1728]: time="2025-09-09T00:02:50.077719630Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 9 00:02:50.078277 dockerd[1728]: time="2025-09-09T00:02:50.077891863Z" level=info msg="Daemon has completed initialization" Sep 9 00:02:50.313128 dockerd[1728]: time="2025-09-09T00:02:50.311474123Z" level=info msg="API listen on /run/docker.sock" Sep 9 00:02:50.311798 systemd[1]: Started docker.service - Docker Application Container Engine. 
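With "API listen on /run/docker.sock" logged above, the Engine API is reachable over that Unix socket, and /_ping is its standard liveness endpoint. A dependency-free probe using only the stdlib (the socket path is taken from the log; a healthy daemon answers with the body "OK"):

    import socket

    # Speak plain HTTP/1.0 over the Unix socket the daemon listens on.
    with socket.socket(socket.AF_UNIX) as s:
        s.connect("/run/docker.sock")
        s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
        reply = s.recv(4096).decode()

    print(reply.splitlines()[0])            # status line, e.g. "HTTP/1.1 200 OK"
    print(reply.rsplit("\r\n\r\n", 1)[-1])  # body: "OK"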
Sep 9 00:02:51.556950 containerd[1508]: time="2025-09-09T00:02:51.556880074Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 9 00:02:55.945385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount870618028.mount: Deactivated successfully. Sep 9 00:02:57.220209 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 00:02:57.229863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:02:57.419457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:02:57.424853 (kubelet)[1954]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:02:58.014937 kubelet[1954]: E0909 00:02:58.014854 1954 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:02:58.020669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:02:58.020930 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:02:58.021434 systemd[1]: kubelet.service: Consumed 257ms CPU time, 112.9M memory peak. Sep 9 00:03:04.144191 containerd[1508]: time="2025-09-09T00:03:04.144101617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:04.180730 containerd[1508]: time="2025-09-09T00:03:04.180632185Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079631" Sep 9 00:03:04.200013 containerd[1508]: time="2025-09-09T00:03:04.199942529Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:04.220063 containerd[1508]: time="2025-09-09T00:03:04.219952244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:04.222412 containerd[1508]: time="2025-09-09T00:03:04.222338870Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 12.665386401s" Sep 9 00:03:04.222515 containerd[1508]: time="2025-09-09T00:03:04.222485275Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 9 00:03:04.225383 containerd[1508]: time="2025-09-09T00:03:04.225348394Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 9 00:03:07.350694 containerd[1508]: time="2025-09-09T00:03:07.350600259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:07.370261 containerd[1508]: time="2025-09-09T00:03:07.370197158Z" level=info msg="stop pulling 
image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714681" Sep 9 00:03:07.381841 containerd[1508]: time="2025-09-09T00:03:07.381756332Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:07.396321 containerd[1508]: time="2025-09-09T00:03:07.396264229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:07.397727 containerd[1508]: time="2025-09-09T00:03:07.397637061Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 3.172245195s" Sep 9 00:03:07.397727 containerd[1508]: time="2025-09-09T00:03:07.397723276Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 9 00:03:07.398737 containerd[1508]: time="2025-09-09T00:03:07.398395711Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 9 00:03:08.220470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 9 00:03:08.238008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:03:08.427943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:03:08.438497 (kubelet)[2024]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:03:08.754267 kubelet[2024]: E0909 00:03:08.754066 2024 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:03:08.758660 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:03:08.758900 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:03:08.759339 systemd[1]: kubelet.service: Consumed 281ms CPU time, 113.2M memory peak. 
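The two pulls completed so far carry enough data to estimate effective registry throughput: the logged image size divided by the reported wall time. Worked from the records above (values copied verbatim):

    PULLS = {
        # image: (size in bytes, wall-clock seconds), from the log above
        "kube-apiserver:v1.31.12":          (28_076_431, 12.665386401),
        "kube-controller-manager:v1.31.12": (26_317_875,  3.172245195),
    }

    for image, (size, secs) in PULLS.items():
        print(f"{image}: {size / secs / 2**20:.1f} MiB/s")
    # kube-apiserver:v1.31.12: 2.1 MiB/s
    # kube-controller-manager:v1.31.12: 7.9 MiB/s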
Sep 9 00:03:12.920269 containerd[1508]: time="2025-09-09T00:03:12.920173967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:12.958051 containerd[1508]: time="2025-09-09T00:03:12.957945634Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782427" Sep 9 00:03:12.999835 containerd[1508]: time="2025-09-09T00:03:12.999761606Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:13.072148 containerd[1508]: time="2025-09-09T00:03:13.072093809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:13.073435 containerd[1508]: time="2025-09-09T00:03:13.073397114Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 5.67495749s" Sep 9 00:03:13.073435 containerd[1508]: time="2025-09-09T00:03:13.073428624Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 9 00:03:13.074038 containerd[1508]: time="2025-09-09T00:03:13.073998988Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 9 00:03:16.638544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3337300667.mount: Deactivated successfully. Sep 9 00:03:18.938527 update_engine[1501]: I20250909 00:03:18.938356 1501 update_attempter.cc:509] Updating boot flags... Sep 9 00:03:18.970103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 9 00:03:18.982817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:03:19.029713 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2054) Sep 9 00:03:19.158245 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:03:19.162536 (kubelet)[2065]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:03:19.881627 kubelet[2065]: E0909 00:03:19.881565 2065 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:03:19.885733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:03:19.885983 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:03:19.886368 systemd[1]: kubelet.service: Consumed 228ms CPU time, 111.4M memory peak. 
Sep 9 00:03:20.095707 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2055) Sep 9 00:03:20.365673 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2055) Sep 9 00:03:20.506213 containerd[1508]: time="2025-09-09T00:03:20.506128201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:20.530663 containerd[1508]: time="2025-09-09T00:03:20.530574627Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384255" Sep 9 00:03:20.570134 containerd[1508]: time="2025-09-09T00:03:20.570076255Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:20.647467 containerd[1508]: time="2025-09-09T00:03:20.647305907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:20.648300 containerd[1508]: time="2025-09-09T00:03:20.648238530Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 7.5741855s" Sep 9 00:03:20.648300 containerd[1508]: time="2025-09-09T00:03:20.648295017Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 9 00:03:20.648993 containerd[1508]: time="2025-09-09T00:03:20.648961298Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 00:03:25.260371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount122061838.mount: Deactivated successfully. Sep 9 00:03:29.970186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 9 00:03:29.979906 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:03:30.150594 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:03:30.154958 (kubelet)[2124]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:03:30.439884 kubelet[2124]: E0909 00:03:30.439625 2124 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:03:30.444421 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:03:30.444673 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:03:30.445080 systemd[1]: kubelet.service: Consumed 251ms CPU time, 116.7M memory peak. 
Sep 9 00:03:31.689882 containerd[1508]: time="2025-09-09T00:03:31.689786673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:31.705843 containerd[1508]: time="2025-09-09T00:03:31.705774546Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 9 00:03:31.719242 containerd[1508]: time="2025-09-09T00:03:31.719197669Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:31.728943 containerd[1508]: time="2025-09-09T00:03:31.728881752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:31.730059 containerd[1508]: time="2025-09-09T00:03:31.730015497Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 11.081013954s" Sep 9 00:03:31.730059 containerd[1508]: time="2025-09-09T00:03:31.730057957Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 9 00:03:31.730772 containerd[1508]: time="2025-09-09T00:03:31.730741824Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 00:03:34.233448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3139072450.mount: Deactivated successfully. 
Sep 9 00:03:34.380998 containerd[1508]: time="2025-09-09T00:03:34.380930324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:34.398041 containerd[1508]: time="2025-09-09T00:03:34.397947722Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 9 00:03:34.411536 containerd[1508]: time="2025-09-09T00:03:34.411487297Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:34.425958 containerd[1508]: time="2025-09-09T00:03:34.425891399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:34.426545 containerd[1508]: time="2025-09-09T00:03:34.426505404Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.695729426s" Sep 9 00:03:34.426545 containerd[1508]: time="2025-09-09T00:03:34.426535741Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 00:03:34.427253 containerd[1508]: time="2025-09-09T00:03:34.427197967Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 9 00:03:37.817001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2325038961.mount: Deactivated successfully. Sep 9 00:03:40.414465 containerd[1508]: time="2025-09-09T00:03:40.414391575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:40.415272 containerd[1508]: time="2025-09-09T00:03:40.415194585Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 9 00:03:40.416591 containerd[1508]: time="2025-09-09T00:03:40.416534774Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:40.419450 containerd[1508]: time="2025-09-09T00:03:40.419413514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:40.420633 containerd[1508]: time="2025-09-09T00:03:40.420593993Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 5.99335514s" Sep 9 00:03:40.420633 containerd[1508]: time="2025-09-09T00:03:40.420629399Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 9 00:03:40.470111 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
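By this point the kubelet has been through six scheduled restarts, each landing a little over ten seconds after the previous failure; that cadence is consistent with a RestartSec= of about 10s in the unit, though that is an inference from the timestamps rather than from the unit file. Checking the spacing from the logged "Scheduled restart job" times:

    from datetime import datetime

    # Timestamps of "Scheduled restart job" for counters 1..6, from the log.
    stamps = ["00:02:46.706699", "00:02:57.220209", "00:03:08.220470",
              "00:03:18.970103", "00:03:29.970186", "00:03:40.470111"]

    times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
    for a, b in zip(times, times[1:]):
        print(f"{(b - a).total_seconds():.2f}s between restarts")
    # 10.51s, 11.00s, 10.75s, 11.00s, 10.50s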
Sep 9 00:03:40.482829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:03:40.656232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:03:40.660792 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:03:40.696106 kubelet[2220]: E0909 00:03:40.695949 2220 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:03:40.700424 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:03:40.700712 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:03:40.701170 systemd[1]: kubelet.service: Consumed 212ms CPU time, 110.7M memory peak.
Sep 9 00:03:43.142981 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:03:43.143205 systemd[1]: kubelet.service: Consumed 212ms CPU time, 110.7M memory peak.
Sep 9 00:03:43.156869 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:03:43.182061 systemd[1]: Reload requested from client PID 2250 ('systemctl') (unit session-9.scope)...
Sep 9 00:03:43.182082 systemd[1]: Reloading...
Sep 9 00:03:43.258676 zram_generator::config[2297]: No configuration found.
Sep 9 00:03:43.547933 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:03:43.653533 systemd[1]: Reloading finished in 470 ms.
Sep 9 00:03:43.712328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:03:43.716964 (kubelet)[2332]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 00:03:43.719914 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:03:43.721275 systemd[1]: kubelet.service: Deactivated successfully.
Sep 9 00:03:43.721595 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:03:43.721672 systemd[1]: kubelet.service: Consumed 172ms CPU time, 99.3M memory peak.
Sep 9 00:03:43.734966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:03:43.894787 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:03:43.900071 (kubelet)[2346]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 00:03:43.933019 kubelet[2346]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:03:43.933019 kubelet[2346]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 9 00:03:43.933019 kubelet[2346]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:03:43.933466 kubelet[2346]: I0909 00:03:43.933304 2346 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 00:03:44.086258 kubelet[2346]: I0909 00:03:44.086212 2346 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 9 00:03:44.086258 kubelet[2346]: I0909 00:03:44.086243 2346 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 00:03:44.086497 kubelet[2346]: I0909 00:03:44.086479 2346 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 9 00:03:44.106097 kubelet[2346]: E0909 00:03:44.106055 2346 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.143:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.143:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:03:44.107063 kubelet[2346]: I0909 00:03:44.107036 2346 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 00:03:44.111717 kubelet[2346]: E0909 00:03:44.111679 2346 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 9 00:03:44.111717 kubelet[2346]: I0909 00:03:44.111716 2346 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 9 00:03:44.118065 kubelet[2346]: I0909 00:03:44.118047 2346 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 00:03:44.118655 kubelet[2346]: I0909 00:03:44.118622 2346 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 9 00:03:44.118830 kubelet[2346]: I0909 00:03:44.118795 2346 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 00:03:44.118992 kubelet[2346]: I0909 00:03:44.118825 2346 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 00:03:44.119130 kubelet[2346]: I0909 00:03:44.119013 2346 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 00:03:44.119130 kubelet[2346]: I0909 00:03:44.119022 2346 container_manager_linux.go:300] "Creating device plugin manager"
Sep 9 00:03:44.119176 kubelet[2346]: I0909 00:03:44.119154 2346 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:03:44.121345 kubelet[2346]: I0909 00:03:44.121111 2346 kubelet.go:408] "Attempting to sync node with API server"
Sep 9 00:03:44.121345 kubelet[2346]: I0909 00:03:44.121139 2346 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 00:03:44.121345 kubelet[2346]: I0909 00:03:44.121176 2346 kubelet.go:314] "Adding apiserver pod source"
Sep 9 00:03:44.121345 kubelet[2346]: I0909 00:03:44.121200 2346 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 00:03:44.124543 kubelet[2346]: I0909 00:03:44.124528 2346 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Sep 9 00:03:44.126371 kubelet[2346]: I0909 00:03:44.126353 2346 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 00:03:44.126916 kubelet[2346]: W0909 00:03:44.126768 2346 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused
Sep 9 00:03:44.126916 kubelet[2346]: E0909 00:03:44.126844 2346 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.143:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:03:44.127024 kubelet[2346]: W0909 00:03:44.127010 2346 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 9 00:03:44.127171 kubelet[2346]: W0909 00:03:44.127121 2346 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.143:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused
Sep 9 00:03:44.127208 kubelet[2346]: E0909 00:03:44.127181 2346 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.143:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.143:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:03:44.129120 kubelet[2346]: I0909 00:03:44.129100 2346 server.go:1274] "Started kubelet"
Sep 9 00:03:44.132546 kubelet[2346]: I0909 00:03:44.132469 2346 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 00:03:44.134551 kubelet[2346]: I0909 00:03:44.132865 2346 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 00:03:44.134551 kubelet[2346]: I0909 00:03:44.132914 2346 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 00:03:44.134551 kubelet[2346]: I0909 00:03:44.133267 2346 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 00:03:44.134551 kubelet[2346]: I0909 00:03:44.133783 2346 server.go:449] "Adding debug handlers to kubelet server"
Sep 9 00:03:44.135487 kubelet[2346]: E0909 00:03:44.133414 2346 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.143:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.143:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863745d67ee420c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:03:44.129073676 +0000 UTC m=+0.225140065,LastTimestamp:2025-09-09 00:03:44.129073676 +0000 UTC m=+0.225140065,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 9 00:03:44.135487 kubelet[2346]: I0909 00:03:44.134979 2346 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 00:03:44.135487 kubelet[2346]: E0909 00:03:44.135142 2346 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 00:03:44.135487 kubelet[2346]: I0909 00:03:44.135461 2346 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 9 00:03:44.135623 kubelet[2346]: I0909 00:03:44.135588 2346 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 9 00:03:44.135702 kubelet[2346]: I0909 00:03:44.135677 2346 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 00:03:44.136050 kubelet[2346]: W0909 00:03:44.136004 2346 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused
Sep 9 00:03:44.136091 kubelet[2346]: E0909 00:03:44.136060 2346 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.143:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:03:44.136197 kubelet[2346]: I0909 00:03:44.136182 2346 factory.go:221] Registration of the systemd container factory successfully
Sep 9 00:03:44.136307 kubelet[2346]: E0909 00:03:44.136281 2346 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:03:44.136371 kubelet[2346]: I0909 00:03:44.136355 2346 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 00:03:44.136771 kubelet[2346]: E0909 00:03:44.136357 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.143:6443: connect: connection refused" interval="200ms"
Sep 9 00:03:44.137705 kubelet[2346]: I0909 00:03:44.137686 2346 factory.go:221] Registration of the containerd container factory successfully
Sep 9 00:03:44.152089 kubelet[2346]: I0909 00:03:44.151986 2346 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 9 00:03:44.152089 kubelet[2346]: I0909 00:03:44.152005 2346 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 9 00:03:44.152089 kubelet[2346]: I0909 00:03:44.152029 2346 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:03:44.153309 kubelet[2346]: I0909 00:03:44.153272 2346 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 9 00:03:44.154676 kubelet[2346]: I0909 00:03:44.154653 2346 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 9 00:03:44.154726 kubelet[2346]: I0909 00:03:44.154679 2346 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 9 00:03:44.154726 kubelet[2346]: I0909 00:03:44.154703 2346 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 9 00:03:44.154773 kubelet[2346]: E0909 00:03:44.154741 2346 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 00:03:44.155573 kubelet[2346]: W0909 00:03:44.155530 2346 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused
Sep 9 00:03:44.155956 kubelet[2346]: E0909 00:03:44.155924 2346 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.143:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:03:44.237177 kubelet[2346]: E0909 00:03:44.237145 2346 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:03:44.255325 kubelet[2346]: E0909 00:03:44.255296 2346 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 9 00:03:44.337770 kubelet[2346]: E0909 00:03:44.337720 2346 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:03:44.338112 kubelet[2346]: E0909 00:03:44.338073 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.143:6443: connect: connection refused" interval="400ms"
Sep 9 00:03:44.438483 kubelet[2346]: E0909 00:03:44.438392 2346 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:03:44.455610 kubelet[2346]: E0909 00:03:44.455579 2346 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 9 00:03:44.539134 kubelet[2346]: E0909 00:03:44.539083 2346 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:03:44.640219 kubelet[2346]: E0909 00:03:44.640164 2346 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:03:44.738951 kubelet[2346]: E0909 00:03:44.738910 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.143:6443: connect: connection refused" interval="800ms"
Sep 9 00:03:44.741089 kubelet[2346]: E0909 00:03:44.741064 2346 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:03:44.841589 kubelet[2346]: E0909 00:03:44.841527 2346 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:03:44.856711 kubelet[2346]: E0909 00:03:44.856680 2346 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 9 00:03:44.942238 kubelet[2346]: E0909 00:03:44.942204 2346 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:03:45.042896 kubelet[2346]: E0909 00:03:45.042764 2346 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:03:45.143526 kubelet[2346]: E0909 00:03:45.143464 2346 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:03:45.244013 kubelet[2346]: E0909 00:03:45.243941 2346 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:03:45.344608 kubelet[2346]: E0909 00:03:45.344472 2346 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:03:45.425063 kubelet[2346]: W0909 00:03:45.425009 2346 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused
Sep 9 00:03:45.425063 kubelet[2346]: E0909 00:03:45.425055 2346 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.143:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:03:45.435239 kubelet[2346]: I0909 00:03:45.435199 2346 policy_none.go:49] "None policy: Start"
Sep 9 00:03:45.436082 kubelet[2346]: I0909 00:03:45.436046 2346 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 9 00:03:45.436082 kubelet[2346]: I0909 00:03:45.436077 2346 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 00:03:45.445605 kubelet[2346]: E0909 00:03:45.445571 2346 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:03:45.485020 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 9 00:03:45.498957 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 9 00:03:45.502130 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 9 00:03:45.514701 kubelet[2346]: I0909 00:03:45.514669 2346 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 00:03:45.514968 kubelet[2346]: I0909 00:03:45.514938 2346 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 00:03:45.515020 kubelet[2346]: I0909 00:03:45.514956 2346 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 00:03:45.515480 kubelet[2346]: I0909 00:03:45.515215 2346 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 00:03:45.516814 kubelet[2346]: E0909 00:03:45.516773 2346 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 9 00:03:45.540301 kubelet[2346]: E0909 00:03:45.540235 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.143:6443: connect: connection refused" interval="1.6s"
Sep 9 00:03:45.604487 kubelet[2346]: W0909 00:03:45.604313 2346 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused
Sep 9 00:03:45.604487 kubelet[2346]: E0909 00:03:45.604364 2346 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.143:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:03:45.617157 kubelet[2346]: I0909 00:03:45.617103 2346 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 9 00:03:45.617515 kubelet[2346]: E0909 00:03:45.617469 2346 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.143:6443/api/v1/nodes\": dial tcp 10.0.0.143:6443: connect: connection refused" node="localhost"
Sep 9 00:03:45.667851 systemd[1]: Created slice kubepods-burstable-poda7869c9f1549ddbee8a4185f848ae387.slice - libcontainer container kubepods-burstable-poda7869c9f1549ddbee8a4185f848ae387.slice.
Sep 9 00:03:45.682027 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice.
Sep 9 00:03:45.683697 kubelet[2346]: W0909 00:03:45.683634 2346 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.143:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused
Sep 9 00:03:45.683786 kubelet[2346]: E0909 00:03:45.683707 2346 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.143:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.143:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:03:45.686571 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice.
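The "Attempting to register node" / "Unable to register node with API server" pair above is the kubelet POSTing a Node object and failing because kube-apiserver is not up yet. A minimal client-go sketch of the same call; the kubeconfig path is an assumption (kubeadm's usual /etc/kubernetes/kubelet.conf), not something shown in the log:

    package main

    import (
    	"context"
    	"log"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed kubeconfig location; the log only shows the server 10.0.0.143:6443.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	node := &v1.Node{ObjectMeta: metav1.ObjectMeta{Name: "localhost"}}
    	if _, err := cs.CoreV1().Nodes().Create(context.Background(), node, metav1.CreateOptions{}); err != nil {
    		// While the API server is down this fails exactly like the log line:
    		// dial tcp 10.0.0.143:6443: connect: connection refused
    		log.Printf("register failed (will retry): %v", err)
    	}
    }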
Sep 9 00:03:45.713564 kubelet[2346]: W0909 00:03:45.713485 2346 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused
Sep 9 00:03:45.713692 kubelet[2346]: E0909 00:03:45.713567 2346 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.143:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:03:45.746299 kubelet[2346]: I0909 00:03:45.746250 2346 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:03:45.746299 kubelet[2346]: I0909 00:03:45.746298 2346 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:03:45.746426 kubelet[2346]: I0909 00:03:45.746329 2346 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a7869c9f1549ddbee8a4185f848ae387-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a7869c9f1549ddbee8a4185f848ae387\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:03:45.746426 kubelet[2346]: I0909 00:03:45.746354 2346 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a7869c9f1549ddbee8a4185f848ae387-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a7869c9f1549ddbee8a4185f848ae387\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:03:45.746426 kubelet[2346]: I0909 00:03:45.746390 2346 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:03:45.746426 kubelet[2346]: I0909 00:03:45.746414 2346 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:03:45.746552 kubelet[2346]: I0909 00:03:45.746436 2346 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost"
Sep 9 00:03:45.746552 kubelet[2346]: I0909 00:03:45.746457 2346 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a7869c9f1549ddbee8a4185f848ae387-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a7869c9f1549ddbee8a4185f848ae387\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:03:45.746552 kubelet[2346]: I0909 00:03:45.746479 2346 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:03:45.819565 kubelet[2346]: I0909 00:03:45.819543 2346 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 9 00:03:45.819878 kubelet[2346]: E0909 00:03:45.819854 2346 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.143:6443/api/v1/nodes\": dial tcp 10.0.0.143:6443: connect: connection refused" node="localhost"
Sep 9 00:03:45.979979 kubelet[2346]: E0909 00:03:45.979943 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:03:45.980753 containerd[1508]: time="2025-09-09T00:03:45.980712435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a7869c9f1549ddbee8a4185f848ae387,Namespace:kube-system,Attempt:0,}"
Sep 9 00:03:45.984851 kubelet[2346]: E0909 00:03:45.984830 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:03:45.985177 containerd[1508]: time="2025-09-09T00:03:45.985149429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}"
Sep 9 00:03:45.989394 kubelet[2346]: E0909 00:03:45.989374 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:03:45.989746 containerd[1508]: time="2025-09-09T00:03:45.989625307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}"
Sep 9 00:03:46.222210 kubelet[2346]: I0909 00:03:46.222158 2346 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 9 00:03:46.222670 kubelet[2346]: E0909 00:03:46.222613 2346 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.143:6443/api/v1/nodes\": dial tcp 10.0.0.143:6443: connect: connection refused" node="localhost"
Sep 9 00:03:46.262843 kubelet[2346]: E0909 00:03:46.262704 2346 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.143:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.143:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:03:47.024010 kubelet[2346]: I0909 00:03:47.023967 2346 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 9 00:03:47.024429 kubelet[2346]: E0909 00:03:47.024323 2346 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.143:6443/api/v1/nodes\": dial tcp 10.0.0.143:6443: connect: connection refused" node="localhost"
Sep 9 00:03:47.141154 kubelet[2346]: E0909 00:03:47.141109 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.143:6443: connect: connection refused" interval="3.2s"
Sep 9 00:03:47.789330 kubelet[2346]: E0909 00:03:47.789175 2346 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.143:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.143:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863745d67ee420c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:03:44.129073676 +0000 UTC m=+0.225140065,LastTimestamp:2025-09-09 00:03:44.129073676 +0000 UTC m=+0.225140065,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 9 00:03:47.914217 kubelet[2346]: W0909 00:03:47.914151 2346 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused
Sep 9 00:03:47.914217 kubelet[2346]: E0909 00:03:47.914215 2346 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.143:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:03:48.184901 kubelet[2346]: W0909 00:03:48.184712 2346 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused
Sep 9 00:03:48.184901 kubelet[2346]: E0909 00:03:48.184789 2346 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.143:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:03:48.324984 kubelet[2346]: W0909 00:03:48.324889 2346 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused
Sep 9 00:03:48.324984 kubelet[2346]: E0909 00:03:48.324976 2346 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.143:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:03:48.626070 kubelet[2346]: I0909 00:03:48.626015 2346 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 9 00:03:48.626600 kubelet[2346]: E0909 00:03:48.626543 2346 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.143:6443/api/v1/nodes\": dial tcp 10.0.0.143:6443: connect: connection refused" node="localhost"
Sep 9 00:03:48.848287 kubelet[2346]: W0909 00:03:48.848215 2346 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.143:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused
Sep 9 00:03:48.848405 kubelet[2346]: E0909 00:03:48.848299 2346 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.143:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.143:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:03:49.027717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1857160364.mount: Deactivated successfully.
Sep 9 00:03:49.283207 containerd[1508]: time="2025-09-09T00:03:49.283042865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 00:03:49.500235 containerd[1508]: time="2025-09-09T00:03:49.500116342Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Sep 9 00:03:49.617958 containerd[1508]: time="2025-09-09T00:03:49.617824441Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 00:03:49.663176 containerd[1508]: time="2025-09-09T00:03:49.663138408Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 00:03:49.723179 containerd[1508]: time="2025-09-09T00:03:49.723137628Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 00:03:49.763864 containerd[1508]: time="2025-09-09T00:03:49.763762808Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 9 00:03:49.804843 containerd[1508]: time="2025-09-09T00:03:49.804796669Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 9 00:03:49.838780 containerd[1508]: time="2025-09-09T00:03:49.838751254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 00:03:49.841558 containerd[1508]: time="2025-09-09T00:03:49.841528138Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.851826679s"
Sep 9 00:03:49.842360 containerd[1508]: time="2025-09-09T00:03:49.842335724Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.86150631s"
Sep 9 00:03:49.930991 containerd[1508]: time="2025-09-09T00:03:49.930902697Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.945676284s"
Sep 9 00:03:50.342049 kubelet[2346]: E0909 00:03:50.341989 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.143:6443: connect: connection refused" interval="6.4s"
Sep 9 00:03:50.607887 kubelet[2346]: E0909 00:03:50.607770 2346 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.143:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.143:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:03:51.179676 containerd[1508]: time="2025-09-09T00:03:51.179539139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:03:51.179676 containerd[1508]: time="2025-09-09T00:03:51.179602278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:03:51.179676 containerd[1508]: time="2025-09-09T00:03:51.179621103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:03:51.180194 containerd[1508]: time="2025-09-09T00:03:51.179795311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:03:51.203812 systemd[1]: Started cri-containerd-7735bec1f53a2b22b9cb16856d14dec4dc65786a4bd588a3f1afd5a4f7ccc420.scope - libcontainer container 7735bec1f53a2b22b9cb16856d14dec4dc65786a4bd588a3f1afd5a4f7ccc420.
Sep 9 00:03:51.240345 containerd[1508]: time="2025-09-09T00:03:51.239999889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"7735bec1f53a2b22b9cb16856d14dec4dc65786a4bd588a3f1afd5a4f7ccc420\""
Sep 9 00:03:51.241127 kubelet[2346]: E0909 00:03:51.241092 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:03:51.242867 containerd[1508]: time="2025-09-09T00:03:51.242840582Z" level=info msg="CreateContainer within sandbox \"7735bec1f53a2b22b9cb16856d14dec4dc65786a4bd588a3f1afd5a4f7ccc420\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 9 00:03:51.523573 containerd[1508]: time="2025-09-09T00:03:51.522638878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:03:51.523573 containerd[1508]: time="2025-09-09T00:03:51.523420756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:03:51.523573 containerd[1508]: time="2025-09-09T00:03:51.523437427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:03:51.523804 containerd[1508]: time="2025-09-09T00:03:51.523534920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:03:51.541790 systemd[1]: Started cri-containerd-ca1c581930998abd413aa048d292fae9f7f1a9c28b95593a414e260b4c1b40d7.scope - libcontainer container ca1c581930998abd413aa048d292fae9f7f1a9c28b95593a414e260b4c1b40d7.
Sep 9 00:03:51.542627 containerd[1508]: time="2025-09-09T00:03:51.541968517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:03:51.542627 containerd[1508]: time="2025-09-09T00:03:51.542022138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:03:51.542627 containerd[1508]: time="2025-09-09T00:03:51.542031976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:03:51.542627 containerd[1508]: time="2025-09-09T00:03:51.542110033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:03:51.570877 systemd[1]: Started cri-containerd-691cf9d39c45106aa39e403e606f2d81755ee93ee28f5074b10a8470e41d5971.scope - libcontainer container 691cf9d39c45106aa39e403e606f2d81755ee93ee28f5074b10a8470e41d5971.
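RunPodSandbox and CreateContainer above are CRI (Container Runtime Interface) RPCs the kubelet sends to containerd over its gRPC socket. A hedged sketch of issuing the same RunPodSandbox call directly with the published CRI Go API (k8s.io/cri-api); the kubelet fills the sandbox config from the static pod manifest, so the fields here are trimmed placeholders:

    package main

    import (
    	"context"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Same endpoint as the deprecated --container-runtime-endpoint flag above.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()
    	rt := runtimeapi.NewRuntimeServiceClient(conn)

    	// Mirrors the &PodSandboxMetadata{...} printed in the log entry above.
    	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
    		Config: &runtimeapi.PodSandboxConfig{
    			Metadata: &runtimeapi.PodSandboxMetadata{
    				Name:      "kube-scheduler-localhost",
    				Namespace: "kube-system",
    				Uid:       "5dc878868de11c6196259ae42039f4ff",
    			},
    		},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("sandbox id: %s", resp.PodSandboxId)
    }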
Sep 9 00:03:51.583666 containerd[1508]: time="2025-09-09T00:03:51.583615455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a7869c9f1549ddbee8a4185f848ae387,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca1c581930998abd413aa048d292fae9f7f1a9c28b95593a414e260b4c1b40d7\""
Sep 9 00:03:51.584520 kubelet[2346]: E0909 00:03:51.584480 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:03:51.586133 containerd[1508]: time="2025-09-09T00:03:51.586107424Z" level=info msg="CreateContainer within sandbox \"ca1c581930998abd413aa048d292fae9f7f1a9c28b95593a414e260b4c1b40d7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 9 00:03:51.608616 containerd[1508]: time="2025-09-09T00:03:51.608556280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"691cf9d39c45106aa39e403e606f2d81755ee93ee28f5074b10a8470e41d5971\""
Sep 9 00:03:51.609213 kubelet[2346]: E0909 00:03:51.609168 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:03:51.610583 containerd[1508]: time="2025-09-09T00:03:51.610537129Z" level=info msg="CreateContainer within sandbox \"691cf9d39c45106aa39e403e606f2d81755ee93ee28f5074b10a8470e41d5971\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 9 00:03:51.828604 kubelet[2346]: I0909 00:03:51.828478 2346 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 9 00:03:51.828995 kubelet[2346]: E0909 00:03:51.828953 2346 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.143:6443/api/v1/nodes\": dial tcp 10.0.0.143:6443: connect: connection refused" node="localhost"
Sep 9 00:03:51.925539 containerd[1508]: time="2025-09-09T00:03:51.925482580Z" level=info msg="CreateContainer within sandbox \"7735bec1f53a2b22b9cb16856d14dec4dc65786a4bd588a3f1afd5a4f7ccc420\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6a52c78b1824c6155a0a80ed726dd71149290d50cff57db055415c22bd24326d\""
Sep 9 00:03:51.926222 containerd[1508]: time="2025-09-09T00:03:51.926172765Z" level=info msg="StartContainer for \"6a52c78b1824c6155a0a80ed726dd71149290d50cff57db055415c22bd24326d\""
Sep 9 00:03:51.959802 systemd[1]: Started cri-containerd-6a52c78b1824c6155a0a80ed726dd71149290d50cff57db055415c22bd24326d.scope - libcontainer container 6a52c78b1824c6155a0a80ed726dd71149290d50cff57db055415c22bd24326d.
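The recurring "Nameserver limits exceeded" errors come from the kubelet's pod DNS handling: the glibc resolver honors at most three nameservers, so the kubelet applies only the first three from resolv.conf and warns about the rest. A simplified reading of that rule, assuming the standard resolv.conf layout:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	f, err := os.Open("/etc/resolv.conf")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	defer f.Close()

    	var servers []string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}

    	const maxNS = 3 // glibc MAXNS; the limit behind the dns.go:153 warning
    	if len(servers) > maxNS {
    		fmt.Printf("omitting %d nameservers, applied nameserver line is: %s\n",
    			len(servers)-maxNS, strings.Join(servers[:maxNS], " "))
    	}
    }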
Sep 9 00:03:52.085070 containerd[1508]: time="2025-09-09T00:03:52.084905922Z" level=info msg="StartContainer for \"6a52c78b1824c6155a0a80ed726dd71149290d50cff57db055415c22bd24326d\" returns successfully"
Sep 9 00:03:52.158348 containerd[1508]: time="2025-09-09T00:03:52.158295655Z" level=info msg="CreateContainer within sandbox \"ca1c581930998abd413aa048d292fae9f7f1a9c28b95593a414e260b4c1b40d7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"50c0e7165f572fae27c7acd3b7bb17b317b98880befdfc00ff98a5d250d1340d\""
Sep 9 00:03:52.158787 containerd[1508]: time="2025-09-09T00:03:52.158749166Z" level=info msg="StartContainer for \"50c0e7165f572fae27c7acd3b7bb17b317b98880befdfc00ff98a5d250d1340d\""
Sep 9 00:03:52.177487 kubelet[2346]: E0909 00:03:52.177451 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:03:52.192821 systemd[1]: Started cri-containerd-50c0e7165f572fae27c7acd3b7bb17b317b98880befdfc00ff98a5d250d1340d.scope - libcontainer container 50c0e7165f572fae27c7acd3b7bb17b317b98880befdfc00ff98a5d250d1340d.
Sep 9 00:03:52.231158 containerd[1508]: time="2025-09-09T00:03:52.231113353Z" level=info msg="CreateContainer within sandbox \"691cf9d39c45106aa39e403e606f2d81755ee93ee28f5074b10a8470e41d5971\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0f93488ced4d6901b607572754aa3ca72d804e3ba49a9d8006767b8629d551de\""
Sep 9 00:03:52.232088 containerd[1508]: time="2025-09-09T00:03:52.232044061Z" level=info msg="StartContainer for \"0f93488ced4d6901b607572754aa3ca72d804e3ba49a9d8006767b8629d551de\""
Sep 9 00:03:52.268809 systemd[1]: Started cri-containerd-0f93488ced4d6901b607572754aa3ca72d804e3ba49a9d8006767b8629d551de.scope - libcontainer container 0f93488ced4d6901b607572754aa3ca72d804e3ba49a9d8006767b8629d551de.
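The kubelet lines throughout this section use the klog header format: a severity letter (I/W/E/F), MMDD, wall-clock time, PID, and source file:line, followed by the message. A small illustrative parser for that header, useful when post-processing a capture like this one; the regular expression is an approximation, not klog's own grammar:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // severity, MMDD, time, PID, file:line, message
    var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

    func main() {
    	line := `E0909 00:03:50.341989 2346 controller.go:145] "Failed to ensure lease exists, will retry"`
    	m := klogRe.FindStringSubmatch(line)
    	if m == nil {
    		fmt.Println("no match")
    		return
    	}
    	fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%s\n",
    		m[1], m[2], m[3], m[4], m[5], m[6])
    }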
Sep 9 00:03:52.385489 containerd[1508]: time="2025-09-09T00:03:52.385319062Z" level=info msg="StartContainer for \"50c0e7165f572fae27c7acd3b7bb17b317b98880befdfc00ff98a5d250d1340d\" returns successfully"
Sep 9 00:03:52.385489 containerd[1508]: time="2025-09-09T00:03:52.385409421Z" level=info msg="StartContainer for \"0f93488ced4d6901b607572754aa3ca72d804e3ba49a9d8006767b8629d551de\" returns successfully"
Sep 9 00:03:53.182575 kubelet[2346]: E0909 00:03:53.182520 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:03:53.184851 kubelet[2346]: E0909 00:03:53.184813 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:03:53.185022 kubelet[2346]: E0909 00:03:53.184989 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:03:54.032251 kubelet[2346]: E0909 00:03:54.032200 2346 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Sep 9 00:03:54.127665 kubelet[2346]: I0909 00:03:54.127595 2346 apiserver.go:52] "Watching apiserver"
Sep 9 00:03:54.136443 kubelet[2346]: I0909 00:03:54.136412 2346 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 9 00:03:54.186316 kubelet[2346]: E0909 00:03:54.186266 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:03:54.186816 kubelet[2346]: E0909 00:03:54.186331 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:03:54.787982 kubelet[2346]: E0909 00:03:54.787947 2346 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Sep 9 00:03:55.190578 kubelet[2346]: E0909 00:03:55.190449 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:03:55.190578 kubelet[2346]: E0909 00:03:55.190526 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:03:55.300936 kubelet[2346]: E0909 00:03:55.300895 2346 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Sep 9 00:03:55.517082 kubelet[2346]: E0909 00:03:55.517024 2346 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 9 00:03:56.420512 kubelet[2346]: E0909 00:03:56.420472 2346 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Sep 9 00:03:56.811070 kubelet[2346]: E0909 00:03:56.810974 2346 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 9 00:03:58.230264 kubelet[2346]: I0909 00:03:58.230222 2346 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 9 00:03:58.348535 kubelet[2346]: I0909 00:03:58.348455 2346 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 9 00:03:58.348535 kubelet[2346]: E0909 00:03:58.348510 2346 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 9 00:04:01.274596 kubelet[2346]: E0909 00:04:01.274440 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:04:01.455187 systemd[1]: Reload requested from client PID 2623 ('systemctl') (unit session-9.scope)...
Sep 9 00:04:01.455204 systemd[1]: Reloading...
Sep 9 00:04:01.543678 zram_generator::config[2670]: No configuration found.
Sep 9 00:04:01.657701 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:04:01.780755 systemd[1]: Reloading finished in 325 ms.
Sep 9 00:04:01.816117 kubelet[2346]: I0909 00:04:01.816006 2346 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 00:04:01.816105 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:04:01.835192 systemd[1]: kubelet.service: Deactivated successfully.
Sep 9 00:04:01.835471 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:04:01.835523 systemd[1]: kubelet.service: Consumed 865ms CPU time, 132.4M memory peak.
Sep 9 00:04:01.844119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:04:02.020068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:04:02.025148 (kubelet)[2712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 00:04:02.060105 kubelet[2712]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:04:02.060105 kubelet[2712]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 9 00:04:02.060105 kubelet[2712]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:04:02.060536 kubelet[2712]: I0909 00:04:02.060148 2712 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 00:04:02.067179 kubelet[2712]: I0909 00:04:02.067144 2712 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 9 00:04:02.067179 kubelet[2712]: I0909 00:04:02.067167 2712 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 00:04:02.067372 kubelet[2712]: I0909 00:04:02.067341 2712 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 9 00:04:02.068540 kubelet[2712]: I0909 00:04:02.068517 2712 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 9 00:04:02.070608 kubelet[2712]: I0909 00:04:02.070363 2712 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 00:04:02.074608 kubelet[2712]: E0909 00:04:02.073559 2712 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 9 00:04:02.074608 kubelet[2712]: I0909 00:04:02.073589 2712 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 9 00:04:02.079500 kubelet[2712]: I0909 00:04:02.079465 2712 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 00:04:02.079611 kubelet[2712]: I0909 00:04:02.079591 2712 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 9 00:04:02.079788 kubelet[2712]: I0909 00:04:02.079750 2712 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 00:04:02.079945 kubelet[2712]: I0909 00:04:02.079785 2712 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 00:04:02.080024 kubelet[2712]: I0909 00:04:02.079954 2712 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 00:04:02.080024 kubelet[2712]: I0909 00:04:02.079963 2712 container_manager_linux.go:300] "Creating device plugin manager"
Sep 9 00:04:02.080024 kubelet[2712]: I0909 00:04:02.079989 2712 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:04:02.080101 kubelet[2712]: I0909 00:04:02.080089 2712 kubelet.go:408] "Attempting to sync node with API server"
Sep 9 00:04:02.080101 kubelet[2712]: I0909 00:04:02.080099 2712 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 00:04:02.080172 kubelet[2712]: I0909 00:04:02.080129 2712 kubelet.go:314] "Adding apiserver pod source"
Sep 9 00:04:02.080172 kubelet[2712]: I0909 00:04:02.080141 2712 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 00:04:02.081956 kubelet[2712]: I0909 00:04:02.080827 2712 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Sep 9 00:04:02.081956 kubelet[2712]: I0909 00:04:02.081258 2712 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 00:04:02.081956 kubelet[2712]: I0909 00:04:02.081701 2712 server.go:1274] "Started kubelet"
Sep 9 00:04:02.081956 kubelet[2712]: I0909 00:04:02.081904 2712 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 00:04:02.084022 kubelet[2712]: I0909 00:04:02.083986 2712 server.go:449] "Adding debug handlers to kubelet server"
Sep 9 00:04:02.084152 kubelet[2712]: I0909 00:04:02.084106 2712 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 00:04:02.085910 kubelet[2712]: I0909 00:04:02.085890 2712 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 00:04:02.086044 kubelet[2712]: I0909 00:04:02.084801 2712 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 00:04:02.086863 kubelet[2712]: I0909 00:04:02.084704 2712 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 00:04:02.092049 kubelet[2712]: E0909 00:04:02.092014 2712 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 00:04:02.094476 kubelet[2712]: I0909 00:04:02.094453 2712 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 9 00:04:02.095862 kubelet[2712]: I0909 00:04:02.095834 2712 factory.go:221] Registration of the systemd container factory successfully
Sep 9 00:04:02.096038 kubelet[2712]: I0909 00:04:02.096006 2712 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 00:04:02.096710 kubelet[2712]: I0909 00:04:02.096693 2712 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 9 00:04:02.096951 kubelet[2712]: I0909 00:04:02.096935 2712 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 00:04:02.099025 kubelet[2712]: I0909 00:04:02.098994 2712 factory.go:221] Registration of the containerd container factory successfully
Sep 9 00:04:02.109825 kubelet[2712]: I0909 00:04:02.109764 2712 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 9 00:04:02.111757 kubelet[2712]: I0909 00:04:02.111728 2712 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 9 00:04:02.111832 kubelet[2712]: I0909 00:04:02.111761 2712 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 9 00:04:02.111832 kubelet[2712]: I0909 00:04:02.111783 2712 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 9 00:04:02.111898 kubelet[2712]: E0909 00:04:02.111834 2712 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 00:04:02.135360 kubelet[2712]: I0909 00:04:02.135329 2712 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 9 00:04:02.135360 kubelet[2712]: I0909 00:04:02.135348 2712 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 9 00:04:02.135360 kubelet[2712]: I0909 00:04:02.135367 2712 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:04:02.135534 kubelet[2712]: I0909 00:04:02.135507 2712 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 9 00:04:02.135534 kubelet[2712]: I0909 00:04:02.135517 2712 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 9 00:04:02.135534 kubelet[2712]: I0909 00:04:02.135534 2712 policy_none.go:49] "None policy: Start"
Sep 9 00:04:02.136130 kubelet[2712]: I0909 00:04:02.136092 2712 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 9 00:04:02.136130 kubelet[2712]: I0909 00:04:02.136131 2712 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 00:04:02.136346 kubelet[2712]: I0909 00:04:02.136258 2712 state_mem.go:75] "Updated machine memory state"
Sep 9 00:04:02.140957 kubelet[2712]: I0909 00:04:02.140935 2712 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 00:04:02.141241 kubelet[2712]: I0909 00:04:02.141220 2712 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 00:04:02.141316 kubelet[2712]:
I0909 00:04:02.141236 2712 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:04:02.141445 kubelet[2712]: I0909 00:04:02.141425 2712 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:04:02.251132 kubelet[2712]: I0909 00:04:02.251075 2712 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:04:02.299212 kubelet[2712]: I0909 00:04:02.299031 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:04:02.299212 kubelet[2712]: I0909 00:04:02.299079 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a7869c9f1549ddbee8a4185f848ae387-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a7869c9f1549ddbee8a4185f848ae387\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:04:02.299212 kubelet[2712]: I0909 00:04:02.299103 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:04:02.299212 kubelet[2712]: I0909 00:04:02.299126 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:04:02.299212 kubelet[2712]: I0909 00:04:02.299146 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:04:02.299490 kubelet[2712]: I0909 00:04:02.299169 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a7869c9f1549ddbee8a4185f848ae387-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a7869c9f1549ddbee8a4185f848ae387\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:04:02.299490 kubelet[2712]: I0909 00:04:02.299188 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a7869c9f1549ddbee8a4185f848ae387-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a7869c9f1549ddbee8a4185f848ae387\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:04:02.299490 kubelet[2712]: I0909 00:04:02.299207 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 9 00:04:02.299490 kubelet[2712]: I0909 00:04:02.299225 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:04:02.469990 kubelet[2712]: E0909 00:04:02.469946 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:02.469990 kubelet[2712]: E0909 00:04:02.469978 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:02.502974 kubelet[2712]: E0909 00:04:02.502748 2712 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:04:02.503174 kubelet[2712]: E0909 00:04:02.503019 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:02.504923 kubelet[2712]: I0909 00:04:02.504870 2712 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 9 00:04:02.505005 kubelet[2712]: I0909 00:04:02.504975 2712 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 00:04:03.081143 kubelet[2712]: I0909 00:04:03.081102 2712 apiserver.go:52] "Watching apiserver" Sep 9 00:04:03.097193 kubelet[2712]: I0909 00:04:03.097160 2712 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 00:04:03.125046 kubelet[2712]: E0909 00:04:03.125023 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:03.242763 sudo[2748]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 00:04:03.243300 sudo[2748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 9 00:04:03.376736 kubelet[2712]: I0909 00:04:03.375356 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.3753349510000001 podStartE2EDuration="1.375334951s" podCreationTimestamp="2025-09-09 00:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:04:03.375055156 +0000 UTC m=+1.345434905" watchObservedRunningTime="2025-09-09 00:04:03.375334951 +0000 UTC m=+1.345714700" Sep 9 00:04:03.376736 kubelet[2712]: E0909 00:04:03.376160 2712 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 00:04:03.376736 kubelet[2712]: E0909 00:04:03.376326 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:03.376736 kubelet[2712]: E0909 00:04:03.376357 2712 kubelet.go:1915] "Failed creating a mirror pod for" err="pods 
\"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:04:03.376736 kubelet[2712]: E0909 00:04:03.376534 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:03.688710 kubelet[2712]: I0909 00:04:03.687978 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.687958637 podStartE2EDuration="1.687958637s" podCreationTimestamp="2025-09-09 00:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:04:03.617562844 +0000 UTC m=+1.587942593" watchObservedRunningTime="2025-09-09 00:04:03.687958637 +0000 UTC m=+1.658338386" Sep 9 00:04:03.730163 sudo[2748]: pam_unix(sudo:session): session closed for user root Sep 9 00:04:03.907185 kubelet[2712]: I0909 00:04:03.906904 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.906880602 podStartE2EDuration="2.906880602s" podCreationTimestamp="2025-09-09 00:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:04:03.688172118 +0000 UTC m=+1.658551868" watchObservedRunningTime="2025-09-09 00:04:03.906880602 +0000 UTC m=+1.877260351" Sep 9 00:04:04.126041 kubelet[2712]: E0909 00:04:04.126002 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:04.126041 kubelet[2712]: E0909 00:04:04.126002 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:05.524854 sudo[1709]: pam_unix(sudo:session): session closed for user root Sep 9 00:04:05.526744 sshd[1708]: Connection closed by 10.0.0.1 port 44488 Sep 9 00:04:05.527498 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Sep 9 00:04:05.532088 systemd[1]: sshd@8-10.0.0.143:22-10.0.0.1:44488.service: Deactivated successfully. Sep 9 00:04:05.534426 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 00:04:05.534672 systemd[1]: session-9.scope: Consumed 5.195s CPU time, 251.5M memory peak. Sep 9 00:04:05.535896 systemd-logind[1496]: Session 9 logged out. Waiting for processes to exit. Sep 9 00:04:05.537069 systemd-logind[1496]: Removed session 9. Sep 9 00:04:05.780349 kubelet[2712]: E0909 00:04:05.780202 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:05.918315 kubelet[2712]: I0909 00:04:05.918269 2712 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 00:04:05.918706 containerd[1508]: time="2025-09-09T00:04:05.918637682Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 9 00:04:05.919128 kubelet[2712]: I0909 00:04:05.918869 2712 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 00:04:06.130220 kubelet[2712]: E0909 00:04:06.130084 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:06.441303 kubelet[2712]: E0909 00:04:06.441124 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:06.890050 systemd[1]: Created slice kubepods-burstable-pod2d032543_b3bf_4890_8439_c9581477f52f.slice - libcontainer container kubepods-burstable-pod2d032543_b3bf_4890_8439_c9581477f52f.slice. Sep 9 00:04:06.898481 systemd[1]: Created slice kubepods-besteffort-podf63b9c97_80ee_4f2a_8549_8dee0be2e1de.slice - libcontainer container kubepods-besteffort-podf63b9c97_80ee_4f2a_8549_8dee0be2e1de.slice. Sep 9 00:04:06.922608 kubelet[2712]: I0909 00:04:06.922529 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwg8d\" (UniqueName: \"kubernetes.io/projected/f63b9c97-80ee-4f2a-8549-8dee0be2e1de-kube-api-access-qwg8d\") pod \"kube-proxy-8k5l9\" (UID: \"f63b9c97-80ee-4f2a-8549-8dee0be2e1de\") " pod="kube-system/kube-proxy-8k5l9" Sep 9 00:04:06.922608 kubelet[2712]: I0909 00:04:06.922594 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-host-proc-sys-net\") pod \"cilium-4nwfd\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " pod="kube-system/cilium-4nwfd" Sep 9 00:04:06.922608 kubelet[2712]: I0909 00:04:06.922616 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2d032543-b3bf-4890-8439-c9581477f52f-cilium-config-path\") pod \"cilium-4nwfd\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " pod="kube-system/cilium-4nwfd" Sep 9 00:04:06.923176 kubelet[2712]: I0909 00:04:06.922630 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2d032543-b3bf-4890-8439-c9581477f52f-hubble-tls\") pod \"cilium-4nwfd\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " pod="kube-system/cilium-4nwfd" Sep 9 00:04:06.923176 kubelet[2712]: I0909 00:04:06.922665 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-xtables-lock\") pod \"cilium-4nwfd\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " pod="kube-system/cilium-4nwfd" Sep 9 00:04:06.923176 kubelet[2712]: I0909 00:04:06.922707 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-cni-path\") pod \"cilium-4nwfd\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " pod="kube-system/cilium-4nwfd" Sep 9 00:04:06.923176 kubelet[2712]: I0909 00:04:06.922723 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjfjs\" (UniqueName: 
\"kubernetes.io/projected/2d032543-b3bf-4890-8439-c9581477f52f-kube-api-access-fjfjs\") pod \"cilium-4nwfd\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " pod="kube-system/cilium-4nwfd" Sep 9 00:04:06.923176 kubelet[2712]: I0909 00:04:06.922741 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f63b9c97-80ee-4f2a-8549-8dee0be2e1de-xtables-lock\") pod \"kube-proxy-8k5l9\" (UID: \"f63b9c97-80ee-4f2a-8549-8dee0be2e1de\") " pod="kube-system/kube-proxy-8k5l9" Sep 9 00:04:06.923176 kubelet[2712]: I0909 00:04:06.922754 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-bpf-maps\") pod \"cilium-4nwfd\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " pod="kube-system/cilium-4nwfd" Sep 9 00:04:06.923470 kubelet[2712]: I0909 00:04:06.922768 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-cilium-cgroup\") pod \"cilium-4nwfd\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " pod="kube-system/cilium-4nwfd" Sep 9 00:04:06.923470 kubelet[2712]: I0909 00:04:06.922790 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f63b9c97-80ee-4f2a-8549-8dee0be2e1de-lib-modules\") pod \"kube-proxy-8k5l9\" (UID: \"f63b9c97-80ee-4f2a-8549-8dee0be2e1de\") " pod="kube-system/kube-proxy-8k5l9" Sep 9 00:04:06.923470 kubelet[2712]: I0909 00:04:06.922805 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-cilium-run\") pod \"cilium-4nwfd\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " pod="kube-system/cilium-4nwfd" Sep 9 00:04:06.923470 kubelet[2712]: I0909 00:04:06.922819 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-hostproc\") pod \"cilium-4nwfd\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " pod="kube-system/cilium-4nwfd" Sep 9 00:04:06.923470 kubelet[2712]: I0909 00:04:06.922832 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-etc-cni-netd\") pod \"cilium-4nwfd\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " pod="kube-system/cilium-4nwfd" Sep 9 00:04:06.923470 kubelet[2712]: I0909 00:04:06.922864 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2d032543-b3bf-4890-8439-c9581477f52f-clustermesh-secrets\") pod \"cilium-4nwfd\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " pod="kube-system/cilium-4nwfd" Sep 9 00:04:06.923622 kubelet[2712]: I0909 00:04:06.922887 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-host-proc-sys-kernel\") pod \"cilium-4nwfd\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " pod="kube-system/cilium-4nwfd" Sep 9 
00:04:06.923622 kubelet[2712]: I0909 00:04:06.922902 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f63b9c97-80ee-4f2a-8549-8dee0be2e1de-kube-proxy\") pod \"kube-proxy-8k5l9\" (UID: \"f63b9c97-80ee-4f2a-8549-8dee0be2e1de\") " pod="kube-system/kube-proxy-8k5l9" Sep 9 00:04:06.923622 kubelet[2712]: I0909 00:04:06.922916 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-lib-modules\") pod \"cilium-4nwfd\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " pod="kube-system/cilium-4nwfd" Sep 9 00:04:07.080122 systemd[1]: Created slice kubepods-besteffort-podff4f784b_22ec_4a63_97ec_f1e96a529319.slice - libcontainer container kubepods-besteffort-podff4f784b_22ec_4a63_97ec_f1e96a529319.slice. Sep 9 00:04:07.125219 kubelet[2712]: I0909 00:04:07.125163 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k92cz\" (UniqueName: \"kubernetes.io/projected/ff4f784b-22ec-4a63-97ec-f1e96a529319-kube-api-access-k92cz\") pod \"cilium-operator-5d85765b45-gk7zf\" (UID: \"ff4f784b-22ec-4a63-97ec-f1e96a529319\") " pod="kube-system/cilium-operator-5d85765b45-gk7zf" Sep 9 00:04:07.125219 kubelet[2712]: I0909 00:04:07.125209 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ff4f784b-22ec-4a63-97ec-f1e96a529319-cilium-config-path\") pod \"cilium-operator-5d85765b45-gk7zf\" (UID: \"ff4f784b-22ec-4a63-97ec-f1e96a529319\") " pod="kube-system/cilium-operator-5d85765b45-gk7zf" Sep 9 00:04:07.130834 kubelet[2712]: E0909 00:04:07.130803 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:07.168082 kubelet[2712]: E0909 00:04:07.167954 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:07.193949 kubelet[2712]: E0909 00:04:07.193913 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:07.194486 containerd[1508]: time="2025-09-09T00:04:07.194436449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4nwfd,Uid:2d032543-b3bf-4890-8439-c9581477f52f,Namespace:kube-system,Attempt:0,}" Sep 9 00:04:07.205765 kubelet[2712]: E0909 00:04:07.205731 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:07.206433 containerd[1508]: time="2025-09-09T00:04:07.206183007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8k5l9,Uid:f63b9c97-80ee-4f2a-8549-8dee0be2e1de,Namespace:kube-system,Attempt:0,}" Sep 9 00:04:07.390858 kubelet[2712]: E0909 00:04:07.390823 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:07.391405 containerd[1508]: time="2025-09-09T00:04:07.391327283Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-5d85765b45-gk7zf,Uid:ff4f784b-22ec-4a63-97ec-f1e96a529319,Namespace:kube-system,Attempt:0,}" Sep 9 00:04:07.676487 containerd[1508]: time="2025-09-09T00:04:07.676326056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:04:07.677332 containerd[1508]: time="2025-09-09T00:04:07.677144822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:04:07.677332 containerd[1508]: time="2025-09-09T00:04:07.677172554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:04:07.677332 containerd[1508]: time="2025-09-09T00:04:07.677270878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:04:07.705113 systemd[1]: Started cri-containerd-d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71.scope - libcontainer container d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71. Sep 9 00:04:07.707069 containerd[1508]: time="2025-09-09T00:04:07.706955590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:04:07.709594 containerd[1508]: time="2025-09-09T00:04:07.707052592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:04:07.709594 containerd[1508]: time="2025-09-09T00:04:07.708006582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:04:07.709594 containerd[1508]: time="2025-09-09T00:04:07.708096791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:04:07.729858 systemd[1]: Started cri-containerd-854c1d08d14b6a655b9410468e62f68e521a2397b2f46dd38920551f92e1fb6e.scope - libcontainer container 854c1d08d14b6a655b9410468e62f68e521a2397b2f46dd38920551f92e1fb6e. Sep 9 00:04:07.738269 containerd[1508]: time="2025-09-09T00:04:07.737268098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:04:07.738269 containerd[1508]: time="2025-09-09T00:04:07.738083779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:04:07.738269 containerd[1508]: time="2025-09-09T00:04:07.738098577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:04:07.738549 containerd[1508]: time="2025-09-09T00:04:07.738280688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:04:07.738663 containerd[1508]: time="2025-09-09T00:04:07.738586262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4nwfd,Uid:2d032543-b3bf-4890-8439-c9581477f52f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71\"" Sep 9 00:04:07.739667 kubelet[2712]: E0909 00:04:07.739621 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:07.744234 containerd[1508]: time="2025-09-09T00:04:07.744115124Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 00:04:07.762851 systemd[1]: Started cri-containerd-a868bf221ee8a43195461d3d30985e586db00bf1a66edb57930fd21837120a4a.scope - libcontainer container a868bf221ee8a43195461d3d30985e586db00bf1a66edb57930fd21837120a4a. Sep 9 00:04:07.774891 containerd[1508]: time="2025-09-09T00:04:07.774848253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8k5l9,Uid:f63b9c97-80ee-4f2a-8549-8dee0be2e1de,Namespace:kube-system,Attempt:0,} returns sandbox id \"854c1d08d14b6a655b9410468e62f68e521a2397b2f46dd38920551f92e1fb6e\"" Sep 9 00:04:07.776136 kubelet[2712]: E0909 00:04:07.776104 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:07.780333 containerd[1508]: time="2025-09-09T00:04:07.780292637Z" level=info msg="CreateContainer within sandbox \"854c1d08d14b6a655b9410468e62f68e521a2397b2f46dd38920551f92e1fb6e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:04:07.806630 containerd[1508]: time="2025-09-09T00:04:07.806573372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-gk7zf,Uid:ff4f784b-22ec-4a63-97ec-f1e96a529319,Namespace:kube-system,Attempt:0,} returns sandbox id \"a868bf221ee8a43195461d3d30985e586db00bf1a66edb57930fd21837120a4a\"" Sep 9 00:04:07.807268 kubelet[2712]: E0909 00:04:07.807226 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:07.985248 containerd[1508]: time="2025-09-09T00:04:07.985198257Z" level=info msg="CreateContainer within sandbox \"854c1d08d14b6a655b9410468e62f68e521a2397b2f46dd38920551f92e1fb6e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dcdacc2682d39b4ac03a167368716997a86192b7644590917130bdfca5a98ad5\"" Sep 9 00:04:07.985847 containerd[1508]: time="2025-09-09T00:04:07.985805547Z" level=info msg="StartContainer for \"dcdacc2682d39b4ac03a167368716997a86192b7644590917130bdfca5a98ad5\"" Sep 9 00:04:08.015803 systemd[1]: Started cri-containerd-dcdacc2682d39b4ac03a167368716997a86192b7644590917130bdfca5a98ad5.scope - libcontainer container dcdacc2682d39b4ac03a167368716997a86192b7644590917130bdfca5a98ad5. 
Sep 9 00:04:08.094056 containerd[1508]: time="2025-09-09T00:04:08.093249065Z" level=info msg="StartContainer for \"dcdacc2682d39b4ac03a167368716997a86192b7644590917130bdfca5a98ad5\" returns successfully" Sep 9 00:04:08.141436 kubelet[2712]: E0909 00:04:08.141382 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:08.148051 kubelet[2712]: E0909 00:04:08.148023 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:08.317328 kubelet[2712]: I0909 00:04:08.316902 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8k5l9" podStartSLOduration=2.31688186 podStartE2EDuration="2.31688186s" podCreationTimestamp="2025-09-09 00:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:04:08.31649755 +0000 UTC m=+6.286877309" watchObservedRunningTime="2025-09-09 00:04:08.31688186 +0000 UTC m=+6.287261619" Sep 9 00:04:09.148869 kubelet[2712]: E0909 00:04:09.148833 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:18.522918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1883867867.mount: Deactivated successfully. Sep 9 00:04:24.524446 containerd[1508]: time="2025-09-09T00:04:24.524388290Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:04:24.543525 containerd[1508]: time="2025-09-09T00:04:24.543447918Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 9 00:04:24.565700 containerd[1508]: time="2025-09-09T00:04:24.565634618Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:04:24.567415 containerd[1508]: time="2025-09-09T00:04:24.567374258Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 16.823186927s" Sep 9 00:04:24.567415 containerd[1508]: time="2025-09-09T00:04:24.567410857Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 9 00:04:24.568608 containerd[1508]: time="2025-09-09T00:04:24.568569100Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 00:04:24.570175 containerd[1508]: time="2025-09-09T00:04:24.570133974Z" level=info msg="CreateContainer within sandbox \"d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71\" 
for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 00:04:25.665279 containerd[1508]: time="2025-09-09T00:04:25.665215664Z" level=info msg="CreateContainer within sandbox \"d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"904d44f14162bdad0d3212d1048d6c4b08f001b9f773cff4f7ca04895c25bddd\"" Sep 9 00:04:25.665748 containerd[1508]: time="2025-09-09T00:04:25.665539334Z" level=info msg="StartContainer for \"904d44f14162bdad0d3212d1048d6c4b08f001b9f773cff4f7ca04895c25bddd\"" Sep 9 00:04:25.697790 systemd[1]: Started cri-containerd-904d44f14162bdad0d3212d1048d6c4b08f001b9f773cff4f7ca04895c25bddd.scope - libcontainer container 904d44f14162bdad0d3212d1048d6c4b08f001b9f773cff4f7ca04895c25bddd. Sep 9 00:04:25.748330 systemd[1]: cri-containerd-904d44f14162bdad0d3212d1048d6c4b08f001b9f773cff4f7ca04895c25bddd.scope: Deactivated successfully. Sep 9 00:04:27.463742 containerd[1508]: time="2025-09-09T00:04:27.463686193Z" level=info msg="StartContainer for \"904d44f14162bdad0d3212d1048d6c4b08f001b9f773cff4f7ca04895c25bddd\" returns successfully" Sep 9 00:04:27.483902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-904d44f14162bdad0d3212d1048d6c4b08f001b9f773cff4f7ca04895c25bddd-rootfs.mount: Deactivated successfully. Sep 9 00:04:27.564130 kubelet[2712]: E0909 00:04:27.564033 2712 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.352s" Sep 9 00:04:27.859478 containerd[1508]: time="2025-09-09T00:04:27.859394220Z" level=info msg="shim disconnected" id=904d44f14162bdad0d3212d1048d6c4b08f001b9f773cff4f7ca04895c25bddd namespace=k8s.io Sep 9 00:04:27.859478 containerd[1508]: time="2025-09-09T00:04:27.859471107Z" level=warning msg="cleaning up after shim disconnected" id=904d44f14162bdad0d3212d1048d6c4b08f001b9f773cff4f7ca04895c25bddd namespace=k8s.io Sep 9 00:04:27.859478 containerd[1508]: time="2025-09-09T00:04:27.859487489Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:04:28.468424 kubelet[2712]: E0909 00:04:28.468387 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:28.470033 containerd[1508]: time="2025-09-09T00:04:28.469986886Z" level=info msg="CreateContainer within sandbox \"d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 00:04:29.212064 systemd[1]: Started sshd@9-10.0.0.143:22-10.0.0.1:52788.service - OpenSSH per-connection server daemon (10.0.0.1:52788). Sep 9 00:04:29.360446 sshd[3176]: Accepted publickey for core from 10.0.0.1 port 52788 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:04:29.362155 sshd-session[3176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:04:29.376740 systemd-logind[1496]: New session 10 of user core. Sep 9 00:04:29.388790 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 9 00:04:29.485258 containerd[1508]: time="2025-09-09T00:04:29.485214614Z" level=info msg="CreateContainer within sandbox \"d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cf52ab59f72b827776749d3c9965459a97fc2a9a398104224b2494f59ccac07e\"" Sep 9 00:04:29.487125 containerd[1508]: time="2025-09-09T00:04:29.485779866Z" level=info msg="StartContainer for \"cf52ab59f72b827776749d3c9965459a97fc2a9a398104224b2494f59ccac07e\"" Sep 9 00:04:29.517018 systemd[1]: Started cri-containerd-cf52ab59f72b827776749d3c9965459a97fc2a9a398104224b2494f59ccac07e.scope - libcontainer container cf52ab59f72b827776749d3c9965459a97fc2a9a398104224b2494f59ccac07e. Sep 9 00:04:29.541368 sshd[3178]: Connection closed by 10.0.0.1 port 52788 Sep 9 00:04:29.542615 sshd-session[3176]: pam_unix(sshd:session): session closed for user core Sep 9 00:04:29.547590 systemd[1]: sshd@9-10.0.0.143:22-10.0.0.1:52788.service: Deactivated successfully. Sep 9 00:04:29.550012 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 00:04:29.550687 systemd-logind[1496]: Session 10 logged out. Waiting for processes to exit. Sep 9 00:04:29.551528 systemd-logind[1496]: Removed session 10. Sep 9 00:04:29.623220 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:04:29.623461 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:04:29.623966 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:04:29.631980 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:04:29.632335 systemd[1]: cri-containerd-cf52ab59f72b827776749d3c9965459a97fc2a9a398104224b2494f59ccac07e.scope: Deactivated successfully. Sep 9 00:04:29.647474 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:04:29.648291 containerd[1508]: time="2025-09-09T00:04:29.648242609Z" level=info msg="StartContainer for \"cf52ab59f72b827776749d3c9965459a97fc2a9a398104224b2494f59ccac07e\" returns successfully" Sep 9 00:04:29.919296 containerd[1508]: time="2025-09-09T00:04:29.919136638Z" level=info msg="shim disconnected" id=cf52ab59f72b827776749d3c9965459a97fc2a9a398104224b2494f59ccac07e namespace=k8s.io Sep 9 00:04:29.919296 containerd[1508]: time="2025-09-09T00:04:29.919199909Z" level=warning msg="cleaning up after shim disconnected" id=cf52ab59f72b827776749d3c9965459a97fc2a9a398104224b2494f59ccac07e namespace=k8s.io Sep 9 00:04:29.919296 containerd[1508]: time="2025-09-09T00:04:29.919210229Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:04:30.363857 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf52ab59f72b827776749d3c9965459a97fc2a9a398104224b2494f59ccac07e-rootfs.mount: Deactivated successfully. 
Sep 9 00:04:30.472927 kubelet[2712]: E0909 00:04:30.472902 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:30.474505 containerd[1508]: time="2025-09-09T00:04:30.474426157Z" level=info msg="CreateContainer within sandbox \"d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 00:04:30.991119 containerd[1508]: time="2025-09-09T00:04:30.991068108Z" level=info msg="CreateContainer within sandbox \"d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"75f59f7a0c8467c2d2c5bd2f458034cc91f252be51ba191fc531f5d14601668e\"" Sep 9 00:04:30.991550 containerd[1508]: time="2025-09-09T00:04:30.991504704Z" level=info msg="StartContainer for \"75f59f7a0c8467c2d2c5bd2f458034cc91f252be51ba191fc531f5d14601668e\"" Sep 9 00:04:31.019785 systemd[1]: Started cri-containerd-75f59f7a0c8467c2d2c5bd2f458034cc91f252be51ba191fc531f5d14601668e.scope - libcontainer container 75f59f7a0c8467c2d2c5bd2f458034cc91f252be51ba191fc531f5d14601668e. Sep 9 00:04:31.050716 systemd[1]: cri-containerd-75f59f7a0c8467c2d2c5bd2f458034cc91f252be51ba191fc531f5d14601668e.scope: Deactivated successfully. Sep 9 00:04:31.139144 containerd[1508]: time="2025-09-09T00:04:31.139095140Z" level=info msg="StartContainer for \"75f59f7a0c8467c2d2c5bd2f458034cc91f252be51ba191fc531f5d14601668e\" returns successfully" Sep 9 00:04:31.317061 containerd[1508]: time="2025-09-09T00:04:31.316911716Z" level=info msg="shim disconnected" id=75f59f7a0c8467c2d2c5bd2f458034cc91f252be51ba191fc531f5d14601668e namespace=k8s.io Sep 9 00:04:31.317061 containerd[1508]: time="2025-09-09T00:04:31.316977913Z" level=warning msg="cleaning up after shim disconnected" id=75f59f7a0c8467c2d2c5bd2f458034cc91f252be51ba191fc531f5d14601668e namespace=k8s.io Sep 9 00:04:31.317061 containerd[1508]: time="2025-09-09T00:04:31.316987752Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:04:31.364031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75f59f7a0c8467c2d2c5bd2f458034cc91f252be51ba191fc531f5d14601668e-rootfs.mount: Deactivated successfully. Sep 9 00:04:31.482287 kubelet[2712]: E0909 00:04:31.482257 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:31.484043 containerd[1508]: time="2025-09-09T00:04:31.483894912Z" level=info msg="CreateContainer within sandbox \"d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 00:04:31.571946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3691002072.mount: Deactivated successfully. 
Sep 9 00:04:31.605483 containerd[1508]: time="2025-09-09T00:04:31.605447046Z" level=info msg="CreateContainer within sandbox \"d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"260a554f04f7c83039c2968abc8008bb0fd878b74e8b961d03b12369c8ecc79f\"" Sep 9 00:04:31.605885 containerd[1508]: time="2025-09-09T00:04:31.605841690Z" level=info msg="StartContainer for \"260a554f04f7c83039c2968abc8008bb0fd878b74e8b961d03b12369c8ecc79f\"" Sep 9 00:04:31.634781 systemd[1]: Started cri-containerd-260a554f04f7c83039c2968abc8008bb0fd878b74e8b961d03b12369c8ecc79f.scope - libcontainer container 260a554f04f7c83039c2968abc8008bb0fd878b74e8b961d03b12369c8ecc79f. Sep 9 00:04:31.657636 systemd[1]: cri-containerd-260a554f04f7c83039c2968abc8008bb0fd878b74e8b961d03b12369c8ecc79f.scope: Deactivated successfully. Sep 9 00:04:31.688339 containerd[1508]: time="2025-09-09T00:04:31.688277138Z" level=info msg="StartContainer for \"260a554f04f7c83039c2968abc8008bb0fd878b74e8b961d03b12369c8ecc79f\" returns successfully" Sep 9 00:04:31.895944 containerd[1508]: time="2025-09-09T00:04:31.895788186Z" level=info msg="shim disconnected" id=260a554f04f7c83039c2968abc8008bb0fd878b74e8b961d03b12369c8ecc79f namespace=k8s.io Sep 9 00:04:31.895944 containerd[1508]: time="2025-09-09T00:04:31.895847511Z" level=warning msg="cleaning up after shim disconnected" id=260a554f04f7c83039c2968abc8008bb0fd878b74e8b961d03b12369c8ecc79f namespace=k8s.io Sep 9 00:04:31.895944 containerd[1508]: time="2025-09-09T00:04:31.895860425Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:04:32.363567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-260a554f04f7c83039c2968abc8008bb0fd878b74e8b961d03b12369c8ecc79f-rootfs.mount: Deactivated successfully. Sep 9 00:04:32.485484 kubelet[2712]: E0909 00:04:32.485447 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:32.487312 containerd[1508]: time="2025-09-09T00:04:32.487264054Z" level=info msg="CreateContainer within sandbox \"d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 00:04:32.801083 containerd[1508]: time="2025-09-09T00:04:32.801036127Z" level=info msg="CreateContainer within sandbox \"d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56\"" Sep 9 00:04:32.802093 containerd[1508]: time="2025-09-09T00:04:32.801368612Z" level=info msg="StartContainer for \"a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56\"" Sep 9 00:04:32.830909 systemd[1]: Started cri-containerd-a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56.scope - libcontainer container a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56. 
Sep 9 00:04:32.898283 containerd[1508]: time="2025-09-09T00:04:32.898227377Z" level=info msg="StartContainer for \"a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56\" returns successfully" Sep 9 00:04:33.000750 kubelet[2712]: I0909 00:04:33.000478 2712 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 9 00:04:33.030354 systemd[1]: Created slice kubepods-burstable-pod1740c2bf_9f72_4d86_91f2_c4689766092b.slice - libcontainer container kubepods-burstable-pod1740c2bf_9f72_4d86_91f2_c4689766092b.slice. Sep 9 00:04:33.043947 systemd[1]: Created slice kubepods-burstable-pod44cadf20_75f6_49e6_9ff8_c7d8fef6e9f9.slice - libcontainer container kubepods-burstable-pod44cadf20_75f6_49e6_9ff8_c7d8fef6e9f9.slice. Sep 9 00:04:33.104174 kubelet[2712]: I0909 00:04:33.103956 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn4zm\" (UniqueName: \"kubernetes.io/projected/44cadf20-75f6-49e6-9ff8-c7d8fef6e9f9-kube-api-access-vn4zm\") pod \"coredns-7c65d6cfc9-dhsm8\" (UID: \"44cadf20-75f6-49e6-9ff8-c7d8fef6e9f9\") " pod="kube-system/coredns-7c65d6cfc9-dhsm8" Sep 9 00:04:33.104174 kubelet[2712]: I0909 00:04:33.104004 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-727cr\" (UniqueName: \"kubernetes.io/projected/1740c2bf-9f72-4d86-91f2-c4689766092b-kube-api-access-727cr\") pod \"coredns-7c65d6cfc9-68p2d\" (UID: \"1740c2bf-9f72-4d86-91f2-c4689766092b\") " pod="kube-system/coredns-7c65d6cfc9-68p2d" Sep 9 00:04:33.104174 kubelet[2712]: I0909 00:04:33.104044 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1740c2bf-9f72-4d86-91f2-c4689766092b-config-volume\") pod \"coredns-7c65d6cfc9-68p2d\" (UID: \"1740c2bf-9f72-4d86-91f2-c4689766092b\") " pod="kube-system/coredns-7c65d6cfc9-68p2d" Sep 9 00:04:33.104174 kubelet[2712]: I0909 00:04:33.104065 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44cadf20-75f6-49e6-9ff8-c7d8fef6e9f9-config-volume\") pod \"coredns-7c65d6cfc9-dhsm8\" (UID: \"44cadf20-75f6-49e6-9ff8-c7d8fef6e9f9\") " pod="kube-system/coredns-7c65d6cfc9-dhsm8" Sep 9 00:04:33.333954 kubelet[2712]: E0909 00:04:33.333914 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:33.334593 containerd[1508]: time="2025-09-09T00:04:33.334537734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-68p2d,Uid:1740c2bf-9f72-4d86-91f2-c4689766092b,Namespace:kube-system,Attempt:0,}" Sep 9 00:04:33.346193 kubelet[2712]: E0909 00:04:33.346154 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:33.346606 containerd[1508]: time="2025-09-09T00:04:33.346559899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dhsm8,Uid:44cadf20-75f6-49e6-9ff8-c7d8fef6e9f9,Namespace:kube-system,Attempt:0,}" Sep 9 00:04:33.366133 systemd[1]: run-containerd-runc-k8s.io-a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56-runc.9CRQny.mount: Deactivated successfully. 
Sep 9 00:04:33.490464 kubelet[2712]: E0909 00:04:33.490428 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:33.567350 kubelet[2712]: I0909 00:04:33.567296 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4nwfd" podStartSLOduration=10.741660608 podStartE2EDuration="27.56727744s" podCreationTimestamp="2025-09-09 00:04:06 +0000 UTC" firstStartedPulling="2025-09-09 00:04:07.742821527 +0000 UTC m=+5.713201276" lastFinishedPulling="2025-09-09 00:04:24.568438359 +0000 UTC m=+22.538818108" observedRunningTime="2025-09-09 00:04:33.566514423 +0000 UTC m=+31.536894172" watchObservedRunningTime="2025-09-09 00:04:33.56727744 +0000 UTC m=+31.537657189" Sep 9 00:04:34.492033 kubelet[2712]: E0909 00:04:34.491989 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:34.557682 systemd[1]: Started sshd@10-10.0.0.143:22-10.0.0.1:59574.service - OpenSSH per-connection server daemon (10.0.0.1:59574). Sep 9 00:04:34.605618 sshd[3512]: Accepted publickey for core from 10.0.0.1 port 59574 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:04:34.607132 sshd-session[3512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:04:34.611439 systemd-logind[1496]: New session 11 of user core. Sep 9 00:04:34.625785 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 00:04:34.738863 sshd[3514]: Connection closed by 10.0.0.1 port 59574 Sep 9 00:04:34.739292 sshd-session[3512]: pam_unix(sshd:session): session closed for user core Sep 9 00:04:34.743227 systemd[1]: sshd@10-10.0.0.143:22-10.0.0.1:59574.service: Deactivated successfully. Sep 9 00:04:34.745789 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 00:04:34.746460 systemd-logind[1496]: Session 11 logged out. Waiting for processes to exit. Sep 9 00:04:34.747275 systemd-logind[1496]: Removed session 11. 
Sep 9 00:04:35.493838 kubelet[2712]: E0909 00:04:35.493805 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:38.939789 containerd[1508]: time="2025-09-09T00:04:38.939709203Z" level=error msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" failed" error="failed to pull and unpack image \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/operator-generic/manifests/sha256:abf63fbd0ffcce60094c94bcc74957a4ff308727b66a2019c3b208fea275acda: 504 Gateway Time-out" Sep 9 00:04:38.940457 containerd[1508]: time="2025-09-09T00:04:38.939742326Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=875" Sep 9 00:04:38.940515 kubelet[2712]: E0909 00:04:38.939997 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/operator-generic/manifests/sha256:abf63fbd0ffcce60094c94bcc74957a4ff308727b66a2019c3b208fea275acda: 504 Gateway Time-out" image="quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e" Sep 9 00:04:38.940515 kubelet[2712]: E0909 00:04:38.940057 2712 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/operator-generic/manifests/sha256:abf63fbd0ffcce60094c94bcc74957a4ff308727b66a2019c3b208fea275acda: 504 Gateway Time-out" image="quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e" Sep 9 00:04:38.941240 kubelet[2712]: E0909 00:04:38.941160 2712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cilium-operator,Image:quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Command:[cilium-operator-generic],Args:[--config-dir=/tmp/cilium/config-map 
--debug=$(CILIUM_DEBUG)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:K8S_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CILIUM_K8S_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CILIUM_DEBUG,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:cilium-config,},Key:debug,Optional:*true,},SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cilium-config-path,ReadOnly:true,MountPath:/tmp/cilium/config-map,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k92cz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 9234 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:3,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-operator-5d85765b45-gk7zf_kube-system(ff4f784b-22ec-4a63-97ec-f1e96a529319): ErrImagePull: failed to pull and unpack image \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/operator-generic/manifests/sha256:abf63fbd0ffcce60094c94bcc74957a4ff308727b66a2019c3b208fea275acda: 504 Gateway Time-out" logger="UnhandledError" Sep 9 00:04:38.942387 kubelet[2712]: E0909 00:04:38.942340 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cilium-operator\" with ErrImagePull: \"failed to pull and unpack image \\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/operator-generic/manifests/sha256:abf63fbd0ffcce60094c94bcc74957a4ff308727b66a2019c3b208fea275acda: 504 Gateway Time-out\"" pod="kube-system/cilium-operator-5d85765b45-gk7zf" podUID="ff4f784b-22ec-4a63-97ec-f1e96a529319" Sep 9 00:04:39.500186 kubelet[2712]: E0909 00:04:39.499978 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:39.500932 kubelet[2712]: E0909 00:04:39.500903 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cilium-operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"\"" pod="kube-system/cilium-operator-5d85765b45-gk7zf" podUID="ff4f784b-22ec-4a63-97ec-f1e96a529319" Sep 9 00:04:39.755921 systemd[1]: Started sshd@11-10.0.0.143:22-10.0.0.1:59576.service - OpenSSH per-connection server daemon (10.0.0.1:59576). Sep 9 00:04:39.798981 sshd[3531]: Accepted publickey for core from 10.0.0.1 port 59576 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:04:39.800412 sshd-session[3531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:04:39.804589 systemd-logind[1496]: New session 12 of user core. Sep 9 00:04:39.815790 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 00:04:40.421346 sshd[3533]: Connection closed by 10.0.0.1 port 59576 Sep 9 00:04:40.421799 sshd-session[3531]: pam_unix(sshd:session): session closed for user core Sep 9 00:04:40.427003 systemd[1]: sshd@11-10.0.0.143:22-10.0.0.1:59576.service: Deactivated successfully. Sep 9 00:04:40.429820 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 00:04:40.430682 systemd-logind[1496]: Session 12 logged out. Waiting for processes to exit. Sep 9 00:04:40.431662 systemd-logind[1496]: Removed session 12. Sep 9 00:04:45.434654 systemd[1]: Started sshd@12-10.0.0.143:22-10.0.0.1:55172.service - OpenSSH per-connection server daemon (10.0.0.1:55172). Sep 9 00:04:45.481489 sshd[3547]: Accepted publickey for core from 10.0.0.1 port 55172 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:04:45.483023 sshd-session[3547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:04:45.487131 systemd-logind[1496]: New session 13 of user core. Sep 9 00:04:45.494782 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 00:04:45.602103 sshd[3549]: Connection closed by 10.0.0.1 port 55172 Sep 9 00:04:45.602491 sshd-session[3547]: pam_unix(sshd:session): session closed for user core Sep 9 00:04:45.606516 systemd[1]: sshd@12-10.0.0.143:22-10.0.0.1:55172.service: Deactivated successfully. Sep 9 00:04:45.608703 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 00:04:45.609426 systemd-logind[1496]: Session 13 logged out. Waiting for processes to exit. Sep 9 00:04:45.610608 systemd-logind[1496]: Removed session 13. Sep 9 00:04:50.615572 systemd[1]: Started sshd@13-10.0.0.143:22-10.0.0.1:41222.service - OpenSSH per-connection server daemon (10.0.0.1:41222). Sep 9 00:04:50.657625 sshd[3564]: Accepted publickey for core from 10.0.0.1 port 41222 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:04:50.659003 sshd-session[3564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:04:50.663291 systemd-logind[1496]: New session 14 of user core. Sep 9 00:04:50.670791 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 00:04:50.776916 sshd[3566]: Connection closed by 10.0.0.1 port 41222 Sep 9 00:04:50.777269 sshd-session[3564]: pam_unix(sshd:session): session closed for user core Sep 9 00:04:50.781127 systemd[1]: sshd@13-10.0.0.143:22-10.0.0.1:41222.service: Deactivated successfully. Sep 9 00:04:50.783756 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 00:04:50.784471 systemd-logind[1496]: Session 14 logged out. Waiting for processes to exit. Sep 9 00:04:50.785881 systemd-logind[1496]: Removed session 14. 
Sep 9 00:04:54.113447 kubelet[2712]: E0909 00:04:54.113377 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:54.115700 containerd[1508]: time="2025-09-09T00:04:54.115199876Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 00:04:54.954845 kubelet[2712]: E0909 00:04:54.954784 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:04:55.348316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount936712963.mount: Deactivated successfully. Sep 9 00:04:55.796878 systemd[1]: Started sshd@14-10.0.0.143:22-10.0.0.1:41236.service - OpenSSH per-connection server daemon (10.0.0.1:41236). Sep 9 00:04:55.891412 sshd[3594]: Accepted publickey for core from 10.0.0.1 port 41236 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:04:55.892938 sshd-session[3594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:04:55.897274 systemd-logind[1496]: New session 15 of user core. Sep 9 00:04:55.908770 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 00:04:56.019407 sshd[3596]: Connection closed by 10.0.0.1 port 41236 Sep 9 00:04:56.019822 sshd-session[3594]: pam_unix(sshd:session): session closed for user core Sep 9 00:04:56.035882 systemd[1]: sshd@14-10.0.0.143:22-10.0.0.1:41236.service: Deactivated successfully. Sep 9 00:04:56.038526 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 00:04:56.040315 systemd-logind[1496]: Session 15 logged out. Waiting for processes to exit. Sep 9 00:04:56.050023 systemd[1]: Started sshd@15-10.0.0.143:22-10.0.0.1:41242.service - OpenSSH per-connection server daemon (10.0.0.1:41242). Sep 9 00:04:56.051437 systemd-logind[1496]: Removed session 15. Sep 9 00:04:56.088040 sshd[3609]: Accepted publickey for core from 10.0.0.1 port 41242 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:04:56.089492 sshd-session[3609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:04:56.093936 systemd-logind[1496]: New session 16 of user core. Sep 9 00:04:56.104781 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 00:04:56.839753 sshd[3612]: Connection closed by 10.0.0.1 port 41242 Sep 9 00:04:56.840200 sshd-session[3609]: pam_unix(sshd:session): session closed for user core Sep 9 00:04:56.850817 systemd[1]: sshd@15-10.0.0.143:22-10.0.0.1:41242.service: Deactivated successfully. Sep 9 00:04:56.852879 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 00:04:56.854374 systemd-logind[1496]: Session 16 logged out. Waiting for processes to exit. Sep 9 00:04:56.861993 systemd[1]: Started sshd@16-10.0.0.143:22-10.0.0.1:41246.service - OpenSSH per-connection server daemon (10.0.0.1:41246). Sep 9 00:04:56.863067 systemd-logind[1496]: Removed session 16. Sep 9 00:04:56.903597 sshd[3622]: Accepted publickey for core from 10.0.0.1 port 41246 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:04:56.905143 sshd-session[3622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:04:56.909574 systemd-logind[1496]: New session 17 of user core. 
Sep 9 00:04:56.917824 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 00:04:57.129804 sshd[3625]: Connection closed by 10.0.0.1 port 41246 Sep 9 00:04:57.130136 sshd-session[3622]: pam_unix(sshd:session): session closed for user core Sep 9 00:04:57.134081 systemd[1]: sshd@16-10.0.0.143:22-10.0.0.1:41246.service: Deactivated successfully. Sep 9 00:04:57.136110 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 00:04:57.136810 systemd-logind[1496]: Session 17 logged out. Waiting for processes to exit. Sep 9 00:04:57.137749 systemd-logind[1496]: Removed session 17. Sep 9 00:05:02.142882 systemd[1]: Started sshd@17-10.0.0.143:22-10.0.0.1:60998.service - OpenSSH per-connection server daemon (10.0.0.1:60998). Sep 9 00:05:02.186195 sshd[3640]: Accepted publickey for core from 10.0.0.1 port 60998 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:05:02.187816 sshd-session[3640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:05:02.191917 systemd-logind[1496]: New session 18 of user core. Sep 9 00:05:02.201787 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 00:05:02.311791 sshd[3642]: Connection closed by 10.0.0.1 port 60998 Sep 9 00:05:02.312160 sshd-session[3640]: pam_unix(sshd:session): session closed for user core Sep 9 00:05:02.316485 systemd[1]: sshd@17-10.0.0.143:22-10.0.0.1:60998.service: Deactivated successfully. Sep 9 00:05:02.318568 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 00:05:02.319367 systemd-logind[1496]: Session 18 logged out. Waiting for processes to exit. Sep 9 00:05:02.320208 systemd-logind[1496]: Removed session 18. Sep 9 00:05:05.482974 containerd[1508]: time="2025-09-09T00:05:05.482928094Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18845594" Sep 9 00:05:05.482974 containerd[1508]: time="2025-09-09T00:05:05.482961125Z" level=error msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" failed" error="failed to pull and unpack image \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/operator-generic/blobs/sha256:95566828b7f1020652d3da63aeff6d5df73d7788bd567e0e8b8ce9fa9c5099e9: 504 Gateway Time-out" Sep 9 00:05:05.483482 kubelet[2712]: E0909 00:05:05.483185 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/operator-generic/blobs/sha256:95566828b7f1020652d3da63aeff6d5df73d7788bd567e0e8b8ce9fa9c5099e9: 504 Gateway Time-out" image="quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e" Sep 9 00:05:05.483482 kubelet[2712]: E0909 00:05:05.483244 2712 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\": failed to copy: httpReadSeeker: failed open: unexpected status code 
https://quay.io/v2/cilium/operator-generic/blobs/sha256:95566828b7f1020652d3da63aeff6d5df73d7788bd567e0e8b8ce9fa9c5099e9: 504 Gateway Time-out" image="quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e" Sep 9 00:05:05.483482 kubelet[2712]: E0909 00:05:05.483346 2712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cilium-operator,Image:quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Command:[cilium-operator-generic],Args:[--config-dir=/tmp/cilium/config-map --debug=$(CILIUM_DEBUG)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:K8S_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CILIUM_K8S_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CILIUM_DEBUG,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:cilium-config,},Key:debug,Optional:*true,},SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cilium-config-path,ReadOnly:true,MountPath:/tmp/cilium/config-map,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k92cz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 9234 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:3,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-operator-5d85765b45-gk7zf_kube-system(ff4f784b-22ec-4a63-97ec-f1e96a529319): ErrImagePull: failed to pull and unpack image \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/operator-generic/blobs/sha256:95566828b7f1020652d3da63aeff6d5df73d7788bd567e0e8b8ce9fa9c5099e9: 504 Gateway Time-out" logger="UnhandledError" Sep 9 00:05:05.485231 kubelet[2712]: E0909 00:05:05.485191 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cilium-operator\" with ErrImagePull: \"failed to pull and unpack image \\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/operator-generic/blobs/sha256:95566828b7f1020652d3da63aeff6d5df73d7788bd567e0e8b8ce9fa9c5099e9: 504 Gateway Time-out\"" pod="kube-system/cilium-operator-5d85765b45-gk7zf" 
podUID="ff4f784b-22ec-4a63-97ec-f1e96a529319" Sep 9 00:05:07.328095 systemd[1]: Started sshd@18-10.0.0.143:22-10.0.0.1:32780.service - OpenSSH per-connection server daemon (10.0.0.1:32780). Sep 9 00:05:07.371752 sshd[3681]: Accepted publickey for core from 10.0.0.1 port 32780 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:05:07.373313 sshd-session[3681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:05:07.377738 systemd-logind[1496]: New session 19 of user core. Sep 9 00:05:07.389807 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 00:05:07.495793 sshd[3683]: Connection closed by 10.0.0.1 port 32780 Sep 9 00:05:07.496201 sshd-session[3681]: pam_unix(sshd:session): session closed for user core Sep 9 00:05:07.500058 systemd[1]: sshd@18-10.0.0.143:22-10.0.0.1:32780.service: Deactivated successfully. Sep 9 00:05:07.502273 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 00:05:07.502998 systemd-logind[1496]: Session 19 logged out. Waiting for processes to exit. Sep 9 00:05:07.503992 systemd-logind[1496]: Removed session 19. Sep 9 00:05:12.508725 systemd[1]: Started sshd@19-10.0.0.143:22-10.0.0.1:43626.service - OpenSSH per-connection server daemon (10.0.0.1:43626). Sep 9 00:05:12.551193 sshd[3698]: Accepted publickey for core from 10.0.0.1 port 43626 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:05:12.552535 sshd-session[3698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:05:12.556926 systemd-logind[1496]: New session 20 of user core. Sep 9 00:05:12.572773 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 00:05:12.709907 sshd[3700]: Connection closed by 10.0.0.1 port 43626 Sep 9 00:05:12.710293 sshd-session[3698]: pam_unix(sshd:session): session closed for user core Sep 9 00:05:12.714308 systemd[1]: sshd@19-10.0.0.143:22-10.0.0.1:43626.service: Deactivated successfully. Sep 9 00:05:12.716545 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 00:05:12.717305 systemd-logind[1496]: Session 20 logged out. Waiting for processes to exit. Sep 9 00:05:12.718112 systemd-logind[1496]: Removed session 20. Sep 9 00:05:17.722780 systemd[1]: Started sshd@20-10.0.0.143:22-10.0.0.1:43640.service - OpenSSH per-connection server daemon (10.0.0.1:43640). Sep 9 00:05:17.765475 sshd[3713]: Accepted publickey for core from 10.0.0.1 port 43640 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:05:17.766849 sshd-session[3713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:05:17.771017 systemd-logind[1496]: New session 21 of user core. Sep 9 00:05:17.782776 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 9 00:05:17.887356 sshd[3715]: Connection closed by 10.0.0.1 port 43640 Sep 9 00:05:17.887790 sshd-session[3713]: pam_unix(sshd:session): session closed for user core Sep 9 00:05:17.891382 systemd[1]: sshd@20-10.0.0.143:22-10.0.0.1:43640.service: Deactivated successfully. Sep 9 00:05:17.893389 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 00:05:17.894077 systemd-logind[1496]: Session 21 logged out. Waiting for processes to exit. Sep 9 00:05:17.895078 systemd-logind[1496]: Removed session 21. 
Sep 9 00:05:18.112384 kubelet[2712]: E0909 00:05:18.112259 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:05:18.113527 kubelet[2712]: E0909 00:05:18.113191 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cilium-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"\"" pod="kube-system/cilium-operator-5d85765b45-gk7zf" podUID="ff4f784b-22ec-4a63-97ec-f1e96a529319" Sep 9 00:05:21.112989 kubelet[2712]: E0909 00:05:21.112940 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:05:22.904118 systemd[1]: Started sshd@21-10.0.0.143:22-10.0.0.1:56324.service - OpenSSH per-connection server daemon (10.0.0.1:56324). Sep 9 00:05:22.946461 sshd[3729]: Accepted publickey for core from 10.0.0.1 port 56324 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:05:22.948171 sshd-session[3729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:05:22.952137 systemd-logind[1496]: New session 22 of user core. Sep 9 00:05:22.960780 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 9 00:05:23.068092 sshd[3731]: Connection closed by 10.0.0.1 port 56324 Sep 9 00:05:23.068446 sshd-session[3729]: pam_unix(sshd:session): session closed for user core Sep 9 00:05:23.072119 systemd[1]: sshd@21-10.0.0.143:22-10.0.0.1:56324.service: Deactivated successfully. Sep 9 00:05:23.074125 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 00:05:23.074787 systemd-logind[1496]: Session 22 logged out. Waiting for processes to exit. Sep 9 00:05:23.075699 systemd-logind[1496]: Removed session 22. Sep 9 00:05:24.113440 kubelet[2712]: E0909 00:05:24.113387 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:05:26.112922 kubelet[2712]: E0909 00:05:26.112868 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:05:28.081921 systemd[1]: Started sshd@22-10.0.0.143:22-10.0.0.1:56332.service - OpenSSH per-connection server daemon (10.0.0.1:56332). Sep 9 00:05:28.123933 sshd[3744]: Accepted publickey for core from 10.0.0.1 port 56332 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:05:28.125430 sshd-session[3744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:05:28.129581 systemd-logind[1496]: New session 23 of user core. Sep 9 00:05:28.143780 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 9 00:05:28.245523 sshd[3746]: Connection closed by 10.0.0.1 port 56332 Sep 9 00:05:28.245949 sshd-session[3744]: pam_unix(sshd:session): session closed for user core Sep 9 00:05:28.249813 systemd[1]: sshd@22-10.0.0.143:22-10.0.0.1:56332.service: Deactivated successfully. Sep 9 00:05:28.251918 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 00:05:28.252692 systemd-logind[1496]: Session 23 logged out. Waiting for processes to exit. 
Sep 9 00:05:28.253607 systemd-logind[1496]: Removed session 23. Sep 9 00:05:30.112810 kubelet[2712]: E0909 00:05:30.112772 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:05:31.113271 kubelet[2712]: E0909 00:05:31.113207 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:05:31.114150 containerd[1508]: time="2025-09-09T00:05:31.114105669Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 00:05:33.043763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1429127216.mount: Deactivated successfully. Sep 9 00:05:33.264825 systemd[1]: Started sshd@23-10.0.0.143:22-10.0.0.1:33946.service - OpenSSH per-connection server daemon (10.0.0.1:33946). Sep 9 00:05:33.308138 sshd[3767]: Accepted publickey for core from 10.0.0.1 port 33946 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:05:33.309778 sshd-session[3767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:05:33.313903 systemd-logind[1496]: New session 24 of user core. Sep 9 00:05:33.324772 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 9 00:05:33.460763 sshd[3769]: Connection closed by 10.0.0.1 port 33946 Sep 9 00:05:33.461107 sshd-session[3767]: pam_unix(sshd:session): session closed for user core Sep 9 00:05:33.465583 systemd[1]: sshd@23-10.0.0.143:22-10.0.0.1:33946.service: Deactivated successfully. Sep 9 00:05:33.467865 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 00:05:33.468556 systemd-logind[1496]: Session 24 logged out. Waiting for processes to exit. Sep 9 00:05:33.469414 systemd-logind[1496]: Removed session 24. 
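The recurring dns.go:153 warning above is kubelet noticing that the node's resolv.conf lists more nameservers than the glibc resolver honors (MAXNS is 3): everything past the first three entries is silently unused, so kubelet truncates the applied list to 1.1.1.1, 1.0.0.1, 8.8.8.8 and logs the omission. A sketch of the same check as a standalone tool; the path and the limit of 3 are the conventional Linux defaults, and this is not kubelet's actual code:

    // resolvcheck.go: standalone version of the check behind kubelet's
    // "Nameserver limits exceeded" warning. The glibc resolver only honors
    // the first three "nameserver" lines (MAXNS = 3).
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // glibc MAXNS

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if len(servers) > maxNameservers {
            fmt.Printf("limit exceeded: applied %v, omitted %v\n",
                servers[:maxNameservers], servers[maxNameservers:])
            return
        }
        fmt.Println("nameservers within limit:", servers)
    }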
Sep 9 00:05:33.700919 containerd[1508]: time="2025-09-09T00:05:33.700784649Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:05:33.701704 containerd[1508]: time="2025-09-09T00:05:33.701630365Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 9 00:05:33.702726 containerd[1508]: time="2025-09-09T00:05:33.702686157Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:05:33.703941 containerd[1508]: time="2025-09-09T00:05:33.703909726Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.589749052s" Sep 9 00:05:33.703979 containerd[1508]: time="2025-09-09T00:05:33.703939381Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 9 00:05:33.705821 containerd[1508]: time="2025-09-09T00:05:33.705778632Z" level=info msg="CreateContainer within sandbox \"a868bf221ee8a43195461d3d30985e586db00bf1a66edb57930fd21837120a4a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 9 00:05:33.719349 containerd[1508]: time="2025-09-09T00:05:33.719310594Z" level=info msg="CreateContainer within sandbox \"a868bf221ee8a43195461d3d30985e586db00bf1a66edb57930fd21837120a4a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c86df02fea100818d81ca67ac6048638707fe5ad9ed7cf77c9cfab0c68b9b4e5\"" Sep 9 00:05:33.719959 containerd[1508]: time="2025-09-09T00:05:33.719934150Z" level=info msg="StartContainer for \"c86df02fea100818d81ca67ac6048638707fe5ad9ed7cf77c9cfab0c68b9b4e5\"" Sep 9 00:05:33.768830 systemd[1]: Started cri-containerd-c86df02fea100818d81ca67ac6048638707fe5ad9ed7cf77c9cfab0c68b9b4e5.scope - libcontainer container c86df02fea100818d81ca67ac6048638707fe5ad9ed7cf77c9cfab0c68b9b4e5. Sep 9 00:05:33.851780 containerd[1508]: time="2025-09-09T00:05:33.851713928Z" level=error msg="Failed to destroy network for sandbox \"a5f164b28207197e3fd75106ec8a2c5f9ab3c1dc00cf4b974f7178c83645b48c\"" error="plugin type=\"cilium-cni\" name=\"cilium\" failed (delete): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" 
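The pull that finally succeeds above reports 18,904,197 bytes read in 2.589749052s, i.e. roughly 7.3 MB/s once the quay.io gateway recovered, consistent with the PullImage request issued at 00:05:31.114. Sanity-checking the arithmetic, with both figures copied from the containerd records:

    // pullrate.go: back-of-envelope throughput check of the successful pull.
    package main

    import "fmt"

    func main() {
        const bytesRead = 18904197  // from "bytes read=18904197"
        const seconds = 2.589749052 // from "in 2.589749052s"
        fmt.Printf("approx %.1f MB/s\n", bytesRead/seconds/1e6)
    }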
Sep 9 00:05:33.855771 containerd[1508]: time="2025-09-09T00:05:33.855719907Z" level=error msg="Failed to destroy network for sandbox \"ec97ad1102f80e3de4c8ff8d79301e40542a22c9ec492f6afc8745f922585a22\"" error="plugin type=\"cilium-cni\" name=\"cilium\" failed (delete): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" Sep 9 00:05:33.857772 containerd[1508]: time="2025-09-09T00:05:33.857494967Z" level=error msg="encountered an error cleaning up failed sandbox \"ec97ad1102f80e3de4c8ff8d79301e40542a22c9ec492f6afc8745f922585a22\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"cilium-cni\" name=\"cilium\" failed (delete): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" Sep 9 00:05:33.857772 containerd[1508]: time="2025-09-09T00:05:33.857602569Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dhsm8,Uid:44cadf20-75f6-49e6-9ff8-c7d8fef6e9f9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ec97ad1102f80e3de4c8ff8d79301e40542a22c9ec492f6afc8745f922585a22\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" Sep 9 00:05:33.858315 kubelet[2712]: E0909 00:05:33.857899 2712 log.go:32] "RunPodSandbox from runtime service failed" err=< Sep 9 00:05:33.858315 kubelet[2712]: rpc error: code = Unknown desc = failed to setup network for sandbox "ec97ad1102f80e3de4c8ff8d79301e40542a22c9ec492f6afc8745f922585a22": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Sep 9 00:05:33.858315 kubelet[2712]: Is the agent running? Sep 9 00:05:33.858315 kubelet[2712]: > Sep 9 00:05:33.858315 kubelet[2712]: E0909 00:05:33.857973 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Sep 9 00:05:33.858315 kubelet[2712]: rpc error: code = Unknown desc = failed to setup network for sandbox "ec97ad1102f80e3de4c8ff8d79301e40542a22c9ec492f6afc8745f922585a22": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Sep 9 00:05:33.858315 kubelet[2712]: Is the agent running? 
Sep 9 00:05:33.858315 kubelet[2712]: > pod="kube-system/coredns-7c65d6cfc9-dhsm8" Sep 9 00:05:33.858315 kubelet[2712]: E0909 00:05:33.857992 2712 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Sep 9 00:05:33.858315 kubelet[2712]: rpc error: code = Unknown desc = failed to setup network for sandbox "ec97ad1102f80e3de4c8ff8d79301e40542a22c9ec492f6afc8745f922585a22": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Sep 9 00:05:33.858315 kubelet[2712]: Is the agent running? Sep 9 00:05:33.858315 kubelet[2712]: > pod="kube-system/coredns-7c65d6cfc9-dhsm8" Sep 9 00:05:33.858315 kubelet[2712]: E0909 00:05:33.858030 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dhsm8_kube-system(44cadf20-75f6-49e6-9ff8-c7d8fef6e9f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dhsm8_kube-system(44cadf20-75f6-49e6-9ff8-c7d8fef6e9f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec97ad1102f80e3de4c8ff8d79301e40542a22c9ec492f6afc8745f922585a22\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \\\"http:///var/run/cilium/cilium.sock/v1/config\\\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\\nIs the agent running?\"" pod="kube-system/coredns-7c65d6cfc9-dhsm8" podUID="44cadf20-75f6-49e6-9ff8-c7d8fef6e9f9" Sep 9 00:05:33.859945 kubelet[2712]: E0909 00:05:33.859761 2712 log.go:32] "RunPodSandbox from runtime service failed" err=< Sep 9 00:05:33.859945 kubelet[2712]: rpc error: code = Unknown desc = failed to setup network for sandbox "a5f164b28207197e3fd75106ec8a2c5f9ab3c1dc00cf4b974f7178c83645b48c": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Sep 9 00:05:33.859945 kubelet[2712]: Is the agent running? Sep 9 00:05:33.859945 kubelet[2712]: > Sep 9 00:05:33.859945 kubelet[2712]: E0909 00:05:33.859803 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Sep 9 00:05:33.859945 kubelet[2712]: rpc error: code = Unknown desc = failed to setup network for sandbox "a5f164b28207197e3fd75106ec8a2c5f9ab3c1dc00cf4b974f7178c83645b48c": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Sep 9 00:05:33.859945 kubelet[2712]: Is the agent running? 
Sep 9 00:05:33.859945 kubelet[2712]: > pod="kube-system/coredns-7c65d6cfc9-68p2d" Sep 9 00:05:33.859945 kubelet[2712]: E0909 00:05:33.859817 2712 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Sep 9 00:05:33.859945 kubelet[2712]: rpc error: code = Unknown desc = failed to setup network for sandbox "a5f164b28207197e3fd75106ec8a2c5f9ab3c1dc00cf4b974f7178c83645b48c": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Sep 9 00:05:33.859945 kubelet[2712]: Is the agent running? Sep 9 00:05:33.859945 kubelet[2712]: > pod="kube-system/coredns-7c65d6cfc9-68p2d" Sep 9 00:05:33.859945 kubelet[2712]: E0909 00:05:33.859847 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-68p2d_kube-system(1740c2bf-9f72-4d86-91f2-c4689766092b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-68p2d_kube-system(1740c2bf-9f72-4d86-91f2-c4689766092b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5f164b28207197e3fd75106ec8a2c5f9ab3c1dc00cf4b974f7178c83645b48c\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \\\"http:///var/run/cilium/cilium.sock/v1/config\\\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\\nIs the agent running?\"" pod="kube-system/coredns-7c65d6cfc9-68p2d" podUID="1740c2bf-9f72-4d86-91f2-c4689766092b" Sep 9 00:05:33.860436 containerd[1508]: time="2025-09-09T00:05:33.858398711Z" level=error msg="encountered an error cleaning up failed sandbox \"a5f164b28207197e3fd75106ec8a2c5f9ab3c1dc00cf4b974f7178c83645b48c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"cilium-cni\" name=\"cilium\" failed (delete): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" Sep 9 00:05:33.860436 containerd[1508]: time="2025-09-09T00:05:33.858508478Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-68p2d,Uid:1740c2bf-9f72-4d86-91f2-c4689766092b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a5f164b28207197e3fd75106ec8a2c5f9ab3c1dc00cf4b974f7178c83645b48c\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" Sep 9 00:05:33.907169 containerd[1508]: time="2025-09-09T00:05:33.907045126Z" level=info msg="StartContainer for \"c86df02fea100818d81ca67ac6048638707fe5ad9ed7cf77c9cfab0c68b9b4e5\" returns successfully" Sep 9 00:05:34.028440 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec97ad1102f80e3de4c8ff8d79301e40542a22c9ec492f6afc8745f922585a22-shm.mount: Deactivated successfully. 
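Both coredns sandboxes above fail the same way: the cilium-cni plugin spends its 30-second budget trying to reach the agent's API over /var/run/cilium/cilium.sock and gives up because the socket does not exist yet, which is expected given that the cilium-operator image had been unpullable until moments earlier. The same liveness check can be reproduced directly; the socket path is taken from the log, while the probe itself is a hypothetical diagnostic, not cilium's client code:

    // sockprobe.go: reproduce the check the cilium-cni plugin performs by
    // dialing the agent's unix socket directly.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("unix", "/var/run/cilium/cilium.sock", 2*time.Second)
        if err != nil {
            // Same failure mode as the log: "connect: no such file or
            // directory" until the agent has created the socket.
            fmt.Println("agent not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("cilium agent socket is up")
    }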
Sep 9 00:05:34.028604 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a5f164b28207197e3fd75106ec8a2c5f9ab3c1dc00cf4b974f7178c83645b48c-shm.mount: Deactivated successfully. Sep 9 00:05:34.599706 kubelet[2712]: I0909 00:05:34.598263 2712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5f164b28207197e3fd75106ec8a2c5f9ab3c1dc00cf4b974f7178c83645b48c" Sep 9 00:05:34.604465 kubelet[2712]: E0909 00:05:34.604409 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:05:34.605751 kubelet[2712]: I0909 00:05:34.605719 2712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec97ad1102f80e3de4c8ff8d79301e40542a22c9ec492f6afc8745f922585a22" Sep 9 00:05:34.606698 containerd[1508]: time="2025-09-09T00:05:34.606166884Z" level=info msg="StopPodSandbox for \"ec97ad1102f80e3de4c8ff8d79301e40542a22c9ec492f6afc8745f922585a22\"" Sep 9 00:05:34.607139 containerd[1508]: time="2025-09-09T00:05:34.607070188Z" level=info msg="StopPodSandbox for \"a5f164b28207197e3fd75106ec8a2c5f9ab3c1dc00cf4b974f7178c83645b48c\"" Sep 9 00:05:34.623880 containerd[1508]: time="2025-09-09T00:05:34.623732103Z" level=info msg="Ensure that sandbox a5f164b28207197e3fd75106ec8a2c5f9ab3c1dc00cf4b974f7178c83645b48c in task-service has been cleanup successfully" Sep 9 00:05:34.624175 containerd[1508]: time="2025-09-09T00:05:34.624135554Z" level=info msg="TearDown network for sandbox \"a5f164b28207197e3fd75106ec8a2c5f9ab3c1dc00cf4b974f7178c83645b48c\" successfully" Sep 9 00:05:34.624175 containerd[1508]: time="2025-09-09T00:05:34.624160051Z" level=info msg="StopPodSandbox for \"a5f164b28207197e3fd75106ec8a2c5f9ab3c1dc00cf4b974f7178c83645b48c\" returns successfully" Sep 9 00:05:34.626666 kubelet[2712]: E0909 00:05:34.624663 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:05:34.626752 containerd[1508]: time="2025-09-09T00:05:34.625161941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-68p2d,Uid:1740c2bf-9f72-4d86-91f2-c4689766092b,Namespace:kube-system,Attempt:1,}" Sep 9 00:05:34.626752 containerd[1508]: time="2025-09-09T00:05:34.626084381Z" level=info msg="Ensure that sandbox ec97ad1102f80e3de4c8ff8d79301e40542a22c9ec492f6afc8745f922585a22 in task-service has been cleanup successfully" Sep 9 00:05:34.626752 containerd[1508]: time="2025-09-09T00:05:34.626496288Z" level=info msg="TearDown network for sandbox \"ec97ad1102f80e3de4c8ff8d79301e40542a22c9ec492f6afc8745f922585a22\" successfully" Sep 9 00:05:34.626752 containerd[1508]: time="2025-09-09T00:05:34.626516666Z" level=info msg="StopPodSandbox for \"ec97ad1102f80e3de4c8ff8d79301e40542a22c9ec492f6afc8745f922585a22\" returns successfully" Sep 9 00:05:34.627036 kubelet[2712]: E0909 00:05:34.626995 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:05:34.627334 systemd[1]: run-netns-cni\x2d7fd87307\x2ded76\x2d4b5a\x2d4489\x2d87df4dd88d12.mount: Deactivated successfully. 
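The "\x2d" runs in the mount-unit names above (run-netns-cni\x2d..., and the tmpmounts units earlier) are systemd's unit-name escaping, which writes a literal '-' inside a path segment as \x2d. A minimal decoder for that one escape, enough to recover the netns name; a full unescaper would also map the remaining '-' separators back to '/', which is omitted here:

    // unescape.go: decode systemd's \x2d escape in a unit name taken from
    // the log above. Handles only the \x2d case, not full unit unescaping.
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        unit := `run-netns-cni\x2d7fd87307\x2ded76\x2d4b5a\x2d4489\x2d87df4dd88d12.mount`
        fmt.Println(strings.ReplaceAll(unit, `\x2d`, "-"))
        // prints: run-netns-cni-7fd87307-ed76-4b5a-4489-87df4dd88d12.mount
    }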
Sep 9 00:05:34.627714 containerd[1508]: time="2025-09-09T00:05:34.627491506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dhsm8,Uid:44cadf20-75f6-49e6-9ff8-c7d8fef6e9f9,Namespace:kube-system,Attempt:1,}" Sep 9 00:05:34.631950 systemd[1]: run-netns-cni\x2d89149462\x2dfc34\x2d5306\x2d182a\x2db93389977f08.mount: Deactivated successfully. Sep 9 00:05:37.923285 systemd-networkd[1436]: cilium_host: Link UP Sep 9 00:05:37.924140 systemd-networkd[1436]: cilium_net: Link UP Sep 9 00:05:37.924686 systemd-networkd[1436]: cilium_net: Gained carrier Sep 9 00:05:37.925191 systemd-networkd[1436]: cilium_host: Gained carrier Sep 9 00:05:38.030472 systemd-networkd[1436]: cilium_vxlan: Link UP Sep 9 00:05:38.030485 systemd-networkd[1436]: cilium_vxlan: Gained carrier Sep 9 00:05:38.179812 systemd-networkd[1436]: cilium_host: Gained IPv6LL Sep 9 00:05:38.249706 kernel: NET: Registered PF_ALG protocol family Sep 9 00:05:38.478846 systemd[1]: Started sshd@24-10.0.0.143:22-10.0.0.1:33956.service - OpenSSH per-connection server daemon (10.0.0.1:33956). Sep 9 00:05:38.529772 sshd[3991]: Accepted publickey for core from 10.0.0.1 port 33956 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:05:38.531777 sshd-session[3991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:05:38.537662 systemd-logind[1496]: New session 25 of user core. Sep 9 00:05:38.543812 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 9 00:05:38.718297 sshd[4055]: Connection closed by 10.0.0.1 port 33956 Sep 9 00:05:38.718749 sshd-session[3991]: pam_unix(sshd:session): session closed for user core Sep 9 00:05:38.722893 systemd[1]: sshd@24-10.0.0.143:22-10.0.0.1:33956.service: Deactivated successfully. Sep 9 00:05:38.724946 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 00:05:38.725632 systemd-logind[1496]: Session 25 logged out. Waiting for processes to exit. Sep 9 00:05:38.726800 systemd-logind[1496]: Removed session 25. 
Sep 9 00:05:38.882849 systemd-networkd[1436]: cilium_net: Gained IPv6LL Sep 9 00:05:38.941925 systemd-networkd[1436]: lxc_health: Link UP Sep 9 00:05:38.943452 systemd-networkd[1436]: lxc_health: Gained carrier Sep 9 00:05:39.196303 kubelet[2712]: E0909 00:05:39.196040 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:05:39.248685 kernel: eth0: renamed from tmpc00e8 Sep 9 00:05:39.265446 systemd-networkd[1436]: lxcb89a642daead: Link UP Sep 9 00:05:39.265961 systemd-networkd[1436]: lxcb89a642daead: Gained carrier Sep 9 00:05:39.267254 systemd-networkd[1436]: lxcf538ad68e451: Link UP Sep 9 00:05:39.269672 kernel: eth0: renamed from tmp9a9d6 Sep 9 00:05:39.273805 systemd-networkd[1436]: lxcf538ad68e451: Gained carrier Sep 9 00:05:39.316039 kubelet[2712]: I0909 00:05:39.315949 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-gk7zf" podStartSLOduration=7.41931637 podStartE2EDuration="1m33.315929725s" podCreationTimestamp="2025-09-09 00:04:06 +0000 UTC" firstStartedPulling="2025-09-09 00:04:07.8080118 +0000 UTC m=+5.778391549" lastFinishedPulling="2025-09-09 00:05:33.704625155 +0000 UTC m=+91.675004904" observedRunningTime="2025-09-09 00:05:34.634975626 +0000 UTC m=+92.605355395" watchObservedRunningTime="2025-09-09 00:05:39.315929725 +0000 UTC m=+97.286309474" Sep 9 00:05:39.621171 kubelet[2712]: E0909 00:05:39.620953 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:05:40.037779 systemd-networkd[1436]: cilium_vxlan: Gained IPv6LL Sep 9 00:05:40.802909 systemd-networkd[1436]: lxc_health: Gained IPv6LL Sep 9 00:05:40.866891 systemd-networkd[1436]: lxcf538ad68e451: Gained IPv6LL Sep 9 00:05:40.994884 systemd-networkd[1436]: lxcb89a642daead: Gained IPv6LL Sep 9 00:05:43.738422 systemd[1]: Started sshd@25-10.0.0.143:22-10.0.0.1:46334.service - OpenSSH per-connection server daemon (10.0.0.1:46334). Sep 9 00:05:43.791524 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 46334 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:05:43.793811 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:05:43.799112 systemd-logind[1496]: New session 26 of user core. Sep 9 00:05:43.809800 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 9 00:05:43.875856 containerd[1508]: time="2025-09-09T00:05:43.875599488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:05:43.875856 containerd[1508]: time="2025-09-09T00:05:43.875685149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:05:43.875856 containerd[1508]: time="2025-09-09T00:05:43.875695990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:05:43.875856 containerd[1508]: time="2025-09-09T00:05:43.875781100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:05:43.897573 systemd[1]: run-containerd-runc-k8s.io-9a9d647c9f50d000f27ce1f75c360b85cac20f64d177200bd13829a883bae863-runc.mohbRT.mount: Deactivated successfully. Sep 9 00:05:43.908954 systemd[1]: Started cri-containerd-9a9d647c9f50d000f27ce1f75c360b85cac20f64d177200bd13829a883bae863.scope - libcontainer container 9a9d647c9f50d000f27ce1f75c360b85cac20f64d177200bd13829a883bae863. Sep 9 00:05:43.927617 containerd[1508]: time="2025-09-09T00:05:43.927154233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:05:43.928693 containerd[1508]: time="2025-09-09T00:05:43.927702918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:05:43.928834 containerd[1508]: time="2025-09-09T00:05:43.928779368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:05:43.928918 containerd[1508]: time="2025-09-09T00:05:43.928882592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:05:43.933891 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:05:43.967983 systemd[1]: Started cri-containerd-c00e8ab0ddb89a443bac7fe898fb6c482924911bb7b1fc7aa516631d78af5f79.scope - libcontainer container c00e8ab0ddb89a443bac7fe898fb6c482924911bb7b1fc7aa516631d78af5f79. Sep 9 00:05:43.977960 containerd[1508]: time="2025-09-09T00:05:43.977844267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dhsm8,Uid:44cadf20-75f6-49e6-9ff8-c7d8fef6e9f9,Namespace:kube-system,Attempt:1,} returns sandbox id \"9a9d647c9f50d000f27ce1f75c360b85cac20f64d177200bd13829a883bae863\"" Sep 9 00:05:43.978250 sshd[4267]: Connection closed by 10.0.0.1 port 46334 Sep 9 00:05:43.979984 kubelet[2712]: E0909 00:05:43.979105 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:05:43.979185 sshd-session[4265]: pam_unix(sshd:session): session closed for user core Sep 9 00:05:43.985609 systemd[1]: sshd@25-10.0.0.143:22-10.0.0.1:46334.service: Deactivated successfully. Sep 9 00:05:43.988950 systemd[1]: session-26.scope: Deactivated successfully. Sep 9 00:05:43.990123 containerd[1508]: time="2025-09-09T00:05:43.989835398Z" level=info msg="CreateContainer within sandbox \"9a9d647c9f50d000f27ce1f75c360b85cac20f64d177200bd13829a883bae863\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:05:43.990438 systemd-logind[1496]: Session 26 logged out. Waiting for processes to exit. Sep 9 00:05:43.994298 systemd-logind[1496]: Removed session 26. 
Sep 9 00:05:43.995339 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:05:44.023716 containerd[1508]: time="2025-09-09T00:05:44.023629992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-68p2d,Uid:1740c2bf-9f72-4d86-91f2-c4689766092b,Namespace:kube-system,Attempt:1,} returns sandbox id \"c00e8ab0ddb89a443bac7fe898fb6c482924911bb7b1fc7aa516631d78af5f79\"" Sep 9 00:05:44.024984 kubelet[2712]: E0909 00:05:44.024913 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:05:44.027125 containerd[1508]: time="2025-09-09T00:05:44.027073084Z" level=info msg="CreateContainer within sandbox \"c00e8ab0ddb89a443bac7fe898fb6c482924911bb7b1fc7aa516631d78af5f79\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:05:45.161412 containerd[1508]: time="2025-09-09T00:05:45.161336586Z" level=info msg="CreateContainer within sandbox \"9a9d647c9f50d000f27ce1f75c360b85cac20f64d177200bd13829a883bae863\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cf9486e956860d2fe8f550eafb6c28947685cb7a8981d13480f3c7abd3a04888\"" Sep 9 00:05:45.162140 containerd[1508]: time="2025-09-09T00:05:45.162091809Z" level=info msg="StartContainer for \"cf9486e956860d2fe8f550eafb6c28947685cb7a8981d13480f3c7abd3a04888\"" Sep 9 00:05:45.198852 systemd[1]: Started cri-containerd-cf9486e956860d2fe8f550eafb6c28947685cb7a8981d13480f3c7abd3a04888.scope - libcontainer container cf9486e956860d2fe8f550eafb6c28947685cb7a8981d13480f3c7abd3a04888. Sep 9 00:05:45.493399 containerd[1508]: time="2025-09-09T00:05:45.493332397Z" level=info msg="CreateContainer within sandbox \"c00e8ab0ddb89a443bac7fe898fb6c482924911bb7b1fc7aa516631d78af5f79\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6c662b8fe213735907ad6f9913b8e7f3c46f9b9cd92bf0d0778ce168fa8d242e\"" Sep 9 00:05:45.493591 containerd[1508]: time="2025-09-09T00:05:45.493515102Z" level=info msg="StartContainer for \"cf9486e956860d2fe8f550eafb6c28947685cb7a8981d13480f3c7abd3a04888\" returns successfully" Sep 9 00:05:45.494171 containerd[1508]: time="2025-09-09T00:05:45.494119712Z" level=info msg="StartContainer for \"6c662b8fe213735907ad6f9913b8e7f3c46f9b9cd92bf0d0778ce168fa8d242e\"" Sep 9 00:05:45.534058 systemd[1]: Started cri-containerd-6c662b8fe213735907ad6f9913b8e7f3c46f9b9cd92bf0d0778ce168fa8d242e.scope - libcontainer container 6c662b8fe213735907ad6f9913b8e7f3c46f9b9cd92bf0d0778ce168fa8d242e. 
Sep 9 00:05:45.717749 containerd[1508]: time="2025-09-09T00:05:45.717700640Z" level=info msg="StartContainer for \"6c662b8fe213735907ad6f9913b8e7f3c46f9b9cd92bf0d0778ce168fa8d242e\" returns successfully" Sep 9 00:05:45.720933 kubelet[2712]: E0909 00:05:45.720688 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:05:45.722666 kubelet[2712]: E0909 00:05:45.722477 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:05:45.897468 kubelet[2712]: I0909 00:05:45.897315 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dhsm8" podStartSLOduration=99.897300272 podStartE2EDuration="1m39.897300272s" podCreationTimestamp="2025-09-09 00:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:05:45.896358486 +0000 UTC m=+103.866738235" watchObservedRunningTime="2025-09-09 00:05:45.897300272 +0000 UTC m=+103.867680021" Sep 9 00:05:46.724732 kubelet[2712]: E0909 00:05:46.724696 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:05:46.725432 kubelet[2712]: E0909 00:05:46.724764 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:05:47.726234 kubelet[2712]: E0909 00:05:47.726116 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:05:48.068351 kubelet[2712]: I0909 00:05:48.067997 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-68p2d" podStartSLOduration=102.067974495 podStartE2EDuration="1m42.067974495s" podCreationTimestamp="2025-09-09 00:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:05:46.075514826 +0000 UTC m=+104.045894575" watchObservedRunningTime="2025-09-09 00:05:48.067974495 +0000 UTC m=+106.038354244" Sep 9 00:05:48.728021 kubelet[2712]: E0909 00:05:48.727993 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:05:48.994694 systemd[1]: Started sshd@26-10.0.0.143:22-10.0.0.1:46342.service - OpenSSH per-connection server daemon (10.0.0.1:46342). Sep 9 00:05:49.040903 sshd[4452]: Accepted publickey for core from 10.0.0.1 port 46342 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:05:49.042464 sshd-session[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:05:49.046960 systemd-logind[1496]: New session 27 of user core. Sep 9 00:05:49.057810 systemd[1]: Started session-27.scope - Session 27 of User core. 
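The pod_startup_latency_tracker records encode straightforward interval arithmetic: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the time spent pulling images, which is why the operator's SLO figure is only 7.4s of its 1m33.3s end-to-end time. Reproducing the numbers with the timestamps copied from the operator's record above:

    // startup.go: reproduce the pod_startup_latency_tracker arithmetic for
    // cilium-operator. e2e = observedRunningTime - podCreationTimestamp;
    // the SLO duration excludes the image-pull interval.
    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-09-09 00:04:06 +0000 UTC")
        firstPull := mustParse("2025-09-09 00:04:07.8080118 +0000 UTC")
        lastPull := mustParse("2025-09-09 00:05:33.704625155 +0000 UTC")
        running := mustParse("2025-09-09 00:05:39.315929725 +0000 UTC")

        e2e := running.Sub(created)
        slo := e2e - lastPull.Sub(firstPull)

        fmt.Println("podStartE2EDuration:", e2e) // 1m33.315929725s
        fmt.Println("podStartSLOduration:", slo) // 7.41931637s
    }

For the two coredns pods the pulling timestamps are the zero time (0001-01-01 ...) because their images were already on the node, so their SLO and E2E durations coincide (99.9s and, further below, 102.1s).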
Sep 9 00:05:49.174784 sshd[4454]: Connection closed by 10.0.0.1 port 46342 Sep 9 00:05:49.175131 sshd-session[4452]: pam_unix(sshd:session): session closed for user core Sep 9 00:05:49.179014 systemd[1]: sshd@26-10.0.0.143:22-10.0.0.1:46342.service: Deactivated successfully. Sep 9 00:05:49.181328 systemd[1]: session-27.scope: Deactivated successfully. Sep 9 00:05:49.182017 systemd-logind[1496]: Session 27 logged out. Waiting for processes to exit. Sep 9 00:05:49.182843 systemd-logind[1496]: Removed session 27. Sep 9 00:05:53.348147 kubelet[2712]: E0909 00:05:53.348103 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:05:54.187974 systemd[1]: Started sshd@27-10.0.0.143:22-10.0.0.1:33506.service - OpenSSH per-connection server daemon (10.0.0.1:33506). Sep 9 00:05:54.233989 sshd[4475]: Accepted publickey for core from 10.0.0.1 port 33506 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:05:54.235631 sshd-session[4475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:05:54.240364 systemd-logind[1496]: New session 28 of user core. Sep 9 00:05:54.251773 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 9 00:05:54.463893 sshd[4478]: Connection closed by 10.0.0.1 port 33506 Sep 9 00:05:54.464186 sshd-session[4475]: pam_unix(sshd:session): session closed for user core Sep 9 00:05:54.468407 systemd[1]: sshd@27-10.0.0.143:22-10.0.0.1:33506.service: Deactivated successfully. Sep 9 00:05:54.470506 systemd[1]: session-28.scope: Deactivated successfully. Sep 9 00:05:54.471265 systemd-logind[1496]: Session 28 logged out. Waiting for processes to exit. Sep 9 00:05:54.472258 systemd-logind[1496]: Removed session 28. Sep 9 00:05:59.476726 systemd[1]: Started sshd@28-10.0.0.143:22-10.0.0.1:33514.service - OpenSSH per-connection server daemon (10.0.0.1:33514). Sep 9 00:05:59.519751 sshd[4492]: Accepted publickey for core from 10.0.0.1 port 33514 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:05:59.521272 sshd-session[4492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:05:59.525693 systemd-logind[1496]: New session 29 of user core. Sep 9 00:05:59.535800 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 9 00:05:59.649228 sshd[4494]: Connection closed by 10.0.0.1 port 33514 Sep 9 00:05:59.649674 sshd-session[4492]: pam_unix(sshd:session): session closed for user core Sep 9 00:05:59.654103 systemd[1]: sshd@28-10.0.0.143:22-10.0.0.1:33514.service: Deactivated successfully. Sep 9 00:05:59.656463 systemd[1]: session-29.scope: Deactivated successfully. Sep 9 00:05:59.657216 systemd-logind[1496]: Session 29 logged out. Waiting for processes to exit. Sep 9 00:05:59.658184 systemd-logind[1496]: Removed session 29. 
Sep 9 00:06:02.105591 containerd[1508]: time="2025-09-09T00:06:02.105117568Z" level=info msg="StopPodSandbox for \"ec97ad1102f80e3de4c8ff8d79301e40542a22c9ec492f6afc8745f922585a22\"" Sep 9 00:06:02.129126 containerd[1508]: time="2025-09-09T00:06:02.105372278Z" level=info msg="TearDown network for sandbox \"ec97ad1102f80e3de4c8ff8d79301e40542a22c9ec492f6afc8745f922585a22\" successfully" Sep 9 00:06:02.129126 containerd[1508]: time="2025-09-09T00:06:02.128142986Z" level=info msg="StopPodSandbox for \"ec97ad1102f80e3de4c8ff8d79301e40542a22c9ec492f6afc8745f922585a22\" returns successfully" Sep 9 00:06:02.132130 containerd[1508]: time="2025-09-09T00:06:02.130466394Z" level=info msg="RemovePodSandbox for \"ec97ad1102f80e3de4c8ff8d79301e40542a22c9ec492f6afc8745f922585a22\"" Sep 9 00:06:02.132130 containerd[1508]: time="2025-09-09T00:06:02.130513492Z" level=info msg="Forcibly stopping sandbox \"ec97ad1102f80e3de4c8ff8d79301e40542a22c9ec492f6afc8745f922585a22\"" Sep 9 00:06:02.132130 containerd[1508]: time="2025-09-09T00:06:02.130707007Z" level=info msg="TearDown network for sandbox \"ec97ad1102f80e3de4c8ff8d79301e40542a22c9ec492f6afc8745f922585a22\" successfully" Sep 9 00:06:02.143601 containerd[1508]: time="2025-09-09T00:06:02.143515853Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec97ad1102f80e3de4c8ff8d79301e40542a22c9ec492f6afc8745f922585a22\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:06:02.143834 containerd[1508]: time="2025-09-09T00:06:02.143625329Z" level=info msg="RemovePodSandbox \"ec97ad1102f80e3de4c8ff8d79301e40542a22c9ec492f6afc8745f922585a22\" returns successfully" Sep 9 00:06:02.152575 containerd[1508]: time="2025-09-09T00:06:02.146688722Z" level=info msg="StopPodSandbox for \"a5f164b28207197e3fd75106ec8a2c5f9ab3c1dc00cf4b974f7178c83645b48c\"" Sep 9 00:06:02.152575 containerd[1508]: time="2025-09-09T00:06:02.146828916Z" level=info msg="TearDown network for sandbox \"a5f164b28207197e3fd75106ec8a2c5f9ab3c1dc00cf4b974f7178c83645b48c\" successfully" Sep 9 00:06:02.152575 containerd[1508]: time="2025-09-09T00:06:02.146842542Z" level=info msg="StopPodSandbox for \"a5f164b28207197e3fd75106ec8a2c5f9ab3c1dc00cf4b974f7178c83645b48c\" returns successfully" Sep 9 00:06:02.152575 containerd[1508]: time="2025-09-09T00:06:02.149247663Z" level=info msg="RemovePodSandbox for \"a5f164b28207197e3fd75106ec8a2c5f9ab3c1dc00cf4b974f7178c83645b48c\"" Sep 9 00:06:02.152575 containerd[1508]: time="2025-09-09T00:06:02.149273473Z" level=info msg="Forcibly stopping sandbox \"a5f164b28207197e3fd75106ec8a2c5f9ab3c1dc00cf4b974f7178c83645b48c\"" Sep 9 00:06:02.152575 containerd[1508]: time="2025-09-09T00:06:02.149384392Z" level=info msg="TearDown network for sandbox \"a5f164b28207197e3fd75106ec8a2c5f9ab3c1dc00cf4b974f7178c83645b48c\" successfully" Sep 9 00:06:02.161411 containerd[1508]: time="2025-09-09T00:06:02.161361560Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a5f164b28207197e3fd75106ec8a2c5f9ab3c1dc00cf4b974f7178c83645b48c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:06:02.161816 containerd[1508]: time="2025-09-09T00:06:02.161444005Z" level=info msg="RemovePodSandbox \"a5f164b28207197e3fd75106ec8a2c5f9ab3c1dc00cf4b974f7178c83645b48c\" returns successfully" Sep 9 00:06:04.663189 systemd[1]: Started sshd@29-10.0.0.143:22-10.0.0.1:57030.service - OpenSSH per-connection server daemon (10.0.0.1:57030). Sep 9 00:06:04.713550 sshd[4509]: Accepted publickey for core from 10.0.0.1 port 57030 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:06:04.715558 sshd-session[4509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:06:04.721007 systemd-logind[1496]: New session 30 of user core. Sep 9 00:06:04.729849 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 9 00:06:04.953491 sshd[4511]: Connection closed by 10.0.0.1 port 57030 Sep 9 00:06:04.954189 sshd-session[4509]: pam_unix(sshd:session): session closed for user core Sep 9 00:06:04.960947 systemd[1]: sshd@29-10.0.0.143:22-10.0.0.1:57030.service: Deactivated successfully. Sep 9 00:06:04.964368 systemd[1]: session-30.scope: Deactivated successfully. Sep 9 00:06:04.965429 systemd-logind[1496]: Session 30 logged out. Waiting for processes to exit. Sep 9 00:06:04.967437 systemd-logind[1496]: Removed session 30. Sep 9 00:06:09.968450 systemd[1]: Started sshd@30-10.0.0.143:22-10.0.0.1:34552.service - OpenSSH per-connection server daemon (10.0.0.1:34552). Sep 9 00:06:10.011376 sshd[4526]: Accepted publickey for core from 10.0.0.1 port 34552 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:06:10.012760 sshd-session[4526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:06:10.016993 systemd-logind[1496]: New session 31 of user core. Sep 9 00:06:10.034791 systemd[1]: Started session-31.scope - Session 31 of User core. Sep 9 00:06:10.183243 sshd[4528]: Connection closed by 10.0.0.1 port 34552 Sep 9 00:06:10.183638 sshd-session[4526]: pam_unix(sshd:session): session closed for user core Sep 9 00:06:10.187423 systemd[1]: sshd@30-10.0.0.143:22-10.0.0.1:34552.service: Deactivated successfully. Sep 9 00:06:10.189475 systemd[1]: session-31.scope: Deactivated successfully. Sep 9 00:06:10.190144 systemd-logind[1496]: Session 31 logged out. Waiting for processes to exit. Sep 9 00:06:10.191042 systemd-logind[1496]: Removed session 31. Sep 9 00:06:15.199043 systemd[1]: Started sshd@31-10.0.0.143:22-10.0.0.1:34554.service - OpenSSH per-connection server daemon (10.0.0.1:34554). Sep 9 00:06:15.242583 sshd[4541]: Accepted publickey for core from 10.0.0.1 port 34554 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:06:15.243988 sshd-session[4541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:06:15.248207 systemd-logind[1496]: New session 32 of user core. Sep 9 00:06:15.257801 systemd[1]: Started session-32.scope - Session 32 of User core. Sep 9 00:06:15.363925 sshd[4543]: Connection closed by 10.0.0.1 port 34554 Sep 9 00:06:15.364322 sshd-session[4541]: pam_unix(sshd:session): session closed for user core Sep 9 00:06:15.378476 systemd[1]: sshd@31-10.0.0.143:22-10.0.0.1:34554.service: Deactivated successfully. Sep 9 00:06:15.380695 systemd[1]: session-32.scope: Deactivated successfully. Sep 9 00:06:15.382377 systemd-logind[1496]: Session 32 logged out. Waiting for processes to exit. Sep 9 00:06:15.393229 systemd[1]: Started sshd@32-10.0.0.143:22-10.0.0.1:34564.service - OpenSSH per-connection server daemon (10.0.0.1:34564). 
Sep 9 00:06:15.394455 systemd-logind[1496]: Removed session 32. Sep 9 00:06:15.431091 sshd[4555]: Accepted publickey for core from 10.0.0.1 port 34564 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:06:15.432452 sshd-session[4555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:06:15.436989 systemd-logind[1496]: New session 33 of user core. Sep 9 00:06:15.446774 systemd[1]: Started session-33.scope - Session 33 of User core. Sep 9 00:06:16.273735 sshd[4558]: Connection closed by 10.0.0.1 port 34564 Sep 9 00:06:16.274134 sshd-session[4555]: pam_unix(sshd:session): session closed for user core Sep 9 00:06:16.286626 systemd[1]: sshd@32-10.0.0.143:22-10.0.0.1:34564.service: Deactivated successfully. Sep 9 00:06:16.288941 systemd[1]: session-33.scope: Deactivated successfully. Sep 9 00:06:16.290619 systemd-logind[1496]: Session 33 logged out. Waiting for processes to exit. Sep 9 00:06:16.301915 systemd[1]: Started sshd@33-10.0.0.143:22-10.0.0.1:34580.service - OpenSSH per-connection server daemon (10.0.0.1:34580). Sep 9 00:06:16.302932 systemd-logind[1496]: Removed session 33. Sep 9 00:06:16.345481 sshd[4568]: Accepted publickey for core from 10.0.0.1 port 34580 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:06:16.346905 sshd-session[4568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:06:16.351198 systemd-logind[1496]: New session 34 of user core. Sep 9 00:06:16.360783 systemd[1]: Started session-34.scope - Session 34 of User core. Sep 9 00:06:18.143602 sshd[4571]: Connection closed by 10.0.0.1 port 34580 Sep 9 00:06:18.145472 sshd-session[4568]: pam_unix(sshd:session): session closed for user core Sep 9 00:06:18.155607 systemd[1]: sshd@33-10.0.0.143:22-10.0.0.1:34580.service: Deactivated successfully. Sep 9 00:06:18.157627 systemd[1]: session-34.scope: Deactivated successfully. Sep 9 00:06:18.158308 systemd-logind[1496]: Session 34 logged out. Waiting for processes to exit. Sep 9 00:06:18.165145 systemd[1]: Started sshd@34-10.0.0.143:22-10.0.0.1:34582.service - OpenSSH per-connection server daemon (10.0.0.1:34582). Sep 9 00:06:18.166139 systemd-logind[1496]: Removed session 34. Sep 9 00:06:18.203491 sshd[4607]: Accepted publickey for core from 10.0.0.1 port 34582 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:06:18.205041 sshd-session[4607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:06:18.209286 systemd-logind[1496]: New session 35 of user core. Sep 9 00:06:18.215778 systemd[1]: Started session-35.scope - Session 35 of User core. Sep 9 00:06:18.489340 sshd[4610]: Connection closed by 10.0.0.1 port 34582 Sep 9 00:06:18.491288 sshd-session[4607]: pam_unix(sshd:session): session closed for user core Sep 9 00:06:18.500747 systemd[1]: sshd@34-10.0.0.143:22-10.0.0.1:34582.service: Deactivated successfully. Sep 9 00:06:18.502950 systemd[1]: session-35.scope: Deactivated successfully. Sep 9 00:06:18.503930 systemd-logind[1496]: Session 35 logged out. Waiting for processes to exit. Sep 9 00:06:18.512042 systemd[1]: Started sshd@35-10.0.0.143:22-10.0.0.1:34584.service - OpenSSH per-connection server daemon (10.0.0.1:34584). Sep 9 00:06:18.513246 systemd-logind[1496]: Removed session 35. 
Sep 9 00:06:18.551025 sshd[4620]: Accepted publickey for core from 10.0.0.1 port 34584 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:06:18.552568 sshd-session[4620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:06:18.557041 systemd-logind[1496]: New session 36 of user core. Sep 9 00:06:18.566789 systemd[1]: Started session-36.scope - Session 36 of User core. Sep 9 00:06:18.676047 sshd[4623]: Connection closed by 10.0.0.1 port 34584 Sep 9 00:06:18.676458 sshd-session[4620]: pam_unix(sshd:session): session closed for user core Sep 9 00:06:18.680926 systemd[1]: sshd@35-10.0.0.143:22-10.0.0.1:34584.service: Deactivated successfully. Sep 9 00:06:18.683387 systemd[1]: session-36.scope: Deactivated successfully. Sep 9 00:06:18.684136 systemd-logind[1496]: Session 36 logged out. Waiting for processes to exit. Sep 9 00:06:18.685328 systemd-logind[1496]: Removed session 36. Sep 9 00:06:23.112758 kubelet[2712]: E0909 00:06:23.112700 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:06:23.690376 systemd[1]: Started sshd@36-10.0.0.143:22-10.0.0.1:53100.service - OpenSSH per-connection server daemon (10.0.0.1:53100). Sep 9 00:06:23.732708 sshd[4636]: Accepted publickey for core from 10.0.0.1 port 53100 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:06:23.734377 sshd-session[4636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:06:23.738423 systemd-logind[1496]: New session 37 of user core. Sep 9 00:06:23.751774 systemd[1]: Started session-37.scope - Session 37 of User core. Sep 9 00:06:23.879502 sshd[4638]: Connection closed by 10.0.0.1 port 53100 Sep 9 00:06:23.879916 sshd-session[4636]: pam_unix(sshd:session): session closed for user core Sep 9 00:06:23.883733 systemd[1]: sshd@36-10.0.0.143:22-10.0.0.1:53100.service: Deactivated successfully. Sep 9 00:06:23.885814 systemd[1]: session-37.scope: Deactivated successfully. Sep 9 00:06:23.886455 systemd-logind[1496]: Session 37 logged out. Waiting for processes to exit. Sep 9 00:06:23.887316 systemd-logind[1496]: Removed session 37. Sep 9 00:06:28.112744 kubelet[2712]: E0909 00:06:28.112696 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:06:28.893198 systemd[1]: Started sshd@37-10.0.0.143:22-10.0.0.1:53112.service - OpenSSH per-connection server daemon (10.0.0.1:53112). Sep 9 00:06:28.935634 sshd[4651]: Accepted publickey for core from 10.0.0.1 port 53112 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:06:28.937253 sshd-session[4651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:06:28.941445 systemd-logind[1496]: New session 38 of user core. Sep 9 00:06:28.952777 systemd[1]: Started session-38.scope - Session 38 of User core. Sep 9 00:06:29.122168 sshd[4653]: Connection closed by 10.0.0.1 port 53112 Sep 9 00:06:29.122531 sshd-session[4651]: pam_unix(sshd:session): session closed for user core Sep 9 00:06:29.126218 systemd[1]: sshd@37-10.0.0.143:22-10.0.0.1:53112.service: Deactivated successfully. Sep 9 00:06:29.128243 systemd[1]: session-38.scope: Deactivated successfully. Sep 9 00:06:29.128940 systemd-logind[1496]: Session 38 logged out. Waiting for processes to exit. 
Sep 9 00:06:29.129751 systemd-logind[1496]: Removed session 38. Sep 9 00:06:34.141843 systemd[1]: Started sshd@38-10.0.0.143:22-10.0.0.1:43022.service - OpenSSH per-connection server daemon (10.0.0.1:43022). Sep 9 00:06:34.201084 sshd[4669]: Accepted publickey for core from 10.0.0.1 port 43022 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:06:34.203309 sshd-session[4669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:06:34.209778 systemd-logind[1496]: New session 39 of user core. Sep 9 00:06:34.220865 systemd[1]: Started session-39.scope - Session 39 of User core. Sep 9 00:06:34.384603 sshd[4671]: Connection closed by 10.0.0.1 port 43022 Sep 9 00:06:34.385091 sshd-session[4669]: pam_unix(sshd:session): session closed for user core Sep 9 00:06:34.390380 systemd[1]: sshd@38-10.0.0.143:22-10.0.0.1:43022.service: Deactivated successfully. Sep 9 00:06:34.393049 systemd[1]: session-39.scope: Deactivated successfully. Sep 9 00:06:34.394242 systemd-logind[1496]: Session 39 logged out. Waiting for processes to exit. Sep 9 00:06:34.395431 systemd-logind[1496]: Removed session 39. Sep 9 00:06:39.398050 systemd[1]: Started sshd@39-10.0.0.143:22-10.0.0.1:43028.service - OpenSSH per-connection server daemon (10.0.0.1:43028). Sep 9 00:06:39.442618 sshd[4688]: Accepted publickey for core from 10.0.0.1 port 43028 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:06:39.444224 sshd-session[4688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:06:39.448758 systemd-logind[1496]: New session 40 of user core. Sep 9 00:06:39.459818 systemd[1]: Started session-40.scope - Session 40 of User core. Sep 9 00:06:39.566110 sshd[4690]: Connection closed by 10.0.0.1 port 43028 Sep 9 00:06:39.566478 sshd-session[4688]: pam_unix(sshd:session): session closed for user core Sep 9 00:06:39.571014 systemd[1]: sshd@39-10.0.0.143:22-10.0.0.1:43028.service: Deactivated successfully. Sep 9 00:06:39.573323 systemd[1]: session-40.scope: Deactivated successfully. Sep 9 00:06:39.574080 systemd-logind[1496]: Session 40 logged out. Waiting for processes to exit. Sep 9 00:06:39.574960 systemd-logind[1496]: Removed session 40. Sep 9 00:06:42.113682 kubelet[2712]: E0909 00:06:42.113618 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:06:44.579929 systemd[1]: Started sshd@40-10.0.0.143:22-10.0.0.1:59270.service - OpenSSH per-connection server daemon (10.0.0.1:59270). Sep 9 00:06:44.623104 sshd[4703]: Accepted publickey for core from 10.0.0.1 port 59270 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:06:44.624657 sshd-session[4703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:06:44.628965 systemd-logind[1496]: New session 41 of user core. Sep 9 00:06:44.635780 systemd[1]: Started session-41.scope - Session 41 of User core. Sep 9 00:06:44.739676 sshd[4705]: Connection closed by 10.0.0.1 port 59270 Sep 9 00:06:44.740095 sshd-session[4703]: pam_unix(sshd:session): session closed for user core Sep 9 00:06:44.761782 systemd[1]: sshd@40-10.0.0.143:22-10.0.0.1:59270.service: Deactivated successfully. Sep 9 00:06:44.764110 systemd[1]: session-41.scope: Deactivated successfully. Sep 9 00:06:44.765979 systemd-logind[1496]: Session 41 logged out. Waiting for processes to exit. 
Sep 9 00:06:44.773885 systemd[1]: Started sshd@41-10.0.0.143:22-10.0.0.1:59278.service - OpenSSH per-connection server daemon (10.0.0.1:59278). Sep 9 00:06:44.774730 systemd-logind[1496]: Removed session 41. Sep 9 00:06:44.813234 sshd[4717]: Accepted publickey for core from 10.0.0.1 port 59278 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:06:44.814609 sshd-session[4717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:06:44.819107 systemd-logind[1496]: New session 42 of user core. Sep 9 00:06:44.832781 systemd[1]: Started session-42.scope - Session 42 of User core. Sep 9 00:06:46.802038 systemd[1]: run-containerd-runc-k8s.io-a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56-runc.5lV0fy.mount: Deactivated successfully. Sep 9 00:06:46.812994 containerd[1508]: time="2025-09-09T00:06:46.812953438Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:06:46.828844 containerd[1508]: time="2025-09-09T00:06:46.827953840Z" level=info msg="StopContainer for \"a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56\" with timeout 2 (s)" Sep 9 00:06:46.829151 containerd[1508]: time="2025-09-09T00:06:46.829040321Z" level=info msg="Stop container \"a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56\" with signal terminated" Sep 9 00:06:46.838506 systemd-networkd[1436]: lxc_health: Link DOWN Sep 9 00:06:46.838517 systemd-networkd[1436]: lxc_health: Lost carrier Sep 9 00:06:46.857034 containerd[1508]: time="2025-09-09T00:06:46.856100470Z" level=info msg="StopContainer for \"c86df02fea100818d81ca67ac6048638707fe5ad9ed7cf77c9cfab0c68b9b4e5\" with timeout 30 (s)" Sep 9 00:06:46.857034 containerd[1508]: time="2025-09-09T00:06:46.856908678Z" level=info msg="Stop container \"c86df02fea100818d81ca67ac6048638707fe5ad9ed7cf77c9cfab0c68b9b4e5\" with signal terminated" Sep 9 00:06:46.859514 systemd[1]: cri-containerd-a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56.scope: Deactivated successfully. Sep 9 00:06:46.859913 systemd[1]: cri-containerd-a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56.scope: Consumed 7.959s CPU time, 124.3M memory peak, 240K read from disk, 13.3M written to disk. Sep 9 00:06:46.872838 systemd[1]: cri-containerd-c86df02fea100818d81ca67ac6048638707fe5ad9ed7cf77c9cfab0c68b9b4e5.scope: Deactivated successfully. Sep 9 00:06:46.884886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56-rootfs.mount: Deactivated successfully. Sep 9 00:06:46.897347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c86df02fea100818d81ca67ac6048638707fe5ad9ed7cf77c9cfab0c68b9b4e5-rootfs.mount: Deactivated successfully. 
Sep 9 00:06:47.177901 kubelet[2712]: E0909 00:06:47.177764 2712 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 00:06:47.354855 containerd[1508]: time="2025-09-09T00:06:47.354760002Z" level=info msg="shim disconnected" id=a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56 namespace=k8s.io Sep 9 00:06:47.354855 containerd[1508]: time="2025-09-09T00:06:47.354837531Z" level=warning msg="cleaning up after shim disconnected" id=a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56 namespace=k8s.io Sep 9 00:06:47.354855 containerd[1508]: time="2025-09-09T00:06:47.354847370Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:06:47.355059 containerd[1508]: time="2025-09-09T00:06:47.354760033Z" level=info msg="shim disconnected" id=c86df02fea100818d81ca67ac6048638707fe5ad9ed7cf77c9cfab0c68b9b4e5 namespace=k8s.io Sep 9 00:06:47.355059 containerd[1508]: time="2025-09-09T00:06:47.354951219Z" level=warning msg="cleaning up after shim disconnected" id=c86df02fea100818d81ca67ac6048638707fe5ad9ed7cf77c9cfab0c68b9b4e5 namespace=k8s.io Sep 9 00:06:47.355059 containerd[1508]: time="2025-09-09T00:06:47.354959835Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:06:47.591767 containerd[1508]: time="2025-09-09T00:06:47.591633591Z" level=info msg="StopContainer for \"a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56\" returns successfully" Sep 9 00:06:47.592448 containerd[1508]: time="2025-09-09T00:06:47.592418916Z" level=info msg="StopPodSandbox for \"d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71\"" Sep 9 00:06:47.592492 containerd[1508]: time="2025-09-09T00:06:47.592456908Z" level=info msg="Container to stop \"904d44f14162bdad0d3212d1048d6c4b08f001b9f773cff4f7ca04895c25bddd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:06:47.592518 containerd[1508]: time="2025-09-09T00:06:47.592492527Z" level=info msg="Container to stop \"cf52ab59f72b827776749d3c9965459a97fc2a9a398104224b2494f59ccac07e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:06:47.592518 containerd[1508]: time="2025-09-09T00:06:47.592500732Z" level=info msg="Container to stop \"260a554f04f7c83039c2968abc8008bb0fd878b74e8b961d03b12369c8ecc79f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:06:47.592518 containerd[1508]: time="2025-09-09T00:06:47.592508988Z" level=info msg="Container to stop \"a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:06:47.592585 containerd[1508]: time="2025-09-09T00:06:47.592517815Z" level=info msg="Container to stop \"75f59f7a0c8467c2d2c5bd2f458034cc91f252be51ba191fc531f5d14601668e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:06:47.594972 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71-shm.mount: Deactivated successfully. Sep 9 00:06:47.599107 systemd[1]: cri-containerd-d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71.scope: Deactivated successfully. 
Sep 9 00:06:47.720765 containerd[1508]: time="2025-09-09T00:06:47.720704040Z" level=info msg="StopContainer for \"c86df02fea100818d81ca67ac6048638707fe5ad9ed7cf77c9cfab0c68b9b4e5\" returns successfully" Sep 9 00:06:47.721340 containerd[1508]: time="2025-09-09T00:06:47.721310722Z" level=info msg="StopPodSandbox for \"a868bf221ee8a43195461d3d30985e586db00bf1a66edb57930fd21837120a4a\"" Sep 9 00:06:47.721403 containerd[1508]: time="2025-09-09T00:06:47.721358434Z" level=info msg="Container to stop \"c86df02fea100818d81ca67ac6048638707fe5ad9ed7cf77c9cfab0c68b9b4e5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:06:47.729048 systemd[1]: cri-containerd-a868bf221ee8a43195461d3d30985e586db00bf1a66edb57930fd21837120a4a.scope: Deactivated successfully. Sep 9 00:06:47.775148 containerd[1508]: time="2025-09-09T00:06:47.775040788Z" level=info msg="shim disconnected" id=d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71 namespace=k8s.io Sep 9 00:06:47.775148 containerd[1508]: time="2025-09-09T00:06:47.775121804Z" level=warning msg="cleaning up after shim disconnected" id=d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71 namespace=k8s.io Sep 9 00:06:47.775148 containerd[1508]: time="2025-09-09T00:06:47.775134197Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:06:47.791206 containerd[1508]: time="2025-09-09T00:06:47.791166249Z" level=info msg="TearDown network for sandbox \"d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71\" successfully" Sep 9 00:06:47.791206 containerd[1508]: time="2025-09-09T00:06:47.791198510Z" level=info msg="StopPodSandbox for \"d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71\" returns successfully" Sep 9 00:06:47.795546 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a868bf221ee8a43195461d3d30985e586db00bf1a66edb57930fd21837120a4a-rootfs.mount: Deactivated successfully. Sep 9 00:06:47.795679 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a868bf221ee8a43195461d3d30985e586db00bf1a66edb57930fd21837120a4a-shm.mount: Deactivated successfully. Sep 9 00:06:47.795793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71-rootfs.mount: Deactivated successfully. 
Sep 9 00:06:47.882404 kubelet[2712]: I0909 00:06:47.882280 2712 scope.go:117] "RemoveContainer" containerID="a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56" Sep 9 00:06:47.883515 containerd[1508]: time="2025-09-09T00:06:47.883451672Z" level=info msg="RemoveContainer for \"a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56\"" Sep 9 00:06:47.895586 containerd[1508]: time="2025-09-09T00:06:47.895521984Z" level=info msg="shim disconnected" id=a868bf221ee8a43195461d3d30985e586db00bf1a66edb57930fd21837120a4a namespace=k8s.io Sep 9 00:06:47.895586 containerd[1508]: time="2025-09-09T00:06:47.895566390Z" level=warning msg="cleaning up after shim disconnected" id=a868bf221ee8a43195461d3d30985e586db00bf1a66edb57930fd21837120a4a namespace=k8s.io Sep 9 00:06:47.895586 containerd[1508]: time="2025-09-09T00:06:47.895575046Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:06:47.910701 containerd[1508]: time="2025-09-09T00:06:47.910632169Z" level=info msg="TearDown network for sandbox \"a868bf221ee8a43195461d3d30985e586db00bf1a66edb57930fd21837120a4a\" successfully" Sep 9 00:06:47.910701 containerd[1508]: time="2025-09-09T00:06:47.910685111Z" level=info msg="StopPodSandbox for \"a868bf221ee8a43195461d3d30985e586db00bf1a66edb57930fd21837120a4a\" returns successfully" Sep 9 00:06:47.962731 kubelet[2712]: I0909 00:06:47.962668 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-cilium-run\") pod \"2d032543-b3bf-4890-8439-c9581477f52f\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " Sep 9 00:06:47.962731 kubelet[2712]: I0909 00:06:47.962708 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-lib-modules\") pod \"2d032543-b3bf-4890-8439-c9581477f52f\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " Sep 9 00:06:47.962731 kubelet[2712]: I0909 00:06:47.962727 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-bpf-maps\") pod \"2d032543-b3bf-4890-8439-c9581477f52f\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " Sep 9 00:06:47.962731 kubelet[2712]: I0909 00:06:47.962741 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-cni-path\") pod \"2d032543-b3bf-4890-8439-c9581477f52f\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " Sep 9 00:06:47.962731 kubelet[2712]: I0909 00:06:47.962734 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2d032543-b3bf-4890-8439-c9581477f52f" (UID: "2d032543-b3bf-4890-8439-c9581477f52f"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:06:47.963040 kubelet[2712]: I0909 00:06:47.962763 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k92cz\" (UniqueName: \"kubernetes.io/projected/ff4f784b-22ec-4a63-97ec-f1e96a529319-kube-api-access-k92cz\") pod \"ff4f784b-22ec-4a63-97ec-f1e96a529319\" (UID: \"ff4f784b-22ec-4a63-97ec-f1e96a529319\") " Sep 9 00:06:47.963040 kubelet[2712]: I0909 00:06:47.962781 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2d032543-b3bf-4890-8439-c9581477f52f" (UID: "2d032543-b3bf-4890-8439-c9581477f52f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:06:47.963040 kubelet[2712]: I0909 00:06:47.962783 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-xtables-lock\") pod \"2d032543-b3bf-4890-8439-c9581477f52f\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " Sep 9 00:06:47.963040 kubelet[2712]: I0909 00:06:47.962822 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-host-proc-sys-kernel\") pod \"2d032543-b3bf-4890-8439-c9581477f52f\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " Sep 9 00:06:47.963040 kubelet[2712]: I0909 00:06:47.962845 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ff4f784b-22ec-4a63-97ec-f1e96a529319-cilium-config-path\") pod \"ff4f784b-22ec-4a63-97ec-f1e96a529319\" (UID: \"ff4f784b-22ec-4a63-97ec-f1e96a529319\") " Sep 9 00:06:47.963040 kubelet[2712]: I0909 00:06:47.962863 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-hostproc\") pod \"2d032543-b3bf-4890-8439-c9581477f52f\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " Sep 9 00:06:47.963040 kubelet[2712]: I0909 00:06:47.962821 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2d032543-b3bf-4890-8439-c9581477f52f" (UID: "2d032543-b3bf-4890-8439-c9581477f52f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:06:47.963040 kubelet[2712]: I0909 00:06:47.962881 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-cilium-cgroup\") pod \"2d032543-b3bf-4890-8439-c9581477f52f\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " Sep 9 00:06:47.963040 kubelet[2712]: I0909 00:06:47.962822 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-cni-path" (OuterVolumeSpecName: "cni-path") pod "2d032543-b3bf-4890-8439-c9581477f52f" (UID: "2d032543-b3bf-4890-8439-c9581477f52f"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:06:47.963040 kubelet[2712]: I0909 00:06:47.962849 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2d032543-b3bf-4890-8439-c9581477f52f" (UID: "2d032543-b3bf-4890-8439-c9581477f52f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:06:47.963040 kubelet[2712]: I0909 00:06:47.962901 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2d032543-b3bf-4890-8439-c9581477f52f-clustermesh-secrets\") pod \"2d032543-b3bf-4890-8439-c9581477f52f\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " Sep 9 00:06:47.963040 kubelet[2712]: I0909 00:06:47.962917 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-etc-cni-netd\") pod \"2d032543-b3bf-4890-8439-c9581477f52f\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " Sep 9 00:06:47.963040 kubelet[2712]: I0909 00:06:47.962933 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2d032543-b3bf-4890-8439-c9581477f52f-hubble-tls\") pod \"2d032543-b3bf-4890-8439-c9581477f52f\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " Sep 9 00:06:47.963040 kubelet[2712]: I0909 00:06:47.962950 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-host-proc-sys-net\") pod \"2d032543-b3bf-4890-8439-c9581477f52f\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " Sep 9 00:06:47.963040 kubelet[2712]: I0909 00:06:47.962965 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2d032543-b3bf-4890-8439-c9581477f52f-cilium-config-path\") pod \"2d032543-b3bf-4890-8439-c9581477f52f\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " Sep 9 00:06:47.963040 kubelet[2712]: I0909 00:06:47.962980 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjfjs\" (UniqueName: \"kubernetes.io/projected/2d032543-b3bf-4890-8439-c9581477f52f-kube-api-access-fjfjs\") pod \"2d032543-b3bf-4890-8439-c9581477f52f\" (UID: \"2d032543-b3bf-4890-8439-c9581477f52f\") " Sep 9 00:06:47.963040 kubelet[2712]: I0909 00:06:47.963008 2712 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 00:06:47.963040 kubelet[2712]: I0909 00:06:47.963020 2712 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 9 00:06:47.963040 kubelet[2712]: I0909 00:06:47.963033 2712 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 9 00:06:47.963499 kubelet[2712]: I0909 00:06:47.963042 2712 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 00:06:47.963499 kubelet[2712]: I0909 00:06:47.963051 2712 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:06:47.966972 kubelet[2712]: I0909 00:06:47.962866 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2d032543-b3bf-4890-8439-c9581477f52f" (UID: "2d032543-b3bf-4890-8439-c9581477f52f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:06:47.966972 kubelet[2712]: I0909 00:06:47.962914 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-hostproc" (OuterVolumeSpecName: "hostproc") pod "2d032543-b3bf-4890-8439-c9581477f52f" (UID: "2d032543-b3bf-4890-8439-c9581477f52f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:06:47.966972 kubelet[2712]: I0909 00:06:47.963211 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2d032543-b3bf-4890-8439-c9581477f52f" (UID: "2d032543-b3bf-4890-8439-c9581477f52f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:06:47.966972 kubelet[2712]: I0909 00:06:47.963226 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2d032543-b3bf-4890-8439-c9581477f52f" (UID: "2d032543-b3bf-4890-8439-c9581477f52f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:06:47.966972 kubelet[2712]: I0909 00:06:47.966522 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2d032543-b3bf-4890-8439-c9581477f52f" (UID: "2d032543-b3bf-4890-8439-c9581477f52f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:06:47.968226 systemd[1]: var-lib-kubelet-pods-2d032543\x2db3bf\x2d4890\x2d8439\x2dc9581477f52f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 00:06:47.969734 kubelet[2712]: I0909 00:06:47.968936 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff4f784b-22ec-4a63-97ec-f1e96a529319-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ff4f784b-22ec-4a63-97ec-f1e96a529319" (UID: "ff4f784b-22ec-4a63-97ec-f1e96a529319"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 00:06:47.969734 kubelet[2712]: I0909 00:06:47.968983 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d032543-b3bf-4890-8439-c9581477f52f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2d032543-b3bf-4890-8439-c9581477f52f" (UID: "2d032543-b3bf-4890-8439-c9581477f52f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 00:06:47.969734 kubelet[2712]: I0909 00:06:47.969445 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d032543-b3bf-4890-8439-c9581477f52f-kube-api-access-fjfjs" (OuterVolumeSpecName: "kube-api-access-fjfjs") pod "2d032543-b3bf-4890-8439-c9581477f52f" (UID: "2d032543-b3bf-4890-8439-c9581477f52f"). InnerVolumeSpecName "kube-api-access-fjfjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 00:06:47.969734 kubelet[2712]: I0909 00:06:47.969625 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d032543-b3bf-4890-8439-c9581477f52f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2d032543-b3bf-4890-8439-c9581477f52f" (UID: "2d032543-b3bf-4890-8439-c9581477f52f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 00:06:47.969734 kubelet[2712]: I0909 00:06:47.969721 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff4f784b-22ec-4a63-97ec-f1e96a529319-kube-api-access-k92cz" (OuterVolumeSpecName: "kube-api-access-k92cz") pod "ff4f784b-22ec-4a63-97ec-f1e96a529319" (UID: "ff4f784b-22ec-4a63-97ec-f1e96a529319"). InnerVolumeSpecName "kube-api-access-k92cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 00:06:47.972084 systemd[1]: var-lib-kubelet-pods-ff4f784b\x2d22ec\x2d4a63\x2d97ec\x2df1e96a529319-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk92cz.mount: Deactivated successfully. Sep 9 00:06:47.972196 systemd[1]: var-lib-kubelet-pods-2d032543\x2db3bf\x2d4890\x2d8439\x2dc9581477f52f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfjfjs.mount: Deactivated successfully. Sep 9 00:06:47.972281 systemd[1]: var-lib-kubelet-pods-2d032543\x2db3bf\x2d4890\x2d8439\x2dc9581477f52f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 00:06:47.972334 kubelet[2712]: I0909 00:06:47.972278 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d032543-b3bf-4890-8439-c9581477f52f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2d032543-b3bf-4890-8439-c9581477f52f" (UID: "2d032543-b3bf-4890-8439-c9581477f52f"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 00:06:47.991537 containerd[1508]: time="2025-09-09T00:06:47.991494100Z" level=info msg="RemoveContainer for \"a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56\" returns successfully" Sep 9 00:06:47.991860 kubelet[2712]: I0909 00:06:47.991834 2712 scope.go:117] "RemoveContainer" containerID="260a554f04f7c83039c2968abc8008bb0fd878b74e8b961d03b12369c8ecc79f" Sep 9 00:06:47.992958 containerd[1508]: time="2025-09-09T00:06:47.992922477Z" level=info msg="RemoveContainer for \"260a554f04f7c83039c2968abc8008bb0fd878b74e8b961d03b12369c8ecc79f\"" Sep 9 00:06:48.063158 kubelet[2712]: I0909 00:06:48.063119 2712 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 00:06:48.063158 kubelet[2712]: I0909 00:06:48.063147 2712 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 00:06:48.063158 kubelet[2712]: I0909 00:06:48.063157 2712 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2d032543-b3bf-4890-8439-c9581477f52f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 00:06:48.063311 kubelet[2712]: I0909 00:06:48.063166 2712 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 9 00:06:48.063311 kubelet[2712]: I0909 00:06:48.063176 2712 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 9 00:06:48.063311 kubelet[2712]: I0909 00:06:48.063185 2712 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2d032543-b3bf-4890-8439-c9581477f52f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:06:48.063311 kubelet[2712]: I0909 00:06:48.063193 2712 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2d032543-b3bf-4890-8439-c9581477f52f-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 00:06:48.063311 kubelet[2712]: I0909 00:06:48.063203 2712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjfjs\" (UniqueName: \"kubernetes.io/projected/2d032543-b3bf-4890-8439-c9581477f52f-kube-api-access-fjfjs\") on node \"localhost\" DevicePath \"\"" Sep 9 00:06:48.063311 kubelet[2712]: I0909 00:06:48.063212 2712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k92cz\" (UniqueName: \"kubernetes.io/projected/ff4f784b-22ec-4a63-97ec-f1e96a529319-kube-api-access-k92cz\") on node \"localhost\" DevicePath \"\"" Sep 9 00:06:48.063311 kubelet[2712]: I0909 00:06:48.063220 2712 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2d032543-b3bf-4890-8439-c9581477f52f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 00:06:48.063311 kubelet[2712]: I0909 00:06:48.063227 2712 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/ff4f784b-22ec-4a63-97ec-f1e96a529319-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:06:48.110011 containerd[1508]: time="2025-09-09T00:06:48.109981279Z" level=info msg="RemoveContainer for \"260a554f04f7c83039c2968abc8008bb0fd878b74e8b961d03b12369c8ecc79f\" returns successfully" Sep 9 00:06:48.110273 kubelet[2712]: I0909 00:06:48.110238 2712 scope.go:117] "RemoveContainer" containerID="75f59f7a0c8467c2d2c5bd2f458034cc91f252be51ba191fc531f5d14601668e" Sep 9 00:06:48.111329 containerd[1508]: time="2025-09-09T00:06:48.111283082Z" level=info msg="RemoveContainer for \"75f59f7a0c8467c2d2c5bd2f458034cc91f252be51ba191fc531f5d14601668e\"" Sep 9 00:06:48.121037 systemd[1]: Removed slice kubepods-besteffort-podff4f784b_22ec_4a63_97ec_f1e96a529319.slice - libcontainer container kubepods-besteffort-podff4f784b_22ec_4a63_97ec_f1e96a529319.slice. Sep 9 00:06:48.122561 systemd[1]: Removed slice kubepods-burstable-pod2d032543_b3bf_4890_8439_c9581477f52f.slice - libcontainer container kubepods-burstable-pod2d032543_b3bf_4890_8439_c9581477f52f.slice. Sep 9 00:06:48.122686 systemd[1]: kubepods-burstable-pod2d032543_b3bf_4890_8439_c9581477f52f.slice: Consumed 8.063s CPU time, 124.6M memory peak, 348K read from disk, 13.3M written to disk. Sep 9 00:06:48.198822 containerd[1508]: time="2025-09-09T00:06:48.198692447Z" level=info msg="RemoveContainer for \"75f59f7a0c8467c2d2c5bd2f458034cc91f252be51ba191fc531f5d14601668e\" returns successfully" Sep 9 00:06:48.198934 kubelet[2712]: I0909 00:06:48.198866 2712 scope.go:117] "RemoveContainer" containerID="cf52ab59f72b827776749d3c9965459a97fc2a9a398104224b2494f59ccac07e" Sep 9 00:06:48.200213 containerd[1508]: time="2025-09-09T00:06:48.200166010Z" level=info msg="RemoveContainer for \"cf52ab59f72b827776749d3c9965459a97fc2a9a398104224b2494f59ccac07e\"" Sep 9 00:06:48.291612 containerd[1508]: time="2025-09-09T00:06:48.291545870Z" level=info msg="RemoveContainer for \"cf52ab59f72b827776749d3c9965459a97fc2a9a398104224b2494f59ccac07e\" returns successfully" Sep 9 00:06:48.291859 kubelet[2712]: I0909 00:06:48.291806 2712 scope.go:117] "RemoveContainer" containerID="904d44f14162bdad0d3212d1048d6c4b08f001b9f773cff4f7ca04895c25bddd" Sep 9 00:06:48.292817 containerd[1508]: time="2025-09-09T00:06:48.292784201Z" level=info msg="RemoveContainer for \"904d44f14162bdad0d3212d1048d6c4b08f001b9f773cff4f7ca04895c25bddd\"" Sep 9 00:06:48.380911 containerd[1508]: time="2025-09-09T00:06:48.380882326Z" level=info msg="RemoveContainer for \"904d44f14162bdad0d3212d1048d6c4b08f001b9f773cff4f7ca04895c25bddd\" returns successfully" Sep 9 00:06:48.381062 kubelet[2712]: I0909 00:06:48.381041 2712 scope.go:117] "RemoveContainer" containerID="a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56" Sep 9 00:06:48.381269 containerd[1508]: time="2025-09-09T00:06:48.381227737Z" level=error msg="ContainerStatus for \"a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56\": not found" Sep 9 00:06:48.381366 kubelet[2712]: E0909 00:06:48.381333 2712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56\": not found" containerID="a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56" Sep 9 00:06:48.381459 
kubelet[2712]: I0909 00:06:48.381362 2712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56"} err="failed to get container status \"a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56\": rpc error: code = NotFound desc = an error occurred when try to find container \"a00a89d6bfc47efaa9d2ecc9328f324d17bebc2bd4d5ba9600b56a2add09ac56\": not found" Sep 9 00:06:48.381459 kubelet[2712]: I0909 00:06:48.381454 2712 scope.go:117] "RemoveContainer" containerID="260a554f04f7c83039c2968abc8008bb0fd878b74e8b961d03b12369c8ecc79f" Sep 9 00:06:48.381671 containerd[1508]: time="2025-09-09T00:06:48.381618355Z" level=error msg="ContainerStatus for \"260a554f04f7c83039c2968abc8008bb0fd878b74e8b961d03b12369c8ecc79f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"260a554f04f7c83039c2968abc8008bb0fd878b74e8b961d03b12369c8ecc79f\": not found" Sep 9 00:06:48.381787 kubelet[2712]: E0909 00:06:48.381764 2712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"260a554f04f7c83039c2968abc8008bb0fd878b74e8b961d03b12369c8ecc79f\": not found" containerID="260a554f04f7c83039c2968abc8008bb0fd878b74e8b961d03b12369c8ecc79f" Sep 9 00:06:48.381839 kubelet[2712]: I0909 00:06:48.381787 2712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"260a554f04f7c83039c2968abc8008bb0fd878b74e8b961d03b12369c8ecc79f"} err="failed to get container status \"260a554f04f7c83039c2968abc8008bb0fd878b74e8b961d03b12369c8ecc79f\": rpc error: code = NotFound desc = an error occurred when try to find container \"260a554f04f7c83039c2968abc8008bb0fd878b74e8b961d03b12369c8ecc79f\": not found" Sep 9 00:06:48.381839 kubelet[2712]: I0909 00:06:48.381803 2712 scope.go:117] "RemoveContainer" containerID="75f59f7a0c8467c2d2c5bd2f458034cc91f252be51ba191fc531f5d14601668e" Sep 9 00:06:48.381967 containerd[1508]: time="2025-09-09T00:06:48.381938999Z" level=error msg="ContainerStatus for \"75f59f7a0c8467c2d2c5bd2f458034cc91f252be51ba191fc531f5d14601668e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"75f59f7a0c8467c2d2c5bd2f458034cc91f252be51ba191fc531f5d14601668e\": not found" Sep 9 00:06:48.382054 kubelet[2712]: E0909 00:06:48.382038 2712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"75f59f7a0c8467c2d2c5bd2f458034cc91f252be51ba191fc531f5d14601668e\": not found" containerID="75f59f7a0c8467c2d2c5bd2f458034cc91f252be51ba191fc531f5d14601668e" Sep 9 00:06:48.382099 kubelet[2712]: I0909 00:06:48.382055 2712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"75f59f7a0c8467c2d2c5bd2f458034cc91f252be51ba191fc531f5d14601668e"} err="failed to get container status \"75f59f7a0c8467c2d2c5bd2f458034cc91f252be51ba191fc531f5d14601668e\": rpc error: code = NotFound desc = an error occurred when try to find container \"75f59f7a0c8467c2d2c5bd2f458034cc91f252be51ba191fc531f5d14601668e\": not found" Sep 9 00:06:48.382099 kubelet[2712]: I0909 00:06:48.382067 2712 scope.go:117] "RemoveContainer" containerID="cf52ab59f72b827776749d3c9965459a97fc2a9a398104224b2494f59ccac07e" Sep 9 00:06:48.382218 containerd[1508]: time="2025-09-09T00:06:48.382191242Z" level=error 
msg="ContainerStatus for \"cf52ab59f72b827776749d3c9965459a97fc2a9a398104224b2494f59ccac07e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cf52ab59f72b827776749d3c9965459a97fc2a9a398104224b2494f59ccac07e\": not found" Sep 9 00:06:48.382332 kubelet[2712]: E0909 00:06:48.382309 2712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cf52ab59f72b827776749d3c9965459a97fc2a9a398104224b2494f59ccac07e\": not found" containerID="cf52ab59f72b827776749d3c9965459a97fc2a9a398104224b2494f59ccac07e" Sep 9 00:06:48.382402 kubelet[2712]: I0909 00:06:48.382333 2712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cf52ab59f72b827776749d3c9965459a97fc2a9a398104224b2494f59ccac07e"} err="failed to get container status \"cf52ab59f72b827776749d3c9965459a97fc2a9a398104224b2494f59ccac07e\": rpc error: code = NotFound desc = an error occurred when try to find container \"cf52ab59f72b827776749d3c9965459a97fc2a9a398104224b2494f59ccac07e\": not found" Sep 9 00:06:48.382402 kubelet[2712]: I0909 00:06:48.382349 2712 scope.go:117] "RemoveContainer" containerID="904d44f14162bdad0d3212d1048d6c4b08f001b9f773cff4f7ca04895c25bddd" Sep 9 00:06:48.382556 containerd[1508]: time="2025-09-09T00:06:48.382527807Z" level=error msg="ContainerStatus for \"904d44f14162bdad0d3212d1048d6c4b08f001b9f773cff4f7ca04895c25bddd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"904d44f14162bdad0d3212d1048d6c4b08f001b9f773cff4f7ca04895c25bddd\": not found" Sep 9 00:06:48.382649 kubelet[2712]: E0909 00:06:48.382628 2712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"904d44f14162bdad0d3212d1048d6c4b08f001b9f773cff4f7ca04895c25bddd\": not found" containerID="904d44f14162bdad0d3212d1048d6c4b08f001b9f773cff4f7ca04895c25bddd" Sep 9 00:06:48.382702 kubelet[2712]: I0909 00:06:48.382664 2712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"904d44f14162bdad0d3212d1048d6c4b08f001b9f773cff4f7ca04895c25bddd"} err="failed to get container status \"904d44f14162bdad0d3212d1048d6c4b08f001b9f773cff4f7ca04895c25bddd\": rpc error: code = NotFound desc = an error occurred when try to find container \"904d44f14162bdad0d3212d1048d6c4b08f001b9f773cff4f7ca04895c25bddd\": not found" Sep 9 00:06:48.411657 sshd[4720]: Connection closed by 10.0.0.1 port 59278 Sep 9 00:06:48.412132 sshd-session[4717]: pam_unix(sshd:session): session closed for user core Sep 9 00:06:48.428884 systemd[1]: sshd@41-10.0.0.143:22-10.0.0.1:59278.service: Deactivated successfully. Sep 9 00:06:48.431147 systemd[1]: session-42.scope: Deactivated successfully. Sep 9 00:06:48.432650 systemd-logind[1496]: Session 42 logged out. Waiting for processes to exit. Sep 9 00:06:48.440897 systemd[1]: Started sshd@42-10.0.0.143:22-10.0.0.1:59290.service - OpenSSH per-connection server daemon (10.0.0.1:59290). Sep 9 00:06:48.441772 systemd-logind[1496]: Removed session 42. Sep 9 00:06:48.483012 sshd[4882]: Accepted publickey for core from 10.0.0.1 port 59290 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:06:48.484441 sshd-session[4882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:06:48.488790 systemd-logind[1496]: New session 43 of user core. 
Sep 9 00:06:48.508776 systemd[1]: Started session-43.scope - Session 43 of User core. Sep 9 00:06:48.884982 kubelet[2712]: I0909 00:06:48.884863 2712 scope.go:117] "RemoveContainer" containerID="c86df02fea100818d81ca67ac6048638707fe5ad9ed7cf77c9cfab0c68b9b4e5" Sep 9 00:06:48.888343 containerd[1508]: time="2025-09-09T00:06:48.888300526Z" level=info msg="RemoveContainer for \"c86df02fea100818d81ca67ac6048638707fe5ad9ed7cf77c9cfab0c68b9b4e5\"" Sep 9 00:06:48.995071 containerd[1508]: time="2025-09-09T00:06:48.995020534Z" level=info msg="RemoveContainer for \"c86df02fea100818d81ca67ac6048638707fe5ad9ed7cf77c9cfab0c68b9b4e5\" returns successfully" Sep 9 00:06:49.113093 kubelet[2712]: E0909 00:06:49.113048 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:06:50.098956 sshd[4885]: Connection closed by 10.0.0.1 port 59290 Sep 9 00:06:50.099337 sshd-session[4882]: pam_unix(sshd:session): session closed for user core Sep 9 00:06:50.111623 systemd[1]: sshd@42-10.0.0.143:22-10.0.0.1:59290.service: Deactivated successfully. Sep 9 00:06:50.113893 systemd[1]: session-43.scope: Deactivated successfully. Sep 9 00:06:50.114364 kubelet[2712]: I0909 00:06:50.114333 2712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d032543-b3bf-4890-8439-c9581477f52f" path="/var/lib/kubelet/pods/2d032543-b3bf-4890-8439-c9581477f52f/volumes" Sep 9 00:06:50.115368 kubelet[2712]: I0909 00:06:50.115328 2712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff4f784b-22ec-4a63-97ec-f1e96a529319" path="/var/lib/kubelet/pods/ff4f784b-22ec-4a63-97ec-f1e96a529319/volumes" Sep 9 00:06:50.115805 systemd-logind[1496]: Session 43 logged out. Waiting for processes to exit. Sep 9 00:06:50.124910 systemd[1]: Started sshd@43-10.0.0.143:22-10.0.0.1:39632.service - OpenSSH per-connection server daemon (10.0.0.1:39632). Sep 9 00:06:50.125903 systemd-logind[1496]: Removed session 43. Sep 9 00:06:50.163085 sshd[4896]: Accepted publickey for core from 10.0.0.1 port 39632 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:06:50.164454 sshd-session[4896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:06:50.168804 systemd-logind[1496]: New session 44 of user core. Sep 9 00:06:50.184838 systemd[1]: Started session-44.scope - Session 44 of User core. Sep 9 00:06:50.235703 sshd[4899]: Connection closed by 10.0.0.1 port 39632 Sep 9 00:06:50.236041 sshd-session[4896]: pam_unix(sshd:session): session closed for user core Sep 9 00:06:50.252720 systemd[1]: sshd@43-10.0.0.143:22-10.0.0.1:39632.service: Deactivated successfully. Sep 9 00:06:50.255029 systemd[1]: session-44.scope: Deactivated successfully. Sep 9 00:06:50.256567 systemd-logind[1496]: Session 44 logged out. Waiting for processes to exit. Sep 9 00:06:50.264881 systemd[1]: Started sshd@44-10.0.0.143:22-10.0.0.1:39638.service - OpenSSH per-connection server daemon (10.0.0.1:39638). Sep 9 00:06:50.265704 systemd-logind[1496]: Removed session 44. Sep 9 00:06:50.304009 sshd[4905]: Accepted publickey for core from 10.0.0.1 port 39638 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 9 00:06:50.305335 sshd-session[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:06:50.309763 systemd-logind[1496]: New session 45 of user core. Sep 9 00:06:50.319777 systemd[1]: Started session-45.scope - Session 45 of User core. 
Sep 9 00:06:50.781180 kubelet[2712]: E0909 00:06:50.781130 2712 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2d032543-b3bf-4890-8439-c9581477f52f" containerName="mount-cgroup" Sep 9 00:06:50.781180 kubelet[2712]: E0909 00:06:50.781163 2712 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2d032543-b3bf-4890-8439-c9581477f52f" containerName="mount-bpf-fs" Sep 9 00:06:50.781180 kubelet[2712]: E0909 00:06:50.781184 2712 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2d032543-b3bf-4890-8439-c9581477f52f" containerName="clean-cilium-state" Sep 9 00:06:50.781180 kubelet[2712]: E0909 00:06:50.781191 2712 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2d032543-b3bf-4890-8439-c9581477f52f" containerName="cilium-agent" Sep 9 00:06:50.781180 kubelet[2712]: E0909 00:06:50.781197 2712 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ff4f784b-22ec-4a63-97ec-f1e96a529319" containerName="cilium-operator" Sep 9 00:06:50.781449 kubelet[2712]: E0909 00:06:50.781205 2712 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2d032543-b3bf-4890-8439-c9581477f52f" containerName="apply-sysctl-overwrites" Sep 9 00:06:50.781449 kubelet[2712]: I0909 00:06:50.781233 2712 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d032543-b3bf-4890-8439-c9581477f52f" containerName="cilium-agent" Sep 9 00:06:50.781449 kubelet[2712]: I0909 00:06:50.781239 2712 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff4f784b-22ec-4a63-97ec-f1e96a529319" containerName="cilium-operator" Sep 9 00:06:50.792340 systemd[1]: Created slice kubepods-burstable-pode57e6cf6_1c0c_48ba_9940_cc2b6e4f95dd.slice - libcontainer container kubepods-burstable-pode57e6cf6_1c0c_48ba_9940_cc2b6e4f95dd.slice. 
Sep 9 00:06:50.977037 kubelet[2712]: I0909 00:06:50.976994 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bw8w\" (UniqueName: \"kubernetes.io/projected/e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd-kube-api-access-7bw8w\") pod \"cilium-qqcfn\" (UID: \"e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd\") " pod="kube-system/cilium-qqcfn"
Sep 9 00:06:50.977037 kubelet[2712]: I0909 00:06:50.977028 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd-cilium-cgroup\") pod \"cilium-qqcfn\" (UID: \"e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd\") " pod="kube-system/cilium-qqcfn"
Sep 9 00:06:50.977037 kubelet[2712]: I0909 00:06:50.977049 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd-hubble-tls\") pod \"cilium-qqcfn\" (UID: \"e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd\") " pod="kube-system/cilium-qqcfn"
Sep 9 00:06:50.977226 kubelet[2712]: I0909 00:06:50.977065 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd-etc-cni-netd\") pod \"cilium-qqcfn\" (UID: \"e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd\") " pod="kube-system/cilium-qqcfn"
Sep 9 00:06:50.977226 kubelet[2712]: I0909 00:06:50.977087 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd-cni-path\") pod \"cilium-qqcfn\" (UID: \"e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd\") " pod="kube-system/cilium-qqcfn"
Sep 9 00:06:50.977226 kubelet[2712]: I0909 00:06:50.977101 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd-xtables-lock\") pod \"cilium-qqcfn\" (UID: \"e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd\") " pod="kube-system/cilium-qqcfn"
Sep 9 00:06:50.977226 kubelet[2712]: I0909 00:06:50.977114 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd-cilium-ipsec-secrets\") pod \"cilium-qqcfn\" (UID: \"e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd\") " pod="kube-system/cilium-qqcfn"
Sep 9 00:06:50.977226 kubelet[2712]: I0909 00:06:50.977175 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd-cilium-config-path\") pod \"cilium-qqcfn\" (UID: \"e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd\") " pod="kube-system/cilium-qqcfn"
Sep 9 00:06:50.977226 kubelet[2712]: I0909 00:06:50.977221 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd-clustermesh-secrets\") pod \"cilium-qqcfn\" (UID: \"e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd\") " pod="kube-system/cilium-qqcfn"
Sep 9 00:06:50.977366 kubelet[2712]: I0909 00:06:50.977246 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd-hostproc\") pod \"cilium-qqcfn\" (UID: \"e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd\") " pod="kube-system/cilium-qqcfn"
Sep 9 00:06:50.977366 kubelet[2712]: I0909 00:06:50.977261 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd-lib-modules\") pod \"cilium-qqcfn\" (UID: \"e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd\") " pod="kube-system/cilium-qqcfn"
Sep 9 00:06:50.977366 kubelet[2712]: I0909 00:06:50.977274 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd-host-proc-sys-net\") pod \"cilium-qqcfn\" (UID: \"e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd\") " pod="kube-system/cilium-qqcfn"
Sep 9 00:06:50.977366 kubelet[2712]: I0909 00:06:50.977288 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd-bpf-maps\") pod \"cilium-qqcfn\" (UID: \"e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd\") " pod="kube-system/cilium-qqcfn"
Sep 9 00:06:50.977366 kubelet[2712]: I0909 00:06:50.977301 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd-host-proc-sys-kernel\") pod \"cilium-qqcfn\" (UID: \"e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd\") " pod="kube-system/cilium-qqcfn"
Sep 9 00:06:50.977366 kubelet[2712]: I0909 00:06:50.977317 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd-cilium-run\") pod \"cilium-qqcfn\" (UID: \"e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd\") " pod="kube-system/cilium-qqcfn"
Sep 9 00:06:51.395609 kubelet[2712]: E0909 00:06:51.395568 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:06:51.396201 containerd[1508]: time="2025-09-09T00:06:51.396167195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qqcfn,Uid:e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd,Namespace:kube-system,Attempt:0,}"
Sep 9 00:06:51.733221 containerd[1508]: time="2025-09-09T00:06:51.732938485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:06:51.733221 containerd[1508]: time="2025-09-09T00:06:51.732996225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:06:51.733221 containerd[1508]: time="2025-09-09T00:06:51.733010923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:06:51.733221 containerd[1508]: time="2025-09-09T00:06:51.733096247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:06:51.753790 systemd[1]: Started cri-containerd-dff5002e7ad692ff3dcdedd5fca5c97dbe20dd35c40d21861c085d3aa7c8ea26.scope - libcontainer container dff5002e7ad692ff3dcdedd5fca5c97dbe20dd35c40d21861c085d3aa7c8ea26.
Sep 9 00:06:51.777005 containerd[1508]: time="2025-09-09T00:06:51.776964066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qqcfn,Uid:e57e6cf6-1c0c-48ba-9940-cc2b6e4f95dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"dff5002e7ad692ff3dcdedd5fca5c97dbe20dd35c40d21861c085d3aa7c8ea26\""
Sep 9 00:06:51.778351 kubelet[2712]: E0909 00:06:51.778001 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:06:51.779710 containerd[1508]: time="2025-09-09T00:06:51.779686698Z" level=info msg="CreateContainer within sandbox \"dff5002e7ad692ff3dcdedd5fca5c97dbe20dd35c40d21861c085d3aa7c8ea26\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 9 00:06:52.179250 kubelet[2712]: E0909 00:06:52.179110 2712 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 9 00:06:52.398073 containerd[1508]: time="2025-09-09T00:06:52.398019271Z" level=info msg="CreateContainer within sandbox \"dff5002e7ad692ff3dcdedd5fca5c97dbe20dd35c40d21861c085d3aa7c8ea26\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9128952730261e4492fc65153e7715218fd90920dfd4a5dd16877195dd43c53a\""
Sep 9 00:06:52.398530 containerd[1508]: time="2025-09-09T00:06:52.398507464Z" level=info msg="StartContainer for \"9128952730261e4492fc65153e7715218fd90920dfd4a5dd16877195dd43c53a\""
Sep 9 00:06:52.429794 systemd[1]: Started cri-containerd-9128952730261e4492fc65153e7715218fd90920dfd4a5dd16877195dd43c53a.scope - libcontainer container 9128952730261e4492fc65153e7715218fd90920dfd4a5dd16877195dd43c53a.
Sep 9 00:06:52.498206 systemd[1]: cri-containerd-9128952730261e4492fc65153e7715218fd90920dfd4a5dd16877195dd43c53a.scope: Deactivated successfully.
Sep 9 00:06:52.544601 containerd[1508]: time="2025-09-09T00:06:52.544522559Z" level=info msg="StartContainer for \"9128952730261e4492fc65153e7715218fd90920dfd4a5dd16877195dd43c53a\" returns successfully"
Sep 9 00:06:52.567824 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9128952730261e4492fc65153e7715218fd90920dfd4a5dd16877195dd43c53a-rootfs.mount: Deactivated successfully.
Sep 9 00:06:52.871673 containerd[1508]: time="2025-09-09T00:06:52.871580007Z" level=info msg="shim disconnected" id=9128952730261e4492fc65153e7715218fd90920dfd4a5dd16877195dd43c53a namespace=k8s.io
Sep 9 00:06:52.871673 containerd[1508]: time="2025-09-09T00:06:52.871639471Z" level=warning msg="cleaning up after shim disconnected" id=9128952730261e4492fc65153e7715218fd90920dfd4a5dd16877195dd43c53a namespace=k8s.io
Sep 9 00:06:52.871673 containerd[1508]: time="2025-09-09T00:06:52.871674268Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 00:06:52.934894 kubelet[2712]: E0909 00:06:52.934849 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:06:52.936833 containerd[1508]: time="2025-09-09T00:06:52.936735184Z" level=info msg="CreateContainer within sandbox \"dff5002e7ad692ff3dcdedd5fca5c97dbe20dd35c40d21861c085d3aa7c8ea26\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 9 00:06:53.206624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2746213552.mount: Deactivated successfully.
Sep 9 00:06:53.293268 containerd[1508]: time="2025-09-09T00:06:53.291989454Z" level=info msg="CreateContainer within sandbox \"dff5002e7ad692ff3dcdedd5fca5c97dbe20dd35c40d21861c085d3aa7c8ea26\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c05e41879395bb7e250d9614db8cd75eb5cc37c27e1d97f50c0e4ab99f01f5d6\""
Sep 9 00:06:53.293268 containerd[1508]: time="2025-09-09T00:06:53.292935214Z" level=info msg="StartContainer for \"c05e41879395bb7e250d9614db8cd75eb5cc37c27e1d97f50c0e4ab99f01f5d6\""
Sep 9 00:06:53.363055 systemd[1]: Started cri-containerd-c05e41879395bb7e250d9614db8cd75eb5cc37c27e1d97f50c0e4ab99f01f5d6.scope - libcontainer container c05e41879395bb7e250d9614db8cd75eb5cc37c27e1d97f50c0e4ab99f01f5d6.
Sep 9 00:06:53.439911 systemd[1]: cri-containerd-c05e41879395bb7e250d9614db8cd75eb5cc37c27e1d97f50c0e4ab99f01f5d6.scope: Deactivated successfully.
Sep 9 00:06:53.508267 containerd[1508]: time="2025-09-09T00:06:53.508204605Z" level=info msg="StartContainer for \"c05e41879395bb7e250d9614db8cd75eb5cc37c27e1d97f50c0e4ab99f01f5d6\" returns successfully"
Sep 9 00:06:53.532515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c05e41879395bb7e250d9614db8cd75eb5cc37c27e1d97f50c0e4ab99f01f5d6-rootfs.mount: Deactivated successfully.
Sep 9 00:06:53.681488 containerd[1508]: time="2025-09-09T00:06:53.678577612Z" level=info msg="shim disconnected" id=c05e41879395bb7e250d9614db8cd75eb5cc37c27e1d97f50c0e4ab99f01f5d6 namespace=k8s.io
Sep 9 00:06:53.681488 containerd[1508]: time="2025-09-09T00:06:53.678670911Z" level=warning msg="cleaning up after shim disconnected" id=c05e41879395bb7e250d9614db8cd75eb5cc37c27e1d97f50c0e4ab99f01f5d6 namespace=k8s.io
Sep 9 00:06:53.681488 containerd[1508]: time="2025-09-09T00:06:53.678683886Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 00:06:53.950740 kubelet[2712]: E0909 00:06:53.946734 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:06:53.962925 containerd[1508]: time="2025-09-09T00:06:53.962856428Z" level=info msg="CreateContainer within sandbox \"dff5002e7ad692ff3dcdedd5fca5c97dbe20dd35c40d21861c085d3aa7c8ea26\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 9 00:06:55.199746 containerd[1508]: time="2025-09-09T00:06:55.199669114Z" level=info msg="CreateContainer within sandbox \"dff5002e7ad692ff3dcdedd5fca5c97dbe20dd35c40d21861c085d3aa7c8ea26\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"82ec6fa0e191a04fc8abe8bee458edae6dba20a2605e05378bd83b174e8ab37b\""
Sep 9 00:06:55.200469 containerd[1508]: time="2025-09-09T00:06:55.200397006Z" level=info msg="StartContainer for \"82ec6fa0e191a04fc8abe8bee458edae6dba20a2605e05378bd83b174e8ab37b\""
Sep 9 00:06:55.239797 systemd[1]: Started cri-containerd-82ec6fa0e191a04fc8abe8bee458edae6dba20a2605e05378bd83b174e8ab37b.scope - libcontainer container 82ec6fa0e191a04fc8abe8bee458edae6dba20a2605e05378bd83b174e8ab37b.
Sep 9 00:06:55.366250 systemd[1]: cri-containerd-82ec6fa0e191a04fc8abe8bee458edae6dba20a2605e05378bd83b174e8ab37b.scope: Deactivated successfully.
Sep 9 00:06:55.438616 containerd[1508]: time="2025-09-09T00:06:55.438553709Z" level=info msg="StartContainer for \"82ec6fa0e191a04fc8abe8bee458edae6dba20a2605e05378bd83b174e8ab37b\" returns successfully"
Sep 9 00:06:55.460435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82ec6fa0e191a04fc8abe8bee458edae6dba20a2605e05378bd83b174e8ab37b-rootfs.mount: Deactivated successfully.
Sep 9 00:06:55.764852 kubelet[2712]: I0909 00:06:55.764792 2712 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T00:06:55Z","lastTransitionTime":"2025-09-09T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 9 00:06:55.813260 containerd[1508]: time="2025-09-09T00:06:55.813192097Z" level=info msg="shim disconnected" id=82ec6fa0e191a04fc8abe8bee458edae6dba20a2605e05378bd83b174e8ab37b namespace=k8s.io
Sep 9 00:06:55.813260 containerd[1508]: time="2025-09-09T00:06:55.813250168Z" level=warning msg="cleaning up after shim disconnected" id=82ec6fa0e191a04fc8abe8bee458edae6dba20a2605e05378bd83b174e8ab37b namespace=k8s.io
Sep 9 00:06:55.813260 containerd[1508]: time="2025-09-09T00:06:55.813259415Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 00:06:56.031510 kubelet[2712]: E0909 00:06:56.031360 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:06:56.033784 containerd[1508]: time="2025-09-09T00:06:56.033736195Z" level=info msg="CreateContainer within sandbox \"dff5002e7ad692ff3dcdedd5fca5c97dbe20dd35c40d21861c085d3aa7c8ea26\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 00:06:56.675280 containerd[1508]: time="2025-09-09T00:06:56.675235307Z" level=info msg="CreateContainer within sandbox \"dff5002e7ad692ff3dcdedd5fca5c97dbe20dd35c40d21861c085d3aa7c8ea26\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5c8d7cb78a6297872f6cf9662b98c6f043d63c0d9573574a48167e447c7f2342\""
Sep 9 00:06:56.675846 containerd[1508]: time="2025-09-09T00:06:56.675797361Z" level=info msg="StartContainer for \"5c8d7cb78a6297872f6cf9662b98c6f043d63c0d9573574a48167e447c7f2342\""
Sep 9 00:06:56.705773 systemd[1]: Started cri-containerd-5c8d7cb78a6297872f6cf9662b98c6f043d63c0d9573574a48167e447c7f2342.scope - libcontainer container 5c8d7cb78a6297872f6cf9662b98c6f043d63c0d9573574a48167e447c7f2342.
Sep 9 00:06:56.727757 systemd[1]: cri-containerd-5c8d7cb78a6297872f6cf9662b98c6f043d63c0d9573574a48167e447c7f2342.scope: Deactivated successfully.
Sep 9 00:06:56.853088 containerd[1508]: time="2025-09-09T00:06:56.853020580Z" level=info msg="StartContainer for \"5c8d7cb78a6297872f6cf9662b98c6f043d63c0d9573574a48167e447c7f2342\" returns successfully"
Sep 9 00:06:56.872919 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c8d7cb78a6297872f6cf9662b98c6f043d63c0d9573574a48167e447c7f2342-rootfs.mount: Deactivated successfully.
Sep 9 00:06:57.036059 kubelet[2712]: E0909 00:06:57.036023 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:06:57.076774 containerd[1508]: time="2025-09-09T00:06:57.076674232Z" level=info msg="shim disconnected" id=5c8d7cb78a6297872f6cf9662b98c6f043d63c0d9573574a48167e447c7f2342 namespace=k8s.io
Sep 9 00:06:57.076774 containerd[1508]: time="2025-09-09T00:06:57.076738043Z" level=warning msg="cleaning up after shim disconnected" id=5c8d7cb78a6297872f6cf9662b98c6f043d63c0d9573574a48167e447c7f2342 namespace=k8s.io
Sep 9 00:06:57.076774 containerd[1508]: time="2025-09-09T00:06:57.076746840Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 00:06:57.180130 kubelet[2712]: E0909 00:06:57.180093 2712 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 9 00:06:58.039456 kubelet[2712]: E0909 00:06:58.039421 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:06:58.041432 containerd[1508]: time="2025-09-09T00:06:58.041395203Z" level=info msg="CreateContainer within sandbox \"dff5002e7ad692ff3dcdedd5fca5c97dbe20dd35c40d21861c085d3aa7c8ea26\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 00:06:58.321910 containerd[1508]: time="2025-09-09T00:06:58.321785620Z" level=info msg="CreateContainer within sandbox \"dff5002e7ad692ff3dcdedd5fca5c97dbe20dd35c40d21861c085d3aa7c8ea26\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5cf5be5da00e225b6b0f94ed1f589083f1c7734ab0132ebd9592a7010f7909ac\""
Sep 9 00:06:58.323498 containerd[1508]: time="2025-09-09T00:06:58.322429792Z" level=info msg="StartContainer for \"5cf5be5da00e225b6b0f94ed1f589083f1c7734ab0132ebd9592a7010f7909ac\""
Sep 9 00:06:58.352787 systemd[1]: Started cri-containerd-5cf5be5da00e225b6b0f94ed1f589083f1c7734ab0132ebd9592a7010f7909ac.scope - libcontainer container 5cf5be5da00e225b6b0f94ed1f589083f1c7734ab0132ebd9592a7010f7909ac.
Sep 9 00:06:58.573094 containerd[1508]: time="2025-09-09T00:06:58.572932115Z" level=info msg="StartContainer for \"5cf5be5da00e225b6b0f94ed1f589083f1c7734ab0132ebd9592a7010f7909ac\" returns successfully"
Sep 9 00:06:58.919693 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 9 00:06:59.044464 kubelet[2712]: E0909 00:06:59.044424 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:06:59.194118 kubelet[2712]: I0909 00:06:59.193952 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qqcfn" podStartSLOduration=9.193928216 podStartE2EDuration="9.193928216s" podCreationTimestamp="2025-09-09 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:06:59.193434974 +0000 UTC m=+177.163814733" watchObservedRunningTime="2025-09-09 00:06:59.193928216 +0000 UTC m=+177.164307966"
Sep 9 00:07:00.045701 kubelet[2712]: E0909 00:07:00.045665 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:07:00.258736 systemd[1]: run-containerd-runc-k8s.io-5cf5be5da00e225b6b0f94ed1f589083f1c7734ab0132ebd9592a7010f7909ac-runc.Gy6Yz6.mount: Deactivated successfully.
Sep 9 00:07:01.047448 kubelet[2712]: E0909 00:07:01.047229 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:07:02.165195 containerd[1508]: time="2025-09-09T00:07:02.164442983Z" level=info msg="StopPodSandbox for \"d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71\""
Sep 9 00:07:02.165195 containerd[1508]: time="2025-09-09T00:07:02.164562100Z" level=info msg="TearDown network for sandbox \"d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71\" successfully"
Sep 9 00:07:02.165195 containerd[1508]: time="2025-09-09T00:07:02.164572680Z" level=info msg="StopPodSandbox for \"d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71\" returns successfully"
Sep 9 00:07:02.165134 systemd-networkd[1436]: lxc_health: Link UP
Sep 9 00:07:02.167263 containerd[1508]: time="2025-09-09T00:07:02.167070591Z" level=info msg="RemovePodSandbox for \"d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71\""
Sep 9 00:07:02.167263 containerd[1508]: time="2025-09-09T00:07:02.167098935Z" level=info msg="Forcibly stopping sandbox \"d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71\""
Sep 9 00:07:02.167263 containerd[1508]: time="2025-09-09T00:07:02.167149962Z" level=info msg="TearDown network for sandbox \"d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71\" successfully"
Sep 9 00:07:02.168110 systemd-networkd[1436]: lxc_health: Gained carrier
Sep 9 00:07:02.572353 containerd[1508]: time="2025-09-09T00:07:02.572223130Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 9 00:07:02.572353 containerd[1508]: time="2025-09-09T00:07:02.572297612Z" level=info msg="RemovePodSandbox \"d890118b873026eab3b6bcf12c11b278ef912d6b8968c0b9d79be0a2911eba71\" returns successfully"
Sep 9 00:07:02.573576 containerd[1508]: time="2025-09-09T00:07:02.573548722Z" level=info msg="StopPodSandbox for \"a868bf221ee8a43195461d3d30985e586db00bf1a66edb57930fd21837120a4a\""
Sep 9 00:07:02.573765 containerd[1508]: time="2025-09-09T00:07:02.573746089Z" level=info msg="TearDown network for sandbox \"a868bf221ee8a43195461d3d30985e586db00bf1a66edb57930fd21837120a4a\" successfully"
Sep 9 00:07:02.574150 containerd[1508]: time="2025-09-09T00:07:02.573952763Z" level=info msg="StopPodSandbox for \"a868bf221ee8a43195461d3d30985e586db00bf1a66edb57930fd21837120a4a\" returns successfully"
Sep 9 00:07:02.575840 containerd[1508]: time="2025-09-09T00:07:02.574244971Z" level=info msg="RemovePodSandbox for \"a868bf221ee8a43195461d3d30985e586db00bf1a66edb57930fd21837120a4a\""
Sep 9 00:07:02.575840 containerd[1508]: time="2025-09-09T00:07:02.574262494Z" level=info msg="Forcibly stopping sandbox \"a868bf221ee8a43195461d3d30985e586db00bf1a66edb57930fd21837120a4a\""
Sep 9 00:07:02.575840 containerd[1508]: time="2025-09-09T00:07:02.574329914Z" level=info msg="TearDown network for sandbox \"a868bf221ee8a43195461d3d30985e586db00bf1a66edb57930fd21837120a4a\" successfully"
Sep 9 00:07:02.581566 containerd[1508]: time="2025-09-09T00:07:02.581243107Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a868bf221ee8a43195461d3d30985e586db00bf1a66edb57930fd21837120a4a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 9 00:07:02.581726 containerd[1508]: time="2025-09-09T00:07:02.581708416Z" level=info msg="RemovePodSandbox \"a868bf221ee8a43195461d3d30985e586db00bf1a66edb57930fd21837120a4a\" returns successfully"
Sep 9 00:07:02.663336 kubelet[2712]: E0909 00:07:02.663279 2712 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:43858->127.0.0.1:33507: write tcp 127.0.0.1:43858->127.0.0.1:33507: write: broken pipe
Sep 9 00:07:03.397142 kubelet[2712]: E0909 00:07:03.397098 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:07:03.874838 systemd-networkd[1436]: lxc_health: Gained IPv6LL
Sep 9 00:07:04.053243 kubelet[2712]: E0909 00:07:04.053192 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:07:04.113974 kubelet[2712]: E0909 00:07:04.113245 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:07:05.055794 kubelet[2712]: E0909 00:07:05.055748 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:07:13.147059 sshd[4908]: Connection closed by 10.0.0.1 port 39638
Sep 9 00:07:13.147655 sshd-session[4905]: pam_unix(sshd:session): session closed for user core
Sep 9 00:07:13.151508 systemd[1]: sshd@44-10.0.0.143:22-10.0.0.1:39638.service: Deactivated successfully.
Sep 9 00:07:13.153855 systemd[1]: session-45.scope: Deactivated successfully.
Sep 9 00:07:13.154523 systemd-logind[1496]: Session 45 logged out. Waiting for processes to exit.
Sep 9 00:07:13.155385 systemd-logind[1496]: Removed session 45.
Sep 9 00:07:14.113111 kubelet[2712]: E0909 00:07:14.113065 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"