Sep 13 00:03:12.920170 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025 Sep 13 00:03:12.920197 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 00:03:12.920215 kernel: BIOS-provided physical RAM map: Sep 13 00:03:12.920224 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 13 00:03:12.920232 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 13 00:03:12.920241 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 13 00:03:12.920251 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Sep 13 00:03:12.920260 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Sep 13 00:03:12.920268 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 13 00:03:12.920282 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Sep 13 00:03:12.920291 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 13 00:03:12.920300 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 13 00:03:12.920311 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 13 00:03:12.920320 kernel: NX (Execute Disable) protection: active Sep 13 00:03:12.920330 kernel: APIC: Static calls initialized Sep 13 00:03:12.920349 kernel: SMBIOS 2.8 present. 
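The entries above record the kernel command line and the BIOS-provided e820 memory map. As a rough cross-check against the "Memory: ... available" figure that appears later in this log, here is a minimal sketch (Python; the helper name and the sample snippet are illustrative, not part of the boot output) that totals the ranges marked usable:

```python
import re

# Illustrative only: sum the "usable" BIOS-e820 ranges from a captured boot log.
# The pattern mirrors the format seen above, e.g.
#   BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(log_text: str) -> int:
    total = 0
    for start, end, kind in E820_RE.findall(log_text):
        if kind == "usable":
            total += int(end, 16) - int(start, 16) + 1  # ranges are inclusive
    return total

# Example with the two usable ranges logged above:
sample = """\
BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
"""
print(usable_bytes(sample) // 1024, "KiB usable")  # roughly matches the later "Memory:" line
```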
Sep 13 00:03:12.920358 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Sep 13 00:03:12.920367 kernel: Hypervisor detected: KVM Sep 13 00:03:12.920376 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 13 00:03:12.920385 kernel: kvm-clock: using sched offset of 3152491446 cycles Sep 13 00:03:12.920395 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 13 00:03:12.920404 kernel: tsc: Detected 2794.748 MHz processor Sep 13 00:03:12.920414 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 13 00:03:12.920424 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 13 00:03:12.920439 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Sep 13 00:03:12.920448 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Sep 13 00:03:12.920457 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 13 00:03:12.920466 kernel: Using GB pages for direct mapping Sep 13 00:03:12.920476 kernel: ACPI: Early table checksum verification disabled Sep 13 00:03:12.920485 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Sep 13 00:03:12.920494 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:03:12.920504 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:03:12.920514 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:03:12.920533 kernel: ACPI: FACS 0x000000009CFE0000 000040 Sep 13 00:03:12.920544 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:03:12.920556 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:03:12.920567 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:03:12.920579 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:03:12.920591 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Sep 13 00:03:12.920620 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Sep 13 00:03:12.920642 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Sep 13 00:03:12.920659 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Sep 13 00:03:12.920669 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Sep 13 00:03:12.920679 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Sep 13 00:03:12.920688 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Sep 13 00:03:12.920700 kernel: No NUMA configuration found Sep 13 00:03:12.920710 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Sep 13 00:03:12.920725 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Sep 13 00:03:12.920735 kernel: Zone ranges: Sep 13 00:03:12.920754 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 13 00:03:12.920764 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Sep 13 00:03:12.920774 kernel: Normal empty Sep 13 00:03:12.920783 kernel: Movable zone start for each node Sep 13 00:03:12.920793 kernel: Early memory node ranges Sep 13 00:03:12.920803 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 13 00:03:12.920812 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Sep 13 00:03:12.920823 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Sep 13 00:03:12.920840 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 13 00:03:12.920852 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 13 00:03:12.920862 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Sep 13 00:03:12.920877 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 13 00:03:12.920887 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 13 00:03:12.920907 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 13 00:03:12.920926 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 13 00:03:12.920945 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 13 00:03:12.920968 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 13 00:03:12.921000 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 13 00:03:12.921019 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 13 00:03:12.921042 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 13 00:03:12.921055 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 13 00:03:12.921065 kernel: TSC deadline timer available Sep 13 00:03:12.921075 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 13 00:03:12.921085 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 13 00:03:12.921095 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 13 00:03:12.921109 kernel: kvm-guest: setup PV sched yield Sep 13 00:03:12.921125 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Sep 13 00:03:12.921136 kernel: Booting paravirtualized kernel on KVM Sep 13 00:03:12.921146 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 13 00:03:12.921155 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 13 00:03:12.921165 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288 Sep 13 00:03:12.921175 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152 Sep 13 00:03:12.921185 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 13 00:03:12.921194 kernel: kvm-guest: PV spinlocks enabled Sep 13 00:03:12.921204 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 13 00:03:12.921222 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 00:03:12.921233 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 13 00:03:12.921243 kernel: random: crng init done Sep 13 00:03:12.921253 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 13 00:03:12.921263 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 13 00:03:12.921273 kernel: Fallback order for Node 0: 0 Sep 13 00:03:12.921283 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Sep 13 00:03:12.921302 kernel: Policy zone: DMA32 Sep 13 00:03:12.921318 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 13 00:03:12.921329 kernel: Memory: 2434588K/2571752K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 136904K reserved, 0K cma-reserved) Sep 13 00:03:12.921339 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 13 00:03:12.921349 kernel: ftrace: allocating 37974 entries in 149 pages Sep 13 00:03:12.921358 kernel: ftrace: allocated 149 pages with 4 groups Sep 13 00:03:12.921379 kernel: Dynamic Preempt: voluntary Sep 13 00:03:12.921389 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 13 00:03:12.921414 kernel: rcu: RCU event tracing is enabled. Sep 13 00:03:12.921425 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 13 00:03:12.921441 kernel: Trampoline variant of Tasks RCU enabled. Sep 13 00:03:12.921452 kernel: Rude variant of Tasks RCU enabled. Sep 13 00:03:12.921462 kernel: Tracing variant of Tasks RCU enabled. Sep 13 00:03:12.921472 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 13 00:03:12.921485 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 13 00:03:12.921495 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 13 00:03:12.921505 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 13 00:03:12.921515 kernel: Console: colour VGA+ 80x25 Sep 13 00:03:12.921525 kernel: printk: console [ttyS0] enabled Sep 13 00:03:12.921541 kernel: ACPI: Core revision 20230628 Sep 13 00:03:12.921552 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 13 00:03:12.921562 kernel: APIC: Switch to symmetric I/O mode setup Sep 13 00:03:12.921572 kernel: x2apic enabled Sep 13 00:03:12.921582 kernel: APIC: Switched APIC routing to: physical x2apic Sep 13 00:03:12.921592 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 13 00:03:12.921653 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 13 00:03:12.921664 kernel: kvm-guest: setup PV IPIs Sep 13 00:03:12.921698 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 13 00:03:12.921709 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 13 00:03:12.921720 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Sep 13 00:03:12.921730 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 13 00:03:12.921755 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 13 00:03:12.921766 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 13 00:03:12.921777 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 13 00:03:12.921787 kernel: Spectre V2 : Mitigation: Retpolines Sep 13 00:03:12.921798 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 13 00:03:12.921815 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 13 00:03:12.921826 kernel: active return thunk: retbleed_return_thunk Sep 13 00:03:12.921839 kernel: RETBleed: Mitigation: untrained return thunk Sep 13 00:03:12.921850 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 13 00:03:12.921861 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 13 00:03:12.921871 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 13 00:03:12.921883 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 13 00:03:12.921894 kernel: active return thunk: srso_return_thunk Sep 13 00:03:12.921910 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 13 00:03:12.921921 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 13 00:03:12.921932 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 13 00:03:12.921942 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 13 00:03:12.921953 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 13 00:03:12.921964 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 13 00:03:12.921974 kernel: Freeing SMP alternatives memory: 32K Sep 13 00:03:12.921985 kernel: pid_max: default: 32768 minimum: 301 Sep 13 00:03:12.921996 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 13 00:03:12.922012 kernel: landlock: Up and running. Sep 13 00:03:12.922023 kernel: SELinux: Initializing. Sep 13 00:03:12.922034 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 13 00:03:12.922044 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 13 00:03:12.922055 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 13 00:03:12.922066 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 13 00:03:12.922077 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 13 00:03:12.922088 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 13 00:03:12.922106 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 13 00:03:12.922117 kernel: ... version: 0 Sep 13 00:03:12.922127 kernel: ... bit width: 48 Sep 13 00:03:12.922138 kernel: ... generic registers: 6 Sep 13 00:03:12.922149 kernel: ... value mask: 0000ffffffffffff Sep 13 00:03:12.922159 kernel: ... max period: 00007fffffffffff Sep 13 00:03:12.922170 kernel: ... fixed-purpose events: 0 Sep 13 00:03:12.922180 kernel: ... 
event mask: 000000000000003f Sep 13 00:03:12.922191 kernel: signal: max sigframe size: 1776 Sep 13 00:03:12.922202 kernel: rcu: Hierarchical SRCU implementation. Sep 13 00:03:12.922218 kernel: rcu: Max phase no-delay instances is 400. Sep 13 00:03:12.922229 kernel: smp: Bringing up secondary CPUs ... Sep 13 00:03:12.922240 kernel: smpboot: x86: Booting SMP configuration: Sep 13 00:03:12.922251 kernel: .... node #0, CPUs: #1 #2 #3 Sep 13 00:03:12.922262 kernel: smp: Brought up 1 node, 4 CPUs Sep 13 00:03:12.922272 kernel: smpboot: Max logical packages: 1 Sep 13 00:03:12.922283 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 13 00:03:12.922294 kernel: devtmpfs: initialized Sep 13 00:03:12.922304 kernel: x86/mm: Memory block size: 128MB Sep 13 00:03:12.922321 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 13 00:03:12.922331 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 13 00:03:12.922342 kernel: pinctrl core: initialized pinctrl subsystem Sep 13 00:03:12.922353 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 13 00:03:12.922364 kernel: audit: initializing netlink subsys (disabled) Sep 13 00:03:12.922375 kernel: audit: type=2000 audit(1757721792.006:1): state=initialized audit_enabled=0 res=1 Sep 13 00:03:12.922385 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 13 00:03:12.922396 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 13 00:03:12.922407 kernel: cpuidle: using governor menu Sep 13 00:03:12.922424 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 13 00:03:12.922435 kernel: dca service started, version 1.12.1 Sep 13 00:03:12.922446 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 13 00:03:12.922456 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Sep 13 00:03:12.922467 kernel: PCI: Using configuration type 1 for base access Sep 13 00:03:12.922478 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
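The calibration entries above report 5589.49 BogoMIPS per CPU with lpj=2794748 derived from the 2794.748 MHz TSC, and the SMP summary reports 22357.98 BogoMIPS for the 4 CPUs brought online. A short arithmetic check of those figures, assuming HZ=1000 for this kernel build (illustrative Python, not kernel code):

```python
# Illustrative arithmetic check of the BogoMIPS figures logged above.
lpj = 2794748                  # loops-per-jiffy preset from the TSC ("lpj=2794748")
HZ = 1000                      # assumed tick rate for this kernel configuration
bogomips = lpj / (500000 / HZ) # the kernel's BogoMIPS formula
print(bogomips)                # 5589.496 -> logged as "5589.49 BogoMIPS"
print(round(4 * bogomips, 2))  # 22357.98 -> "Total of 4 processors activated (22357.98 BogoMIPS)"
```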
Sep 13 00:03:12.922489 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 13 00:03:12.922500 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 13 00:03:12.922510 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 13 00:03:12.922524 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 13 00:03:12.922535 kernel: ACPI: Added _OSI(Module Device) Sep 13 00:03:12.922547 kernel: ACPI: Added _OSI(Processor Device) Sep 13 00:03:12.922559 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 13 00:03:12.922572 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 13 00:03:12.922583 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 13 00:03:12.922606 kernel: ACPI: Interpreter enabled Sep 13 00:03:12.922617 kernel: ACPI: PM: (supports S0 S3 S5) Sep 13 00:03:12.922628 kernel: ACPI: Using IOAPIC for interrupt routing Sep 13 00:03:12.922642 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 13 00:03:12.922653 kernel: PCI: Using E820 reservations for host bridge windows Sep 13 00:03:12.922664 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 13 00:03:12.922675 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 13 00:03:12.922981 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 13 00:03:12.923184 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 13 00:03:12.923355 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 13 00:03:12.923375 kernel: PCI host bridge to bus 0000:00 Sep 13 00:03:12.923559 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 13 00:03:12.923730 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 13 00:03:12.923892 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 13 00:03:12.924040 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Sep 13 00:03:12.924221 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 13 00:03:12.924391 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Sep 13 00:03:12.924552 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 13 00:03:12.924792 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 13 00:03:12.924969 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 13 00:03:12.925120 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Sep 13 00:03:12.925270 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Sep 13 00:03:12.925420 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Sep 13 00:03:12.925568 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 13 00:03:12.925782 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 13 00:03:12.925937 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Sep 13 00:03:12.926088 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Sep 13 00:03:12.926239 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Sep 13 00:03:12.926417 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 13 00:03:12.926570 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Sep 13 00:03:12.926812 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Sep 13 00:03:12.926969 kernel: pci 0000:00:03.0: reg 0x20: [mem 
0xfe004000-0xfe007fff 64bit pref] Sep 13 00:03:12.927143 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 13 00:03:12.927301 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Sep 13 00:03:12.927454 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Sep 13 00:03:12.927694 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Sep 13 00:03:12.927894 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Sep 13 00:03:12.928091 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 13 00:03:12.928252 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 13 00:03:12.928426 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 13 00:03:12.928578 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Sep 13 00:03:12.928772 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Sep 13 00:03:12.928971 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 13 00:03:12.929132 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Sep 13 00:03:12.929152 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 13 00:03:12.929164 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 13 00:03:12.929175 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 13 00:03:12.929186 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 13 00:03:12.929197 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 13 00:03:12.929208 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 13 00:03:12.929218 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 13 00:03:12.929229 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 13 00:03:12.929240 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 13 00:03:12.929255 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 13 00:03:12.929266 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 13 00:03:12.929277 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 13 00:03:12.929288 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 13 00:03:12.929299 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 13 00:03:12.929309 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 13 00:03:12.929320 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 13 00:03:12.929331 kernel: iommu: Default domain type: Translated Sep 13 00:03:12.929341 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 13 00:03:12.929355 kernel: PCI: Using ACPI for IRQ routing Sep 13 00:03:12.929366 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 13 00:03:12.929376 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 13 00:03:12.929387 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Sep 13 00:03:12.929551 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 13 00:03:12.929738 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 13 00:03:12.929917 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 13 00:03:12.929932 kernel: vgaarb: loaded Sep 13 00:03:12.929948 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 13 00:03:12.929960 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 13 00:03:12.929971 kernel: clocksource: Switched to clocksource kvm-clock Sep 13 00:03:12.929982 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 00:03:12.929993 kernel: VFS: Dquot-cache hash table 
entries: 512 (order 0, 4096 bytes) Sep 13 00:03:12.930004 kernel: pnp: PnP ACPI init Sep 13 00:03:12.930204 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 13 00:03:12.930221 kernel: pnp: PnP ACPI: found 6 devices Sep 13 00:03:12.930237 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 13 00:03:12.930248 kernel: NET: Registered PF_INET protocol family Sep 13 00:03:12.930259 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 13 00:03:12.930270 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 13 00:03:12.930281 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 00:03:12.930292 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 13 00:03:12.930303 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 13 00:03:12.930314 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 13 00:03:12.930325 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:03:12.930339 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:03:12.930350 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 00:03:12.930361 kernel: NET: Registered PF_XDP protocol family Sep 13 00:03:12.930511 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 13 00:03:12.930681 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 13 00:03:12.930842 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 13 00:03:12.930989 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Sep 13 00:03:12.931136 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 13 00:03:12.931287 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Sep 13 00:03:12.931301 kernel: PCI: CLS 0 bytes, default 64 Sep 13 00:03:12.931312 kernel: Initialise system trusted keyrings Sep 13 00:03:12.931322 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 13 00:03:12.931333 kernel: Key type asymmetric registered Sep 13 00:03:12.931344 kernel: Asymmetric key parser 'x509' registered Sep 13 00:03:12.931355 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 13 00:03:12.931366 kernel: io scheduler mq-deadline registered Sep 13 00:03:12.931377 kernel: io scheduler kyber registered Sep 13 00:03:12.931392 kernel: io scheduler bfq registered Sep 13 00:03:12.931403 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 13 00:03:12.931415 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 13 00:03:12.931426 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 13 00:03:12.931437 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 13 00:03:12.931447 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 00:03:12.931458 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 13 00:03:12.931469 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 13 00:03:12.931480 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 13 00:03:12.931494 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 13 00:03:12.931689 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 13 00:03:12.931709 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 13 00:03:12.931875 kernel: rtc_cmos 00:04: registered as rtc0 Sep 13 
00:03:12.932022 kernel: rtc_cmos 00:04: setting system clock to 2025-09-13T00:03:12 UTC (1757721792) Sep 13 00:03:12.932165 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 13 00:03:12.932180 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 13 00:03:12.932191 kernel: NET: Registered PF_INET6 protocol family Sep 13 00:03:12.932208 kernel: Segment Routing with IPv6 Sep 13 00:03:12.932218 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 00:03:12.932229 kernel: NET: Registered PF_PACKET protocol family Sep 13 00:03:12.932240 kernel: Key type dns_resolver registered Sep 13 00:03:12.932251 kernel: IPI shorthand broadcast: enabled Sep 13 00:03:12.932262 kernel: sched_clock: Marking stable (796002245, 120640937)->(962612943, -45969761) Sep 13 00:03:12.932273 kernel: registered taskstats version 1 Sep 13 00:03:12.932284 kernel: Loading compiled-in X.509 certificates Sep 13 00:03:12.932295 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6' Sep 13 00:03:12.932309 kernel: Key type .fscrypt registered Sep 13 00:03:12.932320 kernel: Key type fscrypt-provisioning registered Sep 13 00:03:12.932331 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 13 00:03:12.932342 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:03:12.932353 kernel: ima: No architecture policies found Sep 13 00:03:12.932363 kernel: clk: Disabling unused clocks Sep 13 00:03:12.932374 kernel: Freeing unused kernel image (initmem) memory: 42884K Sep 13 00:03:12.932385 kernel: Write protecting the kernel read-only data: 36864k Sep 13 00:03:12.932396 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 13 00:03:12.932410 kernel: Run /init as init process Sep 13 00:03:12.932421 kernel: with arguments: Sep 13 00:03:12.932432 kernel: /init Sep 13 00:03:12.932442 kernel: with environment: Sep 13 00:03:12.932453 kernel: HOME=/ Sep 13 00:03:12.932464 kernel: TERM=linux Sep 13 00:03:12.932475 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:03:12.932488 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:03:12.932505 systemd[1]: Detected virtualization kvm. Sep 13 00:03:12.932517 systemd[1]: Detected architecture x86-64. Sep 13 00:03:12.932528 systemd[1]: Running in initrd. Sep 13 00:03:12.932540 systemd[1]: No hostname configured, using default hostname. Sep 13 00:03:12.932551 systemd[1]: Hostname set to . Sep 13 00:03:12.932563 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:03:12.932575 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:03:12.932587 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:03:12.932756 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:03:12.932770 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 13 00:03:12.932799 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:03:12.932815 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
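The `\x2d` sequences in the device unit names above come from systemd's path escaping ('/' becomes '-', most other special characters become \xXX), normally performed by `systemd-escape --path`. A minimal approximation of that mangling, with an illustrative helper name of our own:

```python
# Minimal sketch approximating systemd's path-to-unit-name escaping
# (see `systemd-escape --path`); escape_path() is our own illustrative helper.
def escape_path(path: str) -> str:
    trimmed = path.strip("/")
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch == "_" or (ch == "." and i != 0):
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))  # e.g. '-' -> \x2d
    return "".join(out)

print(escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, matching the unit names logged above
```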
Sep 13 00:03:12.932827 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 13 00:03:12.932844 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 13 00:03:12.932856 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 13 00:03:12.932868 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:03:12.932880 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:03:12.932891 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:03:12.932903 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:03:12.932915 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:03:12.932927 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:03:12.932942 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:03:12.932954 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:03:12.932966 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 13 00:03:12.932978 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 13 00:03:12.932990 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:03:12.933002 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:03:12.933014 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:03:12.933026 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:03:12.933040 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 13 00:03:12.933052 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:03:12.933064 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 13 00:03:12.933076 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:03:12.933088 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:03:12.933100 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:03:12.933111 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:03:12.933124 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 13 00:03:12.933135 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:03:12.933150 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:03:12.933190 systemd-journald[191]: Collecting audit messages is disabled. Sep 13 00:03:12.933221 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:03:12.933237 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:03:12.933249 systemd-journald[191]: Journal started Sep 13 00:03:12.933277 systemd-journald[191]: Runtime Journal (/run/log/journal/037eeeefd6ff40d4a3e621d6d090ed87) is 6.0M, max 48.4M, 42.3M free. Sep 13 00:03:12.935900 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:03:12.921501 systemd-modules-load[193]: Inserted module 'overlay' Sep 13 00:03:12.984084 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Sep 13 00:03:12.984117 kernel: Bridge firewalling registered Sep 13 00:03:12.984128 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:03:12.955777 systemd-modules-load[193]: Inserted module 'br_netfilter' Sep 13 00:03:12.980358 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:03:12.981179 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:03:12.981695 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:03:12.996907 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:03:12.999485 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:03:13.001839 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:03:13.018116 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:03:13.020037 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:03:13.025841 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 13 00:03:13.026628 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:03:13.031138 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:03:13.046765 dracut-cmdline[226]: dracut-dracut-053 Sep 13 00:03:13.050571 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 00:03:13.072266 systemd-resolved[229]: Positive Trust Anchors: Sep 13 00:03:13.072289 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:03:13.072327 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:03:13.075253 systemd-resolved[229]: Defaulting to hostname 'linux'. Sep 13 00:03:13.076630 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:03:13.081562 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:03:13.186648 kernel: SCSI subsystem initialized Sep 13 00:03:13.196632 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:03:13.208641 kernel: iscsi: registered transport (tcp) Sep 13 00:03:13.230654 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:03:13.230752 kernel: QLogic iSCSI HBA Driver Sep 13 00:03:13.293582 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 13 00:03:13.305775 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 13 00:03:13.335940 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:03:13.336040 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:03:13.337092 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 13 00:03:13.390763 kernel: raid6: avx2x4 gen() 23314 MB/s Sep 13 00:03:13.407639 kernel: raid6: avx2x2 gen() 19738 MB/s Sep 13 00:03:13.424705 kernel: raid6: avx2x1 gen() 17622 MB/s Sep 13 00:03:13.424762 kernel: raid6: using algorithm avx2x4 gen() 23314 MB/s Sep 13 00:03:13.442885 kernel: raid6: .... xor() 6778 MB/s, rmw enabled Sep 13 00:03:13.442957 kernel: raid6: using avx2x2 recovery algorithm Sep 13 00:03:13.469634 kernel: xor: automatically using best checksumming function avx Sep 13 00:03:13.630637 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 13 00:03:13.644501 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:03:13.653972 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:03:13.675008 systemd-udevd[412]: Using default interface naming scheme 'v255'. Sep 13 00:03:13.706392 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:03:13.746805 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 13 00:03:13.760234 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Sep 13 00:03:13.814150 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:03:13.843805 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:03:13.923214 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:03:13.931796 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 13 00:03:13.949759 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 13 00:03:13.984168 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:03:13.984927 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:03:13.985328 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:03:14.006375 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 13 00:03:14.019069 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:03:14.031632 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 13 00:03:14.031911 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:03:14.037195 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 13 00:03:14.040627 kernel: libata version 3.00 loaded. Sep 13 00:03:14.045399 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:03:14.045432 kernel: GPT:9289727 != 19775487 Sep 13 00:03:14.045447 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:03:14.045461 kernel: GPT:9289727 != 19775487 Sep 13 00:03:14.045475 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:03:14.045490 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:03:14.046040 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:03:14.064533 kernel: AVX2 version of gcm_enc/dec engaged. Sep 13 00:03:14.064561 kernel: AES CTR mode by8 optimization enabled Sep 13 00:03:14.046209 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
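The virtio disk entries above report 19775488 512-byte sectors, while the GPT warning notes that the backup header sits at LBA 9289727 instead of 19775487, i.e. the backup GPT is still where a smaller image once ended (consistent with the disk-uuid entries later in this log rewriting the secondary header). A short arithmetic check of those numbers (illustrative Python):

```python
# Illustrative size arithmetic for the virtio disk logged above (not driver code).
sectors_now = 19775488        # "[vda] 19775488 512-byte logical blocks"
backup_gpt_lba = 9289727      # "GPT:9289727 != 19775487"
sector = 512

print(sectors_now * sector / 1e9)    # ~10.13 -> "10.1 GB"
print(sectors_now * sector / 2**30)  # ~9.43  -> "9.43 GiB"
# The backup GPT header still sits where a smaller image ended:
print((backup_gpt_lba + 1) * sector / 2**30)  # ~4.43 GiB original image size
```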
Sep 13 00:03:14.070239 kernel: ahci 0000:00:1f.2: version 3.0 Sep 13 00:03:14.070508 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 13 00:03:14.070522 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 13 00:03:14.070695 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 13 00:03:14.057454 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:03:14.061469 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:03:14.061948 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:03:14.064473 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:03:14.080244 kernel: scsi host0: ahci Sep 13 00:03:14.080517 kernel: scsi host1: ahci Sep 13 00:03:14.077880 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:03:14.095633 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (456) Sep 13 00:03:14.095691 kernel: scsi host2: ahci Sep 13 00:03:14.097624 kernel: scsi host3: ahci Sep 13 00:03:14.097891 kernel: scsi host4: ahci Sep 13 00:03:14.123879 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 13 00:03:14.190079 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (457) Sep 13 00:03:14.190117 kernel: scsi host5: ahci Sep 13 00:03:14.190361 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Sep 13 00:03:14.190377 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Sep 13 00:03:14.190391 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Sep 13 00:03:14.190418 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Sep 13 00:03:14.190432 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Sep 13 00:03:14.190445 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Sep 13 00:03:14.192375 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:03:14.202086 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 13 00:03:14.229174 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 13 00:03:14.234223 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 13 00:03:14.235007 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 13 00:03:14.254915 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 13 00:03:14.279158 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:03:14.305562 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 13 00:03:14.443631 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 13 00:03:14.443727 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 13 00:03:14.445332 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 13 00:03:14.445420 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 13 00:03:14.446617 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 13 00:03:14.447628 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 13 00:03:14.448628 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 13 00:03:14.449716 kernel: ata3.00: applying bridge limits Sep 13 00:03:14.449738 kernel: ata3.00: configured for UDMA/100 Sep 13 00:03:14.450618 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 13 00:03:14.614333 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 13 00:03:14.614879 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 00:03:14.630928 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 13 00:03:14.709582 disk-uuid[558]: Primary Header is updated. Sep 13 00:03:14.709582 disk-uuid[558]: Secondary Entries is updated. Sep 13 00:03:14.709582 disk-uuid[558]: Secondary Header is updated. Sep 13 00:03:14.746632 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:03:14.751622 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:03:15.760648 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:03:15.761036 disk-uuid[577]: The operation has completed successfully. Sep 13 00:03:15.792096 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:03:15.792263 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 13 00:03:15.849808 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 13 00:03:15.907140 sh[591]: Success Sep 13 00:03:15.929648 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 13 00:03:15.969146 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 13 00:03:15.981511 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 13 00:03:15.984817 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 13 00:03:16.002503 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa Sep 13 00:03:16.002560 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:03:16.002571 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 13 00:03:16.003784 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 13 00:03:16.004732 kernel: BTRFS info (device dm-0): using free space tree Sep 13 00:03:16.010675 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 13 00:03:16.013266 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 13 00:03:16.025012 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 13 00:03:16.028632 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Sep 13 00:03:16.037259 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:03:16.037324 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:03:16.037335 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:03:16.041632 kernel: BTRFS info (device vda6): auto enabling async discard Sep 13 00:03:16.051331 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:03:16.053318 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:03:16.159682 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:03:16.182858 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:03:16.247018 systemd-networkd[769]: lo: Link UP Sep 13 00:03:16.247030 systemd-networkd[769]: lo: Gained carrier Sep 13 00:03:16.248819 systemd-networkd[769]: Enumeration completed Sep 13 00:03:16.248963 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:03:16.249287 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:03:16.249291 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:03:16.272678 systemd-networkd[769]: eth0: Link UP Sep 13 00:03:16.272681 systemd-networkd[769]: eth0: Gained carrier Sep 13 00:03:16.272688 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:03:16.274524 systemd[1]: Reached target network.target - Network. Sep 13 00:03:16.283661 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.70/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:03:16.439114 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 13 00:03:16.450772 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 13 00:03:16.504670 ignition[774]: Ignition 2.19.0 Sep 13 00:03:16.504688 ignition[774]: Stage: fetch-offline Sep 13 00:03:16.504748 ignition[774]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:03:16.504764 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:03:16.504885 ignition[774]: parsed url from cmdline: "" Sep 13 00:03:16.504890 ignition[774]: no config URL provided Sep 13 00:03:16.504898 ignition[774]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:03:16.504921 ignition[774]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:03:16.504961 ignition[774]: op(1): [started] loading QEMU firmware config module Sep 13 00:03:16.504969 ignition[774]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 13 00:03:16.533077 ignition[774]: op(1): [finished] loading QEMU firmware config module Sep 13 00:03:16.580020 ignition[774]: parsing config with SHA512: 9f1f26850339ae810d2d8c8529210d35566c60f33108d2af6b9574a6295fea4989f3d14d28b0707b0601df3a5fc1e895d6b581bb47d37cacd0f548b883acdb3b Sep 13 00:03:16.585822 unknown[774]: fetched base config from "system" Sep 13 00:03:16.585840 unknown[774]: fetched user config from "qemu" Sep 13 00:03:16.586418 ignition[774]: fetch-offline: fetch-offline passed Sep 13 00:03:16.586504 ignition[774]: Ignition finished successfully Sep 13 00:03:16.589153 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
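Ignition's fetch-offline stage above pulls the config through QEMU's fw_cfg interface (hence the qemu_fw_cfg modprobe) and logs the SHA512 of the config it parses. If a copy of that config is at hand, the digest should be reproducible as sketched below; the local file name is hypothetical:

```python
import hashlib

# Sketch: reproduce the "parsing config with SHA512: ..." digest by hashing the raw
# Ignition config bytes. "config.ign" is a hypothetical local copy of the config that
# QEMU exposed to the guest via fw_cfg in the log above.
with open("config.ign", "rb") as f:
    digest = hashlib.sha512(f.read()).hexdigest()
print(digest)  # compare against the 9f1f2685... value in the log
```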
Sep 13 00:03:16.589859 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 13 00:03:16.623051 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 13 00:03:16.640733 ignition[784]: Ignition 2.19.0 Sep 13 00:03:16.640748 ignition[784]: Stage: kargs Sep 13 00:03:16.640947 ignition[784]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:03:16.640959 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:03:16.641997 ignition[784]: kargs: kargs passed Sep 13 00:03:16.642062 ignition[784]: Ignition finished successfully Sep 13 00:03:16.649495 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 13 00:03:16.661815 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 13 00:03:16.674586 ignition[793]: Ignition 2.19.0 Sep 13 00:03:16.674667 ignition[793]: Stage: disks Sep 13 00:03:16.674830 ignition[793]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:03:16.674842 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:03:16.675913 ignition[793]: disks: disks passed Sep 13 00:03:16.675976 ignition[793]: Ignition finished successfully Sep 13 00:03:16.682452 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 13 00:03:16.683249 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 13 00:03:16.684908 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 13 00:03:16.685235 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:03:16.685551 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:03:16.691507 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:03:16.707906 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 13 00:03:16.775091 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 13 00:03:16.922543 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 13 00:03:16.933883 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 13 00:03:17.134678 kernel: EXT4-fs (vda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none. Sep 13 00:03:17.136003 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 13 00:03:17.137837 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 13 00:03:17.152748 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:03:17.156678 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 13 00:03:17.157484 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 13 00:03:17.157543 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:03:17.178737 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (811) Sep 13 00:03:17.178776 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:03:17.157571 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Sep 13 00:03:17.182588 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:03:17.182651 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:03:17.183640 kernel: BTRFS info (device vda6): auto enabling async discard Sep 13 00:03:17.184959 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 13 00:03:17.210552 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 13 00:03:17.224758 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 13 00:03:17.266008 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:03:17.272018 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:03:17.277726 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:03:17.283283 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:03:17.421978 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 13 00:03:17.436910 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 13 00:03:17.458971 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 13 00:03:17.467307 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 13 00:03:17.468936 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:03:17.491126 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 13 00:03:17.609445 ignition[929]: INFO : Ignition 2.19.0 Sep 13 00:03:17.609445 ignition[929]: INFO : Stage: mount Sep 13 00:03:17.611439 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:03:17.611439 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:03:17.611439 ignition[929]: INFO : mount: mount passed Sep 13 00:03:17.611439 ignition[929]: INFO : Ignition finished successfully Sep 13 00:03:17.613707 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 13 00:03:17.624892 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 13 00:03:17.971024 systemd-networkd[769]: eth0: Gained IPv6LL Sep 13 00:03:18.144769 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:03:18.154822 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938) Sep 13 00:03:18.154859 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:03:18.154871 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:03:18.156734 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:03:18.159623 kernel: BTRFS info (device vda6): auto enabling async discard Sep 13 00:03:18.161480 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 13 00:03:18.219993 ignition[956]: INFO : Ignition 2.19.0 Sep 13 00:03:18.219993 ignition[956]: INFO : Stage: files Sep 13 00:03:18.221910 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:03:18.221910 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:03:18.221910 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:03:18.225348 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:03:18.226781 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:03:18.229504 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:03:18.231197 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:03:18.232794 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:03:18.231756 unknown[956]: wrote ssh authorized keys file for user: core Sep 13 00:03:18.267058 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 00:03:18.267058 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 00:03:18.267058 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:03:18.267058 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 13 00:03:18.309633 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 13 00:03:18.490084 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:03:18.490084 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 00:03:18.494468 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 13 00:03:18.798753 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Sep 13 00:03:19.066499 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 00:03:19.068565 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:03:19.068565 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:03:19.068565 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:03:19.068565 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:03:19.068565 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:03:19.068565 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:03:19.068565 ignition[956]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:03:19.068565 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:03:19.068565 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:03:19.068565 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:03:19.068565 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:03:19.068565 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:03:19.068565 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:03:19.068565 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 13 00:03:19.311976 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Sep 13 00:03:20.119002 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:03:20.119002 ignition[956]: INFO : files: op(d): [started] processing unit "containerd.service" Sep 13 00:03:20.124426 ignition[956]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:03:20.126862 ignition[956]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:03:20.126862 ignition[956]: INFO : files: op(d): [finished] processing unit "containerd.service" Sep 13 00:03:20.126862 ignition[956]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Sep 13 00:03:20.126862 ignition[956]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:03:20.126862 ignition[956]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:03:20.126862 ignition[956]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Sep 13 00:03:20.126862 ignition[956]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Sep 13 00:03:20.126862 ignition[956]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:03:20.126862 ignition[956]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:03:20.126862 ignition[956]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Sep 13 00:03:20.126862 ignition[956]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" 
Sep 13 00:03:20.176026 ignition[956]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:03:20.185946 ignition[956]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:03:20.197347 ignition[956]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Sep 13 00:03:20.197347 ignition[956]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:03:20.197347 ignition[956]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:03:20.197347 ignition[956]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:03:20.197347 ignition[956]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:03:20.197347 ignition[956]: INFO : files: files passed Sep 13 00:03:20.197347 ignition[956]: INFO : Ignition finished successfully Sep 13 00:03:20.210507 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 13 00:03:20.223749 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 13 00:03:20.225908 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 13 00:03:20.231091 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:03:20.231311 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 13 00:03:20.238771 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory Sep 13 00:03:20.242317 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:03:20.242317 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:03:20.250946 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:03:20.245730 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:03:20.248198 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 13 00:03:20.262987 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 13 00:03:20.303694 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:03:20.303841 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 13 00:03:20.310507 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 13 00:03:20.312911 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 13 00:03:20.313489 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 13 00:03:20.315068 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 13 00:03:20.344982 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:03:20.357032 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 13 00:03:20.368843 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:03:20.370394 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Sep 13 00:03:20.372702 systemd[1]: Stopped target timers.target - Timer Units. Sep 13 00:03:20.374980 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:03:20.375170 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:03:20.377514 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 13 00:03:20.379642 systemd[1]: Stopped target basic.target - Basic System. Sep 13 00:03:20.381940 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 13 00:03:20.384356 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:03:20.386777 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 13 00:03:20.389474 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 13 00:03:20.392012 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:03:20.394781 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 13 00:03:20.397196 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 13 00:03:20.399786 systemd[1]: Stopped target swap.target - Swaps. Sep 13 00:03:20.401935 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:03:20.402141 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:03:20.404923 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:03:20.406380 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:03:20.408756 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 13 00:03:20.408942 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:03:20.411053 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:03:20.411240 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 13 00:03:20.413456 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:03:20.413636 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:03:20.415664 systemd[1]: Stopped target paths.target - Path Units. Sep 13 00:03:20.417411 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:03:20.420675 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:03:20.421229 systemd[1]: Stopped target slices.target - Slice Units. Sep 13 00:03:20.421572 systemd[1]: Stopped target sockets.target - Socket Units. Sep 13 00:03:20.421929 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:03:20.422049 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:03:20.422442 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:03:20.422574 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:03:20.422915 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:03:20.423036 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:03:20.423387 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:03:20.423514 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 13 00:03:20.439007 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 13 00:03:20.442302 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Sep 13 00:03:20.443376 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:03:20.443575 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:03:20.446659 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:03:20.446934 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:03:20.455663 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:03:20.455831 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 13 00:03:20.462737 ignition[1010]: INFO : Ignition 2.19.0 Sep 13 00:03:20.462737 ignition[1010]: INFO : Stage: umount Sep 13 00:03:20.464807 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:03:20.464807 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:03:20.464807 ignition[1010]: INFO : umount: umount passed Sep 13 00:03:20.464807 ignition[1010]: INFO : Ignition finished successfully Sep 13 00:03:20.466120 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:03:20.466277 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 13 00:03:20.469585 systemd[1]: Stopped target network.target - Network. Sep 13 00:03:20.470681 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:03:20.470774 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 13 00:03:20.472766 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:03:20.472822 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 13 00:03:20.474909 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:03:20.474966 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 13 00:03:20.477010 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 13 00:03:20.477076 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 13 00:03:20.479315 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 13 00:03:20.481832 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 13 00:03:20.484698 systemd-networkd[769]: eth0: DHCPv6 lease lost Sep 13 00:03:20.485220 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:03:20.487067 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:03:20.487255 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 13 00:03:20.490831 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:03:20.491013 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 13 00:03:20.494581 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:03:20.494803 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 13 00:03:20.499516 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:03:20.499678 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:03:20.502132 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:03:20.502201 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 13 00:03:20.517773 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 13 00:03:20.518932 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:03:20.519023 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Sep 13 00:03:20.521476 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:03:20.521561 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:03:20.524450 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:03:20.524519 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 13 00:03:20.527316 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 13 00:03:20.527379 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:03:20.529054 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:03:20.543300 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:03:20.544629 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 13 00:03:20.547631 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:03:20.547890 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:03:20.552816 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:03:20.552935 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 13 00:03:20.553680 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:03:20.553741 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:03:20.554195 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:03:20.554270 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:03:20.555440 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:03:20.555510 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 13 00:03:20.563970 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:03:20.564128 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:03:20.573962 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 13 00:03:20.574475 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:03:20.574580 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:03:20.574888 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 13 00:03:20.574950 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:03:20.575176 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:03:20.575229 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:03:20.575498 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:03:20.575564 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:03:20.585350 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:03:20.585541 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 13 00:03:20.588239 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 13 00:03:20.605865 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 13 00:03:20.615807 systemd[1]: Switching root. Sep 13 00:03:20.648028 systemd-journald[191]: Journal stopped Sep 13 00:03:22.391397 systemd-journald[191]: Received SIGTERM from PID 1 (systemd). 
Sep 13 00:03:22.391482 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:03:22.391500 kernel: SELinux: policy capability open_perms=1 Sep 13 00:03:22.391513 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:03:22.391525 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:03:22.391544 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:03:22.391558 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:03:22.391572 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:03:22.391585 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:03:22.391612 kernel: audit: type=1403 audit(1757721801.318:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:03:22.391632 systemd[1]: Successfully loaded SELinux policy in 47.495ms. Sep 13 00:03:22.391663 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.582ms. Sep 13 00:03:22.391680 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:03:22.391692 systemd[1]: Detected virtualization kvm. Sep 13 00:03:22.391705 systemd[1]: Detected architecture x86-64. Sep 13 00:03:22.391717 systemd[1]: Detected first boot. Sep 13 00:03:22.391734 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:03:22.391746 zram_generator::config[1073]: No configuration found. Sep 13 00:03:22.391759 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:03:22.391775 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:03:22.391787 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 13 00:03:22.391800 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 13 00:03:22.391815 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 13 00:03:22.391827 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 13 00:03:22.391840 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 13 00:03:22.391852 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 13 00:03:22.391865 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 13 00:03:22.391882 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 13 00:03:22.391898 systemd[1]: Created slice user.slice - User and Session Slice. Sep 13 00:03:22.391913 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:03:22.391925 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:03:22.391937 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 13 00:03:22.391950 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 13 00:03:22.391962 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 13 00:03:22.391975 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:03:22.391987 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Sep 13 00:03:22.392002 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:03:22.392014 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 13 00:03:22.392026 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:03:22.392038 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:03:22.392051 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:03:22.392063 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:03:22.392075 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 13 00:03:22.392088 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 13 00:03:22.392102 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 13 00:03:22.392115 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 13 00:03:22.392127 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:03:22.392139 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:03:22.392152 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:03:22.392164 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 13 00:03:22.392176 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 13 00:03:22.392188 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 13 00:03:22.392200 systemd[1]: Mounting media.mount - External Media Directory... Sep 13 00:03:22.392215 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:03:22.392228 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 13 00:03:22.392240 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 13 00:03:22.392253 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 13 00:03:22.392265 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 13 00:03:22.392278 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:03:22.392290 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:03:22.392302 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 13 00:03:22.392314 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:03:22.392329 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:03:22.392342 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:03:22.392354 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 13 00:03:22.392366 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:03:22.392378 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:03:22.392391 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 13 00:03:22.392404 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Sep 13 00:03:22.392417 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:03:22.392432 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:03:22.392444 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 00:03:22.392457 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 13 00:03:22.392478 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:03:22.392491 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:03:22.392504 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 13 00:03:22.392516 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 13 00:03:22.392528 systemd[1]: Mounted media.mount - External Media Directory. Sep 13 00:03:22.392543 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 13 00:03:22.392555 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 13 00:03:22.392567 kernel: ACPI: bus type drm_connector registered Sep 13 00:03:22.392579 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 13 00:03:22.392591 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 13 00:03:22.392642 kernel: fuse: init (API version 7.39) Sep 13 00:03:22.392654 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:03:22.392666 kernel: loop: module loaded Sep 13 00:03:22.392677 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:03:22.392715 systemd-journald[1161]: Collecting audit messages is disabled. Sep 13 00:03:22.392737 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 13 00:03:22.392750 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:03:22.392762 systemd-journald[1161]: Journal started Sep 13 00:03:22.392787 systemd-journald[1161]: Runtime Journal (/run/log/journal/037eeeefd6ff40d4a3e621d6d090ed87) is 6.0M, max 48.4M, 42.3M free. Sep 13 00:03:22.393994 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:03:22.397228 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:03:22.399066 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:03:22.399357 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:03:22.401047 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:03:22.401314 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:03:22.402962 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:03:22.403188 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 13 00:03:22.404726 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:03:22.405008 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:03:22.406548 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:03:22.408228 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 00:03:22.410179 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Sep 13 00:03:22.427506 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 00:03:22.437712 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 13 00:03:22.440056 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 13 00:03:22.441274 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:03:22.444812 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 13 00:03:22.449959 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 13 00:03:22.451186 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:03:22.456222 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 13 00:03:22.457642 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:03:22.462215 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:03:22.467108 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:03:22.471437 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 13 00:03:22.473806 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 13 00:03:22.479855 systemd-journald[1161]: Time spent on flushing to /var/log/journal/037eeeefd6ff40d4a3e621d6d090ed87 is 63.394ms for 947 entries. Sep 13 00:03:22.479855 systemd-journald[1161]: System Journal (/var/log/journal/037eeeefd6ff40d4a3e621d6d090ed87) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:03:22.561539 systemd-journald[1161]: Received client request to flush runtime journal. Sep 13 00:03:22.479504 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 13 00:03:22.484082 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 13 00:03:22.528805 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:03:22.544956 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 13 00:03:22.558289 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:03:22.563395 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 13 00:03:22.571700 udevadm[1217]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 13 00:03:22.574444 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Sep 13 00:03:22.574482 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Sep 13 00:03:22.582907 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:03:22.592082 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 13 00:03:22.629924 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 13 00:03:22.638914 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:03:22.672297 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Sep 13 00:03:22.672364 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. 
Sep 13 00:03:22.680362 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:03:23.747302 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 13 00:03:23.758768 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:03:23.797697 systemd-udevd[1237]: Using default interface naming scheme 'v255'. Sep 13 00:03:23.816685 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:03:23.834734 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:03:23.855762 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 13 00:03:23.869256 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Sep 13 00:03:23.876617 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1246) Sep 13 00:03:23.966282 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 13 00:03:23.965589 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 13 00:03:23.970999 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 13 00:03:23.971984 kernel: ACPI: button: Power Button [PWRF] Sep 13 00:03:23.995708 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 13 00:03:23.996022 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 13 00:03:23.997697 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 13 00:03:24.008618 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 13 00:03:24.027208 systemd-networkd[1250]: lo: Link UP Sep 13 00:03:24.027221 systemd-networkd[1250]: lo: Gained carrier Sep 13 00:03:24.029295 systemd-networkd[1250]: Enumeration completed Sep 13 00:03:24.029449 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:03:24.030465 systemd-networkd[1250]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:03:24.030469 systemd-networkd[1250]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:03:24.031751 systemd-networkd[1250]: eth0: Link UP Sep 13 00:03:24.031757 systemd-networkd[1250]: eth0: Gained carrier Sep 13 00:03:24.031771 systemd-networkd[1250]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:03:24.066623 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:03:24.109861 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 13 00:03:24.111564 systemd-networkd[1250]: eth0: DHCPv4 address 10.0.0.70/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:03:24.130936 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:03:24.218736 kernel: kvm_amd: TSC scaling supported Sep 13 00:03:24.218838 kernel: kvm_amd: Nested Virtualization enabled Sep 13 00:03:24.218861 kernel: kvm_amd: Nested Paging enabled Sep 13 00:03:24.220209 kernel: kvm_amd: LBR virtualization supported Sep 13 00:03:24.220241 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 13 00:03:24.221010 kernel: kvm_amd: Virtual GIF supported Sep 13 00:03:24.246660 kernel: EDAC MC: Ver: 3.0.0 Sep 13 00:03:24.291286 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Sep 13 00:03:24.312823 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 13 00:03:24.315080 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:03:24.329777 lvm[1283]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:03:24.376376 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 13 00:03:24.378219 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:03:24.391026 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 13 00:03:24.397062 lvm[1287]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:03:24.434826 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 13 00:03:24.436747 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 13 00:03:24.438197 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:03:24.438233 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:03:24.439303 systemd[1]: Reached target machines.target - Containers. Sep 13 00:03:24.442067 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 13 00:03:24.454825 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 13 00:03:24.457693 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 13 00:03:24.458867 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:03:24.459889 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 13 00:03:24.462759 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 13 00:03:24.468193 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 13 00:03:24.470546 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 13 00:03:24.479770 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 13 00:03:24.484626 kernel: loop0: detected capacity change from 0 to 142488 Sep 13 00:03:24.494144 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:03:24.497242 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 13 00:03:24.511647 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:03:24.534642 kernel: loop1: detected capacity change from 0 to 140768 Sep 13 00:03:24.579736 kernel: loop2: detected capacity change from 0 to 221472 Sep 13 00:03:24.613638 kernel: loop3: detected capacity change from 0 to 142488 Sep 13 00:03:24.629632 kernel: loop4: detected capacity change from 0 to 140768 Sep 13 00:03:24.640623 kernel: loop5: detected capacity change from 0 to 221472 Sep 13 00:03:24.647973 (sd-merge)[1307]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 13 00:03:24.649459 (sd-merge)[1307]: Merged extensions into '/usr'. Sep 13 00:03:24.676899 systemd[1]: Reloading requested from client PID 1295 ('systemd-sysext') (unit systemd-sysext.service)... Sep 13 00:03:24.676918 systemd[1]: Reloading... 
Sep 13 00:03:24.764624 zram_generator::config[1331]: No configuration found. Sep 13 00:03:24.904092 ldconfig[1291]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:03:24.988245 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:03:25.060028 systemd[1]: Reloading finished in 382 ms. Sep 13 00:03:25.082757 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 13 00:03:25.084516 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 13 00:03:25.115074 systemd[1]: Starting ensure-sysext.service... Sep 13 00:03:25.118182 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:03:25.121522 systemd[1]: Reloading requested from client PID 1379 ('systemctl') (unit ensure-sysext.service)... Sep 13 00:03:25.121540 systemd[1]: Reloading... Sep 13 00:03:25.151258 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:03:25.151711 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 13 00:03:25.152810 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:03:25.153147 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Sep 13 00:03:25.153236 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Sep 13 00:03:25.161909 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:03:25.161920 systemd-tmpfiles[1380]: Skipping /boot Sep 13 00:03:25.175690 zram_generator::config[1404]: No configuration found. Sep 13 00:03:25.184047 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:03:25.184131 systemd-tmpfiles[1380]: Skipping /boot Sep 13 00:03:25.322216 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:03:25.416588 systemd[1]: Reloading finished in 294 ms. Sep 13 00:03:25.450825 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:03:25.458923 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:03:25.467589 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 13 00:03:25.471526 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 13 00:03:25.478962 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:03:25.492033 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 13 00:03:25.524354 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:03:25.524685 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:03:25.533371 augenrules[1475]: No rules Sep 13 00:03:25.533788 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Sep 13 00:03:25.540126 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:03:25.544105 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:03:25.545929 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:03:25.546268 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:03:25.551164 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:03:25.555191 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 13 00:03:25.560155 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:03:25.560506 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:03:25.578306 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:03:25.578670 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:03:25.581869 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:03:25.582220 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:03:25.587152 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 13 00:03:25.613360 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 13 00:03:25.623849 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:03:25.624209 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:03:25.640050 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:03:25.647299 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:03:25.649389 systemd-resolved[1458]: Positive Trust Anchors: Sep 13 00:03:25.649406 systemd-resolved[1458]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:03:25.649439 systemd-resolved[1458]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:03:25.650413 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:03:25.651793 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:03:25.654521 systemd-resolved[1458]: Defaulting to hostname 'linux'. Sep 13 00:03:25.655093 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 13 00:03:25.657724 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Sep 13 00:03:25.657944 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:03:25.659363 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:03:25.661734 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:03:25.662013 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:03:25.663851 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:03:25.664118 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:03:25.675923 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:03:25.676178 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:03:25.683252 systemd[1]: Reached target network.target - Network. Sep 13 00:03:25.684384 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:03:25.686034 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:03:25.686402 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:03:25.701049 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:03:25.703968 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:03:25.710367 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:03:25.713285 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:03:25.714588 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:03:25.714911 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:03:25.715117 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:03:25.717685 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 13 00:03:25.719995 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:03:25.720288 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:03:25.722769 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:03:25.723079 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:03:25.727396 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:03:25.727704 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:03:25.730166 systemd[1]: Finished ensure-sysext.service. Sep 13 00:03:25.732404 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:03:25.732994 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:03:25.746483 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:03:25.746609 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Sep 13 00:03:25.756954 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 13 00:03:25.778811 systemd-networkd[1250]: eth0: Gained IPv6LL Sep 13 00:03:25.782840 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:03:25.784892 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:03:25.826524 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 13 00:03:25.828304 systemd-timesyncd[1525]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 13 00:03:25.828367 systemd-timesyncd[1525]: Initial clock synchronization to Sat 2025-09-13 00:03:26.128887 UTC. Sep 13 00:03:25.828459 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:03:25.829697 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 13 00:03:25.831039 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 13 00:03:25.832608 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 13 00:03:25.834108 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:03:25.834150 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:03:25.835230 systemd[1]: Reached target time-set.target - System Time Set. Sep 13 00:03:25.836739 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 13 00:03:25.837982 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 13 00:03:25.839232 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:03:25.841484 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 13 00:03:25.844714 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 13 00:03:25.847778 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 13 00:03:25.856514 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 13 00:03:25.857817 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:03:25.859073 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:03:25.860362 systemd[1]: System is tainted: cgroupsv1 Sep 13 00:03:25.860420 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:03:25.860467 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:03:25.862591 systemd[1]: Starting containerd.service - containerd container runtime... Sep 13 00:03:25.866281 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 13 00:03:25.869900 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 13 00:03:25.875742 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 13 00:03:25.878818 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 13 00:03:25.882797 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 13 00:03:25.891781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 13 00:03:25.896160 dbus-daemon[1533]: [system] SELinux support is enabled Sep 13 00:03:25.897806 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 13 00:03:25.899996 jq[1535]: false Sep 13 00:03:25.901789 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:03:25.911788 extend-filesystems[1537]: Found loop3 Sep 13 00:03:25.911788 extend-filesystems[1537]: Found loop4 Sep 13 00:03:25.917326 extend-filesystems[1537]: Found loop5 Sep 13 00:03:25.917326 extend-filesystems[1537]: Found sr0 Sep 13 00:03:25.917326 extend-filesystems[1537]: Found vda Sep 13 00:03:25.917326 extend-filesystems[1537]: Found vda1 Sep 13 00:03:25.917326 extend-filesystems[1537]: Found vda2 Sep 13 00:03:25.917326 extend-filesystems[1537]: Found vda3 Sep 13 00:03:25.917326 extend-filesystems[1537]: Found usr Sep 13 00:03:25.917326 extend-filesystems[1537]: Found vda4 Sep 13 00:03:25.917326 extend-filesystems[1537]: Found vda6 Sep 13 00:03:25.917326 extend-filesystems[1537]: Found vda7 Sep 13 00:03:25.917326 extend-filesystems[1537]: Found vda9 Sep 13 00:03:25.917326 extend-filesystems[1537]: Checking size of /dev/vda9 Sep 13 00:03:25.911782 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 00:03:25.918961 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 00:03:25.936791 extend-filesystems[1537]: Resized partition /dev/vda9 Sep 13 00:03:25.943941 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1255) Sep 13 00:03:25.944027 extend-filesystems[1560]: resize2fs 1.47.1 (20-May-2024) Sep 13 00:03:25.936797 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 13 00:03:25.950043 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 13 00:03:25.949180 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 00:03:25.951933 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:03:25.959857 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 00:03:25.968413 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 13 00:03:25.973719 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 00:03:25.974303 jq[1568]: true Sep 13 00:03:25.992754 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:03:25.993171 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 00:03:25.996516 update_engine[1565]: I20250913 00:03:25.996405 1565 main.cc:92] Flatcar Update Engine starting Sep 13 00:03:25.996616 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:03:25.997145 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 00:03:25.999625 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 13 00:03:26.004319 update_engine[1565]: I20250913 00:03:26.004240 1565 update_check_scheduler.cc:74] Next update check in 3m7s Sep 13 00:03:26.021302 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:03:26.025668 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:03:26.026014 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 13 00:03:26.034965 extend-filesystems[1560]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 13 00:03:26.034965 extend-filesystems[1560]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 13 00:03:26.034965 extend-filesystems[1560]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 13 00:03:26.041572 extend-filesystems[1537]: Resized filesystem in /dev/vda9 Sep 13 00:03:26.037263 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:03:26.052924 jq[1581]: true Sep 13 00:03:26.037608 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 13 00:03:26.042149 systemd-logind[1563]: Watching system buttons on /dev/input/event1 (Power Button) Sep 13 00:03:26.042171 systemd-logind[1563]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:03:26.048473 systemd-logind[1563]: New seat seat0. Sep 13 00:03:26.061035 (ntainerd)[1583]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 00:03:26.063445 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 00:03:26.068702 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 13 00:03:26.069122 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 13 00:03:26.089236 dbus-daemon[1533]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 13 00:03:26.091132 tar[1578]: linux-amd64/helm Sep 13 00:03:26.097039 systemd[1]: Started update-engine.service - Update Engine. Sep 13 00:03:26.101371 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 00:03:26.101546 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:03:26.101691 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 00:03:26.111201 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:03:26.111339 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 00:03:26.113718 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:03:26.126062 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 00:03:26.127100 sshd_keygen[1574]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:03:26.141903 bash[1616]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:03:26.143591 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:03:26.148080 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 13 00:03:26.243971 locksmithd[1617]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:03:26.254693 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:03:26.270062 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:03:26.293442 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:03:26.297041 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Sep 13 00:03:26.303382 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 00:03:26.400425 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:03:26.411083 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:03:26.416149 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 13 00:03:26.418027 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 00:03:26.579746 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:03:26.619396 systemd[1]: Started sshd@0-10.0.0.70:22-10.0.0.1:36206.service - OpenSSH per-connection server daemon (10.0.0.1:36206). Sep 13 00:03:26.693042 containerd[1583]: time="2025-09-13T00:03:26.692874516Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 00:03:26.790837 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 36206 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:03:26.791611 sshd[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:03:26.807283 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:03:26.809270 systemd-logind[1563]: New session 1 of user core. Sep 13 00:03:26.815975 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:03:26.830343 containerd[1583]: time="2025-09-13T00:03:26.830239364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:03:26.833067 containerd[1583]: time="2025-09-13T00:03:26.833025279Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:03:26.833067 containerd[1583]: time="2025-09-13T00:03:26.833061808Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:03:26.833151 containerd[1583]: time="2025-09-13T00:03:26.833078014Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:03:26.833325 containerd[1583]: time="2025-09-13T00:03:26.833296056Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 13 00:03:26.833389 containerd[1583]: time="2025-09-13T00:03:26.833329301Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 00:03:26.833457 containerd[1583]: time="2025-09-13T00:03:26.833431893Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:03:26.833457 containerd[1583]: time="2025-09-13T00:03:26.833451571Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:03:26.833818 containerd[1583]: time="2025-09-13T00:03:26.833791935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:03:26.833818 containerd[1583]: time="2025-09-13T00:03:26.833814816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:03:26.833890 containerd[1583]: time="2025-09-13T00:03:26.833830867Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:03:26.833890 containerd[1583]: time="2025-09-13T00:03:26.833845025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:03:26.833988 containerd[1583]: time="2025-09-13T00:03:26.833963553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:03:26.834568 containerd[1583]: time="2025-09-13T00:03:26.834244217Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:03:26.834568 containerd[1583]: time="2025-09-13T00:03:26.834419171Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:03:26.834568 containerd[1583]: time="2025-09-13T00:03:26.834433361Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:03:26.834568 containerd[1583]: time="2025-09-13T00:03:26.834536285Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:03:26.834713 containerd[1583]: time="2025-09-13T00:03:26.834609822Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:03:26.848791 containerd[1583]: time="2025-09-13T00:03:26.848662920Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:03:26.849198 containerd[1583]: time="2025-09-13T00:03:26.849174663Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:03:26.849394 containerd[1583]: time="2025-09-13T00:03:26.849307796Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 13 00:03:26.849687 containerd[1583]: time="2025-09-13T00:03:26.849578834Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 13 00:03:26.850092 containerd[1583]: time="2025-09-13T00:03:26.850023256Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:03:26.851263 containerd[1583]: time="2025-09-13T00:03:26.851234087Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:03:26.852602 containerd[1583]: time="2025-09-13T00:03:26.852548819Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:03:26.853181 containerd[1583]: time="2025-09-13T00:03:26.853116281Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Sep 13 00:03:26.853377 containerd[1583]: time="2025-09-13T00:03:26.853350540Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 00:03:26.853551 containerd[1583]: time="2025-09-13T00:03:26.853523508Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 00:03:26.853716 containerd[1583]: time="2025-09-13T00:03:26.853679293Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:03:26.854353 containerd[1583]: time="2025-09-13T00:03:26.854318691Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:03:26.854623 containerd[1583]: time="2025-09-13T00:03:26.854495090Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:03:26.855019 containerd[1583]: time="2025-09-13T00:03:26.854915218Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:03:26.855369 containerd[1583]: time="2025-09-13T00:03:26.855234802Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:03:26.855697 containerd[1583]: time="2025-09-13T00:03:26.855565842Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:03:26.856066 containerd[1583]: time="2025-09-13T00:03:26.855955575Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:03:26.856417 containerd[1583]: time="2025-09-13T00:03:26.856271323Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:03:26.856989 containerd[1583]: time="2025-09-13T00:03:26.856883402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:03:26.857307 containerd[1583]: time="2025-09-13T00:03:26.857245493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:03:26.857695 containerd[1583]: time="2025-09-13T00:03:26.857553456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:03:26.858084 containerd[1583]: time="2025-09-13T00:03:26.857958012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:03:26.859675 containerd[1583]: time="2025-09-13T00:03:26.858293584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:03:26.859675 containerd[1583]: time="2025-09-13T00:03:26.858494931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:03:26.859675 containerd[1583]: time="2025-09-13T00:03:26.858562896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:03:26.859675 containerd[1583]: time="2025-09-13T00:03:26.858657536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:03:26.859675 containerd[1583]: time="2025-09-13T00:03:26.858734326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Sep 13 00:03:26.859675 containerd[1583]: time="2025-09-13T00:03:26.858817396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:03:26.859675 containerd[1583]: time="2025-09-13T00:03:26.858887627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:03:26.859675 containerd[1583]: time="2025-09-13T00:03:26.858967120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 13 00:03:26.859675 containerd[1583]: time="2025-09-13T00:03:26.859033983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:03:26.859675 containerd[1583]: time="2025-09-13T00:03:26.859158187Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:03:26.859675 containerd[1583]: time="2025-09-13T00:03:26.859336935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:03:26.859675 containerd[1583]: time="2025-09-13T00:03:26.859393070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:03:26.859675 containerd[1583]: time="2025-09-13T00:03:26.859471347Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:03:26.860971 containerd[1583]: time="2025-09-13T00:03:26.860878566Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:03:26.861348 containerd[1583]: time="2025-09-13T00:03:26.861219658Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:03:26.861639 containerd[1583]: time="2025-09-13T00:03:26.861505999Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:03:26.861872 containerd[1583]: time="2025-09-13T00:03:26.861809242Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:03:26.862074 containerd[1583]: time="2025-09-13T00:03:26.861998250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:03:26.862402 containerd[1583]: time="2025-09-13T00:03:26.862254766Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:03:26.862850 containerd[1583]: time="2025-09-13T00:03:26.862767631Z" level=info msg="NRI interface is disabled by configuration." Sep 13 00:03:26.863129 containerd[1583]: time="2025-09-13T00:03:26.863022483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:03:26.865707 containerd[1583]: time="2025-09-13T00:03:26.865143833Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:03:26.865707 containerd[1583]: time="2025-09-13T00:03:26.865302705Z" level=info msg="Connect containerd service" Sep 13 00:03:26.865707 containerd[1583]: time="2025-09-13T00:03:26.865569419Z" level=info msg="using legacy CRI server" Sep 13 00:03:26.869464 containerd[1583]: time="2025-09-13T00:03:26.865612133Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:03:26.869464 containerd[1583]: time="2025-09-13T00:03:26.868473029Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:03:26.871108 containerd[1583]: time="2025-09-13T00:03:26.871057201Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 
00:03:26.872362 containerd[1583]: time="2025-09-13T00:03:26.871541738Z" level=info msg="Start subscribing containerd event" Sep 13 00:03:26.872530 containerd[1583]: time="2025-09-13T00:03:26.872485284Z" level=info msg="Start recovering state" Sep 13 00:03:26.872930 containerd[1583]: time="2025-09-13T00:03:26.872880962Z" level=info msg="Start event monitor" Sep 13 00:03:26.873160 containerd[1583]: time="2025-09-13T00:03:26.873121563Z" level=info msg="Start snapshots syncer" Sep 13 00:03:26.873245 containerd[1583]: time="2025-09-13T00:03:26.873169122Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:03:26.873245 containerd[1583]: time="2025-09-13T00:03:26.873206035Z" level=info msg="Start streaming server" Sep 13 00:03:26.875178 containerd[1583]: time="2025-09-13T00:03:26.875139041Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:03:26.875278 containerd[1583]: time="2025-09-13T00:03:26.875254409Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:03:26.875599 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:03:26.876770 containerd[1583]: time="2025-09-13T00:03:26.876733980Z" level=info msg="containerd successfully booted in 0.185851s" Sep 13 00:03:26.893953 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:03:26.923246 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 00:03:26.945922 (systemd)[1659]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:03:27.074199 tar[1578]: linux-amd64/LICENSE Sep 13 00:03:27.074199 tar[1578]: linux-amd64/README.md Sep 13 00:03:27.111588 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:03:27.157095 systemd[1659]: Queued start job for default target default.target. Sep 13 00:03:27.157731 systemd[1659]: Created slice app.slice - User Application Slice. Sep 13 00:03:27.157766 systemd[1659]: Reached target paths.target - Paths. Sep 13 00:03:27.157785 systemd[1659]: Reached target timers.target - Timers. Sep 13 00:03:27.168019 systemd[1659]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:03:27.178479 systemd[1659]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:03:27.178568 systemd[1659]: Reached target sockets.target - Sockets. Sep 13 00:03:27.178584 systemd[1659]: Reached target basic.target - Basic System. Sep 13 00:03:27.178841 systemd[1659]: Reached target default.target - Main User Target. Sep 13 00:03:27.178891 systemd[1659]: Startup finished in 218ms. Sep 13 00:03:27.179134 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:03:27.194042 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:03:27.264238 systemd[1]: Started sshd@1-10.0.0.70:22-10.0.0.1:36222.service - OpenSSH per-connection server daemon (10.0.0.1:36222). Sep 13 00:03:27.309558 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 36222 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:03:27.311948 sshd[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:03:27.317460 systemd-logind[1563]: New session 2 of user core. Sep 13 00:03:27.349211 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 13 00:03:27.410662 sshd[1676]: pam_unix(sshd:session): session closed for user core Sep 13 00:03:27.430145 systemd[1]: Started sshd@2-10.0.0.70:22-10.0.0.1:36230.service - OpenSSH per-connection server daemon (10.0.0.1:36230). Sep 13 00:03:27.432493 systemd[1]: sshd@1-10.0.0.70:22-10.0.0.1:36222.service: Deactivated successfully. Sep 13 00:03:27.436178 systemd-logind[1563]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:03:27.436886 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:03:27.439506 systemd-logind[1563]: Removed session 2. Sep 13 00:03:27.473351 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 36230 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:03:27.475744 sshd[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:03:27.481465 systemd-logind[1563]: New session 3 of user core. Sep 13 00:03:27.489022 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:03:27.551290 sshd[1681]: pam_unix(sshd:session): session closed for user core Sep 13 00:03:27.555440 systemd[1]: sshd@2-10.0.0.70:22-10.0.0.1:36230.service: Deactivated successfully. Sep 13 00:03:27.558758 systemd-logind[1563]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:03:27.559079 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:03:27.561170 systemd-logind[1563]: Removed session 3. Sep 13 00:03:27.793815 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:03:27.795905 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:03:27.799264 systemd[1]: Startup finished in 9.540s (kernel) + 6.527s (userspace) = 16.067s. Sep 13 00:03:27.810867 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:03:28.448928 kubelet[1700]: E0913 00:03:28.448837 1700 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:03:28.455287 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:03:28.456058 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:03:37.747012 systemd[1]: Started sshd@3-10.0.0.70:22-10.0.0.1:42732.service - OpenSSH per-connection server daemon (10.0.0.1:42732). Sep 13 00:03:37.788215 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 42732 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:03:37.790522 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:03:37.796697 systemd-logind[1563]: New session 4 of user core. Sep 13 00:03:37.812116 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:03:37.874158 sshd[1713]: pam_unix(sshd:session): session closed for user core Sep 13 00:03:37.883975 systemd[1]: Started sshd@4-10.0.0.70:22-10.0.0.1:42736.service - OpenSSH per-connection server daemon (10.0.0.1:42736). Sep 13 00:03:37.884597 systemd[1]: sshd@3-10.0.0.70:22-10.0.0.1:42732.service: Deactivated successfully. Sep 13 00:03:37.887643 systemd-logind[1563]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:03:37.888566 systemd[1]: session-4.scope: Deactivated successfully. 
Sep 13 00:03:37.890440 systemd-logind[1563]: Removed session 4. Sep 13 00:03:37.922976 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 42736 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:03:37.924963 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:03:37.930473 systemd-logind[1563]: New session 5 of user core. Sep 13 00:03:37.941049 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:03:37.994465 sshd[1718]: pam_unix(sshd:session): session closed for user core Sep 13 00:03:38.002875 systemd[1]: Started sshd@5-10.0.0.70:22-10.0.0.1:42744.service - OpenSSH per-connection server daemon (10.0.0.1:42744). Sep 13 00:03:38.003440 systemd[1]: sshd@4-10.0.0.70:22-10.0.0.1:42736.service: Deactivated successfully. Sep 13 00:03:38.006336 systemd-logind[1563]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:03:38.007119 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:03:38.008189 systemd-logind[1563]: Removed session 5. Sep 13 00:03:38.037143 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 42744 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:03:38.038917 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:03:38.043871 systemd-logind[1563]: New session 6 of user core. Sep 13 00:03:38.059032 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 00:03:38.118702 sshd[1726]: pam_unix(sshd:session): session closed for user core Sep 13 00:03:38.129003 systemd[1]: Started sshd@6-10.0.0.70:22-10.0.0.1:42746.service - OpenSSH per-connection server daemon (10.0.0.1:42746). Sep 13 00:03:38.129578 systemd[1]: sshd@5-10.0.0.70:22-10.0.0.1:42744.service: Deactivated successfully. Sep 13 00:03:38.132325 systemd-logind[1563]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:03:38.133275 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:03:38.134668 systemd-logind[1563]: Removed session 6. Sep 13 00:03:38.166205 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 42746 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:03:38.168426 sshd[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:03:38.173316 systemd-logind[1563]: New session 7 of user core. Sep 13 00:03:38.183248 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:03:38.245226 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:03:38.245625 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:03:38.267648 sudo[1741]: pam_unix(sudo:session): session closed for user root Sep 13 00:03:38.270326 sshd[1734]: pam_unix(sshd:session): session closed for user core Sep 13 00:03:38.284064 systemd[1]: Started sshd@7-10.0.0.70:22-10.0.0.1:42762.service - OpenSSH per-connection server daemon (10.0.0.1:42762). Sep 13 00:03:38.284833 systemd[1]: sshd@6-10.0.0.70:22-10.0.0.1:42746.service: Deactivated successfully. Sep 13 00:03:38.288497 systemd-logind[1563]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:03:38.290093 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:03:38.291576 systemd-logind[1563]: Removed session 7. 
Sep 13 00:03:38.319288 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 42762 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:03:38.321220 sshd[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:03:38.325754 systemd-logind[1563]: New session 8 of user core. Sep 13 00:03:38.334001 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:03:38.391135 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:03:38.391498 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:03:38.395290 sudo[1751]: pam_unix(sudo:session): session closed for user root Sep 13 00:03:38.401588 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:03:38.401957 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:03:38.422861 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:03:38.424821 auditctl[1754]: No rules Sep 13 00:03:38.426320 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:03:38.426717 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:03:38.429013 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:03:38.466404 augenrules[1773]: No rules Sep 13 00:03:38.468396 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:03:38.469645 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:03:38.469656 sudo[1750]: pam_unix(sudo:session): session closed for user root Sep 13 00:03:38.471921 sshd[1743]: pam_unix(sshd:session): session closed for user core Sep 13 00:03:38.476948 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:03:38.477677 systemd[1]: sshd@7-10.0.0.70:22-10.0.0.1:42762.service: Deactivated successfully. Sep 13 00:03:38.481646 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:03:38.485715 systemd-logind[1563]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:03:38.488026 systemd[1]: Started sshd@8-10.0.0.70:22-10.0.0.1:42776.service - OpenSSH per-connection server daemon (10.0.0.1:42776). Sep 13 00:03:38.489086 systemd-logind[1563]: Removed session 8. Sep 13 00:03:38.529975 sshd[1786]: Accepted publickey for core from 10.0.0.1 port 42776 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:03:38.532032 sshd[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:03:38.536849 systemd-logind[1563]: New session 9 of user core. Sep 13 00:03:38.549007 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:03:38.607073 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:03:38.607567 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:03:38.710183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 13 00:03:38.715114 (kubelet)[1809]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:03:38.908783 kubelet[1809]: E0913 00:03:38.908659 1809 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:03:38.916332 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:03:38.916698 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:03:39.137040 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:03:39.138675 (dockerd)[1828]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:03:39.811008 dockerd[1828]: time="2025-09-13T00:03:39.810918377Z" level=info msg="Starting up" Sep 13 00:03:40.248985 dockerd[1828]: time="2025-09-13T00:03:40.248890693Z" level=info msg="Loading containers: start." Sep 13 00:03:40.440633 kernel: Initializing XFRM netlink socket Sep 13 00:03:40.545176 systemd-networkd[1250]: docker0: Link UP Sep 13 00:03:40.570576 dockerd[1828]: time="2025-09-13T00:03:40.570522853Z" level=info msg="Loading containers: done." Sep 13 00:03:40.591898 dockerd[1828]: time="2025-09-13T00:03:40.591838777Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:03:40.592069 dockerd[1828]: time="2025-09-13T00:03:40.591985374Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:03:40.592165 dockerd[1828]: time="2025-09-13T00:03:40.592129958Z" level=info msg="Daemon has completed initialization" Sep 13 00:03:40.636391 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:03:40.639111 dockerd[1828]: time="2025-09-13T00:03:40.638994689Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:03:41.650003 containerd[1583]: time="2025-09-13T00:03:41.649923788Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 00:03:42.401864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount407888934.mount: Deactivated successfully. 
Sep 13 00:03:44.588116 containerd[1583]: time="2025-09-13T00:03:44.588025116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:44.590636 containerd[1583]: time="2025-09-13T00:03:44.590542797Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 13 00:03:44.592530 containerd[1583]: time="2025-09-13T00:03:44.592494212Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:44.596033 containerd[1583]: time="2025-09-13T00:03:44.595998698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:44.597347 containerd[1583]: time="2025-09-13T00:03:44.597282806Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 2.947274804s" Sep 13 00:03:44.597469 containerd[1583]: time="2025-09-13T00:03:44.597358785Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 13 00:03:44.598305 containerd[1583]: time="2025-09-13T00:03:44.598259348Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 00:03:45.980313 containerd[1583]: time="2025-09-13T00:03:45.980237241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:45.981879 containerd[1583]: time="2025-09-13T00:03:45.981783335Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 13 00:03:45.982929 containerd[1583]: time="2025-09-13T00:03:45.982889281Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:45.987156 containerd[1583]: time="2025-09-13T00:03:45.987117651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:45.988828 containerd[1583]: time="2025-09-13T00:03:45.988789382Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.390493598s" Sep 13 00:03:45.988873 containerd[1583]: time="2025-09-13T00:03:45.988835184Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 13 
00:03:45.989555 containerd[1583]: time="2025-09-13T00:03:45.989500409Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 00:03:47.440860 containerd[1583]: time="2025-09-13T00:03:47.440796996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:47.441692 containerd[1583]: time="2025-09-13T00:03:47.441646515Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 13 00:03:47.442991 containerd[1583]: time="2025-09-13T00:03:47.442933997Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:47.446155 containerd[1583]: time="2025-09-13T00:03:47.446106006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:47.447730 containerd[1583]: time="2025-09-13T00:03:47.447686954Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.458143576s" Sep 13 00:03:47.447781 containerd[1583]: time="2025-09-13T00:03:47.447731408Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 13 00:03:47.448343 containerd[1583]: time="2025-09-13T00:03:47.448317796Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 00:03:49.120324 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:03:49.137015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:03:49.625376 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:03:49.638545 (kubelet)[2056]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:03:49.734378 kubelet[2056]: E0913 00:03:49.733969 2056 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:03:49.740837 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:03:49.741322 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:03:50.056875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3440635867.mount: Deactivated successfully. 
Sep 13 00:03:52.711022 containerd[1583]: time="2025-09-13T00:03:52.710920856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:52.721707 containerd[1583]: time="2025-09-13T00:03:52.721655311Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 13 00:03:52.750409 containerd[1583]: time="2025-09-13T00:03:52.750363642Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:52.777449 containerd[1583]: time="2025-09-13T00:03:52.777270390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:52.778281 containerd[1583]: time="2025-09-13T00:03:52.778247369Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 5.329892012s" Sep 13 00:03:52.778340 containerd[1583]: time="2025-09-13T00:03:52.778293288Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 13 00:03:52.779089 containerd[1583]: time="2025-09-13T00:03:52.779034902Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:03:54.170580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount906592586.mount: Deactivated successfully. 
Sep 13 00:03:58.248070 containerd[1583]: time="2025-09-13T00:03:58.247958127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:58.285541 containerd[1583]: time="2025-09-13T00:03:58.285443055Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 13 00:03:58.348297 containerd[1583]: time="2025-09-13T00:03:58.348197316Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:58.393155 containerd[1583]: time="2025-09-13T00:03:58.393051783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:58.394811 containerd[1583]: time="2025-09-13T00:03:58.394726112Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 5.615658586s" Sep 13 00:03:58.394893 containerd[1583]: time="2025-09-13T00:03:58.394814417Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:03:58.395879 containerd[1583]: time="2025-09-13T00:03:58.395836336Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:03:59.870246 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 13 00:03:59.879763 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:04:00.068762 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:04:00.073719 (kubelet)[2135]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:04:00.797687 kubelet[2135]: E0913 00:04:00.797618 2135 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:04:00.802927 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:04:00.803289 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:04:02.169496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1444950704.mount: Deactivated successfully. 
Sep 13 00:04:02.425720 containerd[1583]: time="2025-09-13T00:04:02.425472196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:02.454833 containerd[1583]: time="2025-09-13T00:04:02.454745430Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 13 00:04:02.468504 containerd[1583]: time="2025-09-13T00:04:02.468429754Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:02.511520 containerd[1583]: time="2025-09-13T00:04:02.511418637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:02.512293 containerd[1583]: time="2025-09-13T00:04:02.512235566Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 4.116346958s" Sep 13 00:04:02.512293 containerd[1583]: time="2025-09-13T00:04:02.512289718Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:04:02.512988 containerd[1583]: time="2025-09-13T00:04:02.512947522Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 00:04:06.942906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1592162163.mount: Deactivated successfully. Sep 13 00:04:10.870208 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 13 00:04:10.883982 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:04:11.051644 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:04:11.058517 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:04:11.756291 update_engine[1565]: I20250913 00:04:11.756138 1565 update_attempter.cc:509] Updating boot flags... Sep 13 00:04:11.946721 kubelet[2169]: E0913 00:04:11.946626 2169 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:04:11.951216 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:04:11.951654 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:04:12.622706 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2184) Sep 13 00:04:12.653638 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2183) Sep 13 00:04:12.691652 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2183) Sep 13 00:04:22.119952 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Sep 13 00:04:22.129905 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:04:22.318290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:04:22.327007 (kubelet)[2237]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:04:22.447445 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:04:22.480515 kubelet[2237]: E0913 00:04:22.443533 2237 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:04:22.447829 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:04:25.778085 containerd[1583]: time="2025-09-13T00:04:25.777986117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:25.798011 containerd[1583]: time="2025-09-13T00:04:25.797902604Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 13 00:04:25.829793 containerd[1583]: time="2025-09-13T00:04:25.829727374Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:25.873058 containerd[1583]: time="2025-09-13T00:04:25.872986261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:25.874657 containerd[1583]: time="2025-09-13T00:04:25.874615936Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 23.361616869s" Sep 13 00:04:25.874657 containerd[1583]: time="2025-09-13T00:04:25.874652762Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 13 00:04:27.706919 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:04:27.720960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:04:27.747015 systemd[1]: Reloading requested from client PID 2290 ('systemctl') (unit session-9.scope)... Sep 13 00:04:27.747038 systemd[1]: Reloading... Sep 13 00:04:27.827643 zram_generator::config[2329]: No configuration found. Sep 13 00:04:28.253328 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:04:28.338447 systemd[1]: Reloading finished in 590 ms. Sep 13 00:04:28.394123 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 00:04:28.394231 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 00:04:28.394662 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 13 00:04:28.397751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:04:28.579684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:04:28.585797 (kubelet)[2390]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:04:28.647667 kubelet[2390]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:04:28.647667 kubelet[2390]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:04:28.647667 kubelet[2390]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:04:28.647667 kubelet[2390]: I0913 00:04:28.647513 2390 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:04:28.844643 kubelet[2390]: I0913 00:04:28.844416 2390 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:04:28.844643 kubelet[2390]: I0913 00:04:28.844464 2390 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:04:28.844968 kubelet[2390]: I0913 00:04:28.844928 2390 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:04:28.914721 kubelet[2390]: E0913 00:04:28.914658 2390 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:28.915587 kubelet[2390]: I0913 00:04:28.915555 2390 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:04:28.923290 kubelet[2390]: E0913 00:04:28.923234 2390 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:04:28.923290 kubelet[2390]: I0913 00:04:28.923287 2390 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:04:28.933980 kubelet[2390]: I0913 00:04:28.933927 2390 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:04:28.934306 kubelet[2390]: I0913 00:04:28.934272 2390 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:04:28.934492 kubelet[2390]: I0913 00:04:28.934438 2390 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:04:28.934715 kubelet[2390]: I0913 00:04:28.934477 2390 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:04:28.934864 kubelet[2390]: I0913 00:04:28.934727 2390 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:04:28.934864 kubelet[2390]: I0913 00:04:28.934769 2390 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:04:28.934966 kubelet[2390]: I0913 00:04:28.934944 2390 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:04:28.941110 kubelet[2390]: I0913 00:04:28.940845 2390 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:04:28.941110 kubelet[2390]: I0913 00:04:28.941127 2390 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:04:28.941378 kubelet[2390]: I0913 00:04:28.941197 2390 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:04:28.941378 kubelet[2390]: I0913 00:04:28.941235 2390 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:04:28.942509 kubelet[2390]: W0913 00:04:28.942337 2390 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 13 00:04:28.945653 kubelet[2390]: E0913 00:04:28.944820 2390 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:28.945653 kubelet[2390]: W0913 00:04:28.944997 2390 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 13 00:04:28.945653 kubelet[2390]: E0913 00:04:28.945051 2390 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:28.950679 kubelet[2390]: I0913 00:04:28.950635 2390 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:04:28.951343 kubelet[2390]: I0913 00:04:28.951298 2390 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:04:28.951445 kubelet[2390]: W0913 00:04:28.951420 2390 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:04:28.957588 kubelet[2390]: I0913 00:04:28.957553 2390 server.go:1274] "Started kubelet" Sep 13 00:04:28.957588 kubelet[2390]: I0913 00:04:28.957678 2390 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:04:28.957588 kubelet[2390]: I0913 00:04:28.957675 2390 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:04:28.957588 kubelet[2390]: I0913 00:04:28.958178 2390 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:04:28.958994 kubelet[2390]: I0913 00:04:28.958956 2390 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:04:28.959367 kubelet[2390]: I0913 00:04:28.959328 2390 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:04:28.962422 kubelet[2390]: I0913 00:04:28.962370 2390 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:04:28.968414 kubelet[2390]: E0913 00:04:28.968090 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:28.968767 kubelet[2390]: I0913 00:04:28.968742 2390 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:04:28.969786 kubelet[2390]: I0913 00:04:28.969020 2390 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:04:28.969786 kubelet[2390]: I0913 00:04:28.969100 2390 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:04:28.969786 kubelet[2390]: W0913 00:04:28.969588 2390 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 13 00:04:28.969786 kubelet[2390]: E0913 00:04:28.969705 2390 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get \"https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:28.971460 kubelet[2390]: E0913 00:04:28.970322 2390 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="200ms" Sep 13 00:04:28.972964 kubelet[2390]: E0913 00:04:28.972931 2390 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:04:28.973347 kubelet[2390]: I0913 00:04:28.973331 2390 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:04:28.973347 kubelet[2390]: I0913 00:04:28.973347 2390 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:04:28.973456 kubelet[2390]: I0913 00:04:28.973438 2390 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:04:28.995405 kubelet[2390]: I0913 00:04:28.995320 2390 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:04:28.998525 kubelet[2390]: E0913 00:04:28.996310 2390 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.70:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.70:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864aeba1d25f6e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:04:28.957513443 +0000 UTC m=+0.364061263,LastTimestamp:2025-09-13 00:04:28.957513443 +0000 UTC m=+0.364061263,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:04:28.998832 kubelet[2390]: I0913 00:04:28.998762 2390 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:04:28.998941 kubelet[2390]: I0913 00:04:28.998841 2390 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:04:28.999010 kubelet[2390]: I0913 00:04:28.998987 2390 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:04:28.999090 kubelet[2390]: E0913 00:04:28.999063 2390 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:04:29.001005 kubelet[2390]: W0913 00:04:29.000559 2390 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 13 00:04:29.001005 kubelet[2390]: E0913 00:04:29.000696 2390 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:29.023318 kubelet[2390]: I0913 00:04:29.023258 2390 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:04:29.023318 kubelet[2390]: I0913 00:04:29.023283 2390 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:04:29.023318 kubelet[2390]: I0913 00:04:29.023316 2390 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:04:29.068609 kubelet[2390]: E0913 00:04:29.068523 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:29.100221 kubelet[2390]: E0913 00:04:29.099999 2390 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:04:29.169458 kubelet[2390]: E0913 00:04:29.169374 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:29.170962 kubelet[2390]: E0913 00:04:29.170920 2390 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="400ms" Sep 13 00:04:29.270297 kubelet[2390]: E0913 00:04:29.270201 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:29.300625 kubelet[2390]: E0913 00:04:29.300541 2390 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:04:29.371087 kubelet[2390]: E0913 00:04:29.371016 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:29.471500 kubelet[2390]: E0913 00:04:29.471388 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:29.572165 kubelet[2390]: E0913 00:04:29.572037 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:29.572668 kubelet[2390]: E0913 00:04:29.572570 2390 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: 
connection refused" interval="800ms" Sep 13 00:04:29.673256 kubelet[2390]: E0913 00:04:29.673055 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:29.701319 kubelet[2390]: E0913 00:04:29.701230 2390 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:04:29.773871 kubelet[2390]: E0913 00:04:29.773782 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:29.874323 kubelet[2390]: E0913 00:04:29.874249 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:29.975553 kubelet[2390]: E0913 00:04:29.975382 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:30.013049 kubelet[2390]: W0913 00:04:30.012923 2390 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 13 00:04:30.013049 kubelet[2390]: E0913 00:04:30.013050 2390 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:30.039845 kubelet[2390]: I0913 00:04:30.039786 2390 policy_none.go:49] "None policy: Start" Sep 13 00:04:30.040931 kubelet[2390]: I0913 00:04:30.040892 2390 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:04:30.040931 kubelet[2390]: I0913 00:04:30.040928 2390 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:04:30.076444 kubelet[2390]: E0913 00:04:30.076381 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:30.116844 kubelet[2390]: I0913 00:04:30.116789 2390 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:04:30.117666 kubelet[2390]: I0913 00:04:30.117070 2390 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:04:30.117666 kubelet[2390]: I0913 00:04:30.117089 2390 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:04:30.117666 kubelet[2390]: I0913 00:04:30.117573 2390 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:04:30.119116 kubelet[2390]: E0913 00:04:30.119081 2390 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 00:04:30.134069 kubelet[2390]: W0913 00:04:30.133974 2390 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 13 00:04:30.134069 kubelet[2390]: E0913 00:04:30.134067 2390 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:30.219747 kubelet[2390]: I0913 00:04:30.219696 2390 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:04:30.220190 kubelet[2390]: E0913 00:04:30.220140 2390 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Sep 13 00:04:30.373447 kubelet[2390]: E0913 00:04:30.373385 2390 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="1.6s" Sep 13 00:04:30.397326 kubelet[2390]: W0913 00:04:30.396757 2390 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 13 00:04:30.397326 kubelet[2390]: E0913 00:04:30.397109 2390 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:30.420744 kubelet[2390]: W0913 00:04:30.420651 2390 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 13 00:04:30.420744 kubelet[2390]: E0913 00:04:30.420745 2390 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:30.421956 kubelet[2390]: I0913 00:04:30.421892 2390 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:04:30.422391 kubelet[2390]: E0913 00:04:30.422347 2390 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Sep 13 00:04:30.579101 kubelet[2390]: I0913 00:04:30.578988 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:04:30.579101 kubelet[2390]: I0913 00:04:30.579081 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:04:30.579274 kubelet[2390]: 
I0913 00:04:30.579127 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:04:30.579274 kubelet[2390]: I0913 00:04:30.579176 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9d1c729376b8594ce1038c4fa7bff35-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a9d1c729376b8594ce1038c4fa7bff35\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:04:30.579274 kubelet[2390]: I0913 00:04:30.579199 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9d1c729376b8594ce1038c4fa7bff35-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a9d1c729376b8594ce1038c4fa7bff35\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:04:30.579274 kubelet[2390]: I0913 00:04:30.579224 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:04:30.579274 kubelet[2390]: I0913 00:04:30.579265 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9d1c729376b8594ce1038c4fa7bff35-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a9d1c729376b8594ce1038c4fa7bff35\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:04:30.579470 kubelet[2390]: I0913 00:04:30.579287 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:04:30.579470 kubelet[2390]: I0913 00:04:30.579319 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:04:30.808915 kubelet[2390]: E0913 00:04:30.808707 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:30.810403 containerd[1583]: time="2025-09-13T00:04:30.810350147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a9d1c729376b8594ce1038c4fa7bff35,Namespace:kube-system,Attempt:0,}" Sep 13 00:04:30.811698 kubelet[2390]: E0913 00:04:30.811640 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:30.812208 containerd[1583]: time="2025-09-13T00:04:30.812162695Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 13 00:04:30.812457 kubelet[2390]: E0913 00:04:30.812303 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:30.812791 containerd[1583]: time="2025-09-13T00:04:30.812649032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 13 00:04:30.824687 kubelet[2390]: I0913 00:04:30.824637 2390 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:04:30.825179 kubelet[2390]: E0913 00:04:30.825022 2390 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Sep 13 00:04:30.972397 kubelet[2390]: E0913 00:04:30.972326 2390 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:31.626848 kubelet[2390]: I0913 00:04:31.626799 2390 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:04:31.627247 kubelet[2390]: E0913 00:04:31.627221 2390 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Sep 13 00:04:31.974570 kubelet[2390]: E0913 00:04:31.974420 2390 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="3.2s" Sep 13 00:04:32.124218 kubelet[2390]: W0913 00:04:32.124172 2390 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 13 00:04:32.124218 kubelet[2390]: E0913 00:04:32.124226 2390 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:32.491161 kubelet[2390]: W0913 00:04:32.491107 2390 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 13 00:04:32.491161 kubelet[2390]: E0913 00:04:32.491164 2390 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" 
logger="UnhandledError" Sep 13 00:04:32.551089 kubelet[2390]: E0913 00:04:32.550958 2390 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.70:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.70:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864aeba1d25f6e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:04:28.957513443 +0000 UTC m=+0.364061263,LastTimestamp:2025-09-13 00:04:28.957513443 +0000 UTC m=+0.364061263,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:04:32.580572 kubelet[2390]: W0913 00:04:32.580531 2390 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 13 00:04:32.580678 kubelet[2390]: E0913 00:04:32.580570 2390 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:32.781466 kubelet[2390]: W0913 00:04:32.781298 2390 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 13 00:04:32.781466 kubelet[2390]: E0913 00:04:32.781366 2390 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:33.229475 kubelet[2390]: I0913 00:04:33.229422 2390 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:04:33.230032 kubelet[2390]: E0913 00:04:33.229874 2390 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Sep 13 00:04:35.175510 kubelet[2390]: E0913 00:04:35.175430 2390 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="6.4s" Sep 13 00:04:35.226913 kubelet[2390]: E0913 00:04:35.226847 2390 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:36.066038 kubelet[2390]: W0913 00:04:36.065964 2390 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 13 00:04:36.066038 kubelet[2390]: E0913 00:04:36.066028 2390 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:36.187022 kubelet[2390]: W0913 00:04:36.186941 2390 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 13 00:04:36.187022 kubelet[2390]: E0913 00:04:36.186992 2390 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:36.431967 kubelet[2390]: I0913 00:04:36.431903 2390 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:04:36.432437 kubelet[2390]: E0913 00:04:36.432396 2390 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Sep 13 00:04:36.465539 kubelet[2390]: W0913 00:04:36.465480 2390 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 13 00:04:36.465659 kubelet[2390]: E0913 00:04:36.465557 2390 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:36.501120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount141617421.mount: Deactivated successfully. 
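Every reflector, lease and node-registration attempt above fails with "connection refused" against https://10.0.0.70:6443 because the kube-apiserver static pod has not been created yet; the kubelet keeps retrying the lease with a growing interval (200ms, 400ms, 800ms, 1.6s, 3.2s, 6.4s, eventually 7s). One way to confirm from the node that this is only bootstrap ordering rather than a networking fault is sketched below; the containerd socket path is an assumption, it is not shown in this log.

    # Has the apiserver container been created yet? (assumes the default containerd CRI socket)
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube-apiserver
    # Does anything answer on the advertised address yet?
    curl -sk https://10.0.0.70:6443/healthz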
Sep 13 00:04:36.835814 kubelet[2390]: W0913 00:04:36.835651 2390 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 13 00:04:36.835814 kubelet[2390]: E0913 00:04:36.835712 2390 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:36.947595 containerd[1583]: time="2025-09-13T00:04:36.947507888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:04:37.035748 containerd[1583]: time="2025-09-13T00:04:37.035647648Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:04:37.079346 containerd[1583]: time="2025-09-13T00:04:37.079258488Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:04:37.127876 containerd[1583]: time="2025-09-13T00:04:37.127818900Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:04:37.215234 containerd[1583]: time="2025-09-13T00:04:37.215117866Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 13 00:04:37.308149 containerd[1583]: time="2025-09-13T00:04:37.308037006Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:04:37.415631 containerd[1583]: time="2025-09-13T00:04:37.415376562Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:04:37.577109 containerd[1583]: time="2025-09-13T00:04:37.577021102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:04:37.578342 containerd[1583]: time="2025-09-13T00:04:37.578271283Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 6.765544715s" Sep 13 00:04:37.579747 containerd[1583]: time="2025-09-13T00:04:37.579700693Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 6.767471513s" Sep 13 00:04:37.679977 
containerd[1583]: time="2025-09-13T00:04:37.679784078Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 6.869320261s" Sep 13 00:04:39.400101 containerd[1583]: time="2025-09-13T00:04:39.399991981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:04:39.400101 containerd[1583]: time="2025-09-13T00:04:39.400108333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:04:39.400662 containerd[1583]: time="2025-09-13T00:04:39.400149946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:39.400662 containerd[1583]: time="2025-09-13T00:04:39.400413241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:39.408863 containerd[1583]: time="2025-09-13T00:04:39.408543961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:04:39.408863 containerd[1583]: time="2025-09-13T00:04:39.408638430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:04:39.408863 containerd[1583]: time="2025-09-13T00:04:39.408650224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:39.408863 containerd[1583]: time="2025-09-13T00:04:39.408758229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:39.411321 containerd[1583]: time="2025-09-13T00:04:39.409700248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:04:39.411321 containerd[1583]: time="2025-09-13T00:04:39.409876089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:04:39.411321 containerd[1583]: time="2025-09-13T00:04:39.409897732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:39.411321 containerd[1583]: time="2025-09-13T00:04:39.410106478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:39.528922 containerd[1583]: time="2025-09-13T00:04:39.528862658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a9d1c729376b8594ce1038c4fa7bff35,Namespace:kube-system,Attempt:0,} returns sandbox id \"827d263cde5bb2188de288c35bd9368e0e60d536b28bca76c3ea10a68c918bb8\"" Sep 13 00:04:39.530838 kubelet[2390]: E0913 00:04:39.530800 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:39.534843 containerd[1583]: time="2025-09-13T00:04:39.534046740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca3cd5db9be2d6a824a0fb16950d9dace463c86326ad7e0761ca20359f124992\"" Sep 13 00:04:39.535456 containerd[1583]: time="2025-09-13T00:04:39.535407745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a80825ad410030f822ed8a1c0d0ee473934adb528007bc998590010dca9b541f\"" Sep 13 00:04:39.536730 kubelet[2390]: E0913 00:04:39.536683 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:39.536868 kubelet[2390]: E0913 00:04:39.536846 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:39.539712 containerd[1583]: time="2025-09-13T00:04:39.539676650Z" level=info msg="CreateContainer within sandbox \"a80825ad410030f822ed8a1c0d0ee473934adb528007bc998590010dca9b541f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:04:39.539778 containerd[1583]: time="2025-09-13T00:04:39.539718855Z" level=info msg="CreateContainer within sandbox \"ca3cd5db9be2d6a824a0fb16950d9dace463c86326ad7e0761ca20359f124992\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:04:39.539778 containerd[1583]: time="2025-09-13T00:04:39.539684967Z" level=info msg="CreateContainer within sandbox \"827d263cde5bb2188de288c35bd9368e0e60d536b28bca76c3ea10a68c918bb8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:04:40.119270 kubelet[2390]: E0913 00:04:40.119204 2390 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 00:04:41.385958 containerd[1583]: time="2025-09-13T00:04:41.385868978Z" level=info msg="CreateContainer within sandbox \"ca3cd5db9be2d6a824a0fb16950d9dace463c86326ad7e0761ca20359f124992\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d295cbad456a287150b7d89b68f5f551be7c79da42e36c6e4d39786dcd13f506\"" Sep 13 00:04:41.386823 containerd[1583]: time="2025-09-13T00:04:41.386790402Z" level=info msg="StartContainer for \"d295cbad456a287150b7d89b68f5f551be7c79da42e36c6e4d39786dcd13f506\"" Sep 13 00:04:41.576960 kubelet[2390]: E0913 00:04:41.576830 2390 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: 
connection refused" interval="7s" Sep 13 00:04:41.713420 containerd[1583]: time="2025-09-13T00:04:41.713247167Z" level=info msg="StartContainer for \"d295cbad456a287150b7d89b68f5f551be7c79da42e36c6e4d39786dcd13f506\" returns successfully" Sep 13 00:04:41.939997 containerd[1583]: time="2025-09-13T00:04:41.939928710Z" level=info msg="CreateContainer within sandbox \"827d263cde5bb2188de288c35bd9368e0e60d536b28bca76c3ea10a68c918bb8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a2a7d32a8cc494f13286e629ee5e22a19d651227225eae3468c85dc7b0f0cb6e\"" Sep 13 00:04:41.940700 containerd[1583]: time="2025-09-13T00:04:41.940670636Z" level=info msg="StartContainer for \"a2a7d32a8cc494f13286e629ee5e22a19d651227225eae3468c85dc7b0f0cb6e\"" Sep 13 00:04:42.268686 containerd[1583]: time="2025-09-13T00:04:42.268284997Z" level=info msg="StartContainer for \"a2a7d32a8cc494f13286e629ee5e22a19d651227225eae3468c85dc7b0f0cb6e\" returns successfully" Sep 13 00:04:42.268686 containerd[1583]: time="2025-09-13T00:04:42.268316300Z" level=info msg="CreateContainer within sandbox \"a80825ad410030f822ed8a1c0d0ee473934adb528007bc998590010dca9b541f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b14577e6d58744cb743108daa82b92d8c5655efe38d32692558fd1be0c3e456c\"" Sep 13 00:04:42.269122 containerd[1583]: time="2025-09-13T00:04:42.269075999Z" level=info msg="StartContainer for \"b14577e6d58744cb743108daa82b92d8c5655efe38d32692558fd1be0c3e456c\"" Sep 13 00:04:42.281590 kubelet[2390]: E0913 00:04:42.281539 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:42.436039 containerd[1583]: time="2025-09-13T00:04:42.435880460Z" level=info msg="StartContainer for \"b14577e6d58744cb743108daa82b92d8c5655efe38d32692558fd1be0c3e456c\" returns successfully" Sep 13 00:04:42.835429 kubelet[2390]: I0913 00:04:42.835378 2390 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:04:43.284326 kubelet[2390]: E0913 00:04:43.284193 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:43.284326 kubelet[2390]: E0913 00:04:43.284245 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:43.284326 kubelet[2390]: E0913 00:04:43.284331 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:43.294020 kubelet[2390]: E0913 00:04:43.293917 2390 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1864aeba1d25f6e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:04:28.957513443 +0000 UTC m=+0.364061263,LastTimestamp:2025-09-13 00:04:28.957513443 +0000 UTC m=+0.364061263,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:04:43.450005 kubelet[2390]: E0913 00:04:43.449878 2390 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1864aeba1d42108b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:04:28.959355019 +0000 UTC m=+0.365902839,LastTimestamp:2025-09-13 00:04:28.959355019 +0000 UTC m=+0.365902839,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:04:43.478782 kubelet[2390]: I0913 00:04:43.478724 2390 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 00:04:43.478782 kubelet[2390]: E0913 00:04:43.478773 2390 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 13 00:04:43.799948 kubelet[2390]: E0913 00:04:43.799700 2390 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1864aeba1e1107ba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:04:28.972918714 +0000 UTC m=+0.379466534,LastTimestamp:2025-09-13 00:04:28.972918714 +0000 UTC m=+0.379466534,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:04:44.286102 kubelet[2390]: E0913 00:04:44.286051 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:44.286102 kubelet[2390]: E0913 00:04:44.286100 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:44.286704 kubelet[2390]: E0913 00:04:44.286186 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:44.318919 kubelet[2390]: E0913 00:04:44.318855 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:44.319095 kubelet[2390]: E0913 00:04:44.318929 2390 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1864aeba2104091a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 
00:04:29.022398746 +0000 UTC m=+0.428946566,LastTimestamp:2025-09-13 00:04:29.022398746 +0000 UTC m=+0.428946566,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:04:44.419775 kubelet[2390]: E0913 00:04:44.419701 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:44.520658 kubelet[2390]: E0913 00:04:44.520576 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:44.621798 kubelet[2390]: E0913 00:04:44.621753 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:44.722169 kubelet[2390]: E0913 00:04:44.722104 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:44.823090 kubelet[2390]: E0913 00:04:44.823011 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:44.924013 kubelet[2390]: E0913 00:04:44.923828 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:45.024582 kubelet[2390]: E0913 00:04:45.024487 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:45.124979 kubelet[2390]: E0913 00:04:45.124920 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:45.225823 kubelet[2390]: E0913 00:04:45.225668 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:45.287627 kubelet[2390]: E0913 00:04:45.287550 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:45.326542 kubelet[2390]: E0913 00:04:45.326468 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:45.427482 kubelet[2390]: E0913 00:04:45.427375 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:45.528699 kubelet[2390]: E0913 00:04:45.528492 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:45.629107 kubelet[2390]: E0913 00:04:45.629034 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:45.729688 kubelet[2390]: E0913 00:04:45.729570 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:45.830665 kubelet[2390]: E0913 00:04:45.830492 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:45.930935 kubelet[2390]: E0913 00:04:45.930858 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:46.031353 kubelet[2390]: E0913 00:04:46.031271 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:46.132261 kubelet[2390]: E0913 00:04:46.132206 2390 kubelet_node_status.go:453] "Error getting the 
current node from lister" err="node \"localhost\" not found" Sep 13 00:04:46.232914 kubelet[2390]: E0913 00:04:46.232864 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:46.333492 kubelet[2390]: E0913 00:04:46.333429 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:46.434440 kubelet[2390]: E0913 00:04:46.434276 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:46.534505 kubelet[2390]: E0913 00:04:46.534450 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:46.635366 kubelet[2390]: E0913 00:04:46.635292 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:46.736143 kubelet[2390]: E0913 00:04:46.735944 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:46.836784 kubelet[2390]: E0913 00:04:46.836699 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:46.937588 kubelet[2390]: E0913 00:04:46.937517 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:46.940231 kubelet[2390]: E0913 00:04:46.940185 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:47.038695 kubelet[2390]: E0913 00:04:47.038515 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:47.464313 kubelet[2390]: E0913 00:04:47.464264 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:47.957732 kubelet[2390]: I0913 00:04:47.957670 2390 apiserver.go:52] "Watching apiserver" Sep 13 00:04:47.969666 kubelet[2390]: I0913 00:04:47.969612 2390 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:04:48.292014 kubelet[2390]: E0913 00:04:48.291847 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:49.550134 kubelet[2390]: I0913 00:04:49.550044 2390 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.549996104 podStartE2EDuration="2.549996104s" podCreationTimestamp="2025-09-13 00:04:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:04:49.549967889 +0000 UTC m=+20.956515709" watchObservedRunningTime="2025-09-13 00:04:49.549996104 +0000 UTC m=+20.956543925" Sep 13 00:04:54.249170 kubelet[2390]: E0913 00:04:54.249009 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:54.302468 kubelet[2390]: E0913 00:04:54.302399 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:57.384259 kubelet[2390]: E0913 00:04:57.384197 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:57.706988 kubelet[2390]: I0913 00:04:57.706789 2390 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.706771127 podStartE2EDuration="4.706771127s" podCreationTimestamp="2025-09-13 00:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:04:57.706734826 +0000 UTC m=+29.113282656" watchObservedRunningTime="2025-09-13 00:04:57.706771127 +0000 UTC m=+29.113318947" Sep 13 00:05:01.471296 systemd[1]: Reloading requested from client PID 2671 ('systemctl') (unit session-9.scope)... Sep 13 00:05:01.471313 systemd[1]: Reloading... Sep 13 00:05:01.634643 zram_generator::config[2711]: No configuration found. Sep 13 00:05:01.757223 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:05:01.854403 systemd[1]: Reloading finished in 382 ms. Sep 13 00:05:01.897771 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:05:01.907588 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:05:01.908110 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:05:01.917863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:05:02.137886 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:05:02.142980 (kubelet)[2765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:05:02.184327 kubelet[2765]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:05:02.184327 kubelet[2765]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:05:02.184327 kubelet[2765]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:05:02.184327 kubelet[2765]: I0913 00:05:02.183953 2765 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:05:02.192612 kubelet[2765]: I0913 00:05:02.192551 2765 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:05:02.192612 kubelet[2765]: I0913 00:05:02.192578 2765 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:05:02.192889 kubelet[2765]: I0913 00:05:02.192868 2765 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:05:02.194108 kubelet[2765]: I0913 00:05:02.194086 2765 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:05:02.195918 kubelet[2765]: I0913 00:05:02.195855 2765 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:05:02.202489 kubelet[2765]: E0913 00:05:02.202443 2765 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:05:02.202489 kubelet[2765]: I0913 00:05:02.202477 2765 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:05:02.208779 kubelet[2765]: I0913 00:05:02.208745 2765 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:05:02.209245 kubelet[2765]: I0913 00:05:02.209207 2765 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:05:02.209414 kubelet[2765]: I0913 00:05:02.209356 2765 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:05:02.209555 kubelet[2765]: I0913 00:05:02.209396 2765 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:05:02.209555 kubelet[2765]: I0913 00:05:02.209551 2765 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:05:02.209555 kubelet[2765]: I0913 00:05:02.209560 2765 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:05:02.209793 kubelet[2765]: I0913 00:05:02.209587 2765 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:05:02.209793 kubelet[2765]: I0913 00:05:02.209729 2765 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:05:02.209793 kubelet[2765]: I0913 00:05:02.209742 2765 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:05:02.209793 kubelet[2765]: I0913 00:05:02.209779 2765 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:05:02.209793 kubelet[2765]: I0913 00:05:02.209793 2765 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:05:02.211588 kubelet[2765]: I0913 00:05:02.210810 2765 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:05:02.211588 kubelet[2765]: I0913 00:05:02.211168 2765 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:05:02.213391 kubelet[2765]: I0913 00:05:02.211939 2765 server.go:1274] "Started kubelet" Sep 13 00:05:02.213391 kubelet[2765]: I0913 00:05:02.212763 2765 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:05:02.213391 kubelet[2765]: I0913 00:05:02.213309 2765 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:05:02.217048 kubelet[2765]: I0913 00:05:02.216795 2765 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:05:02.217048 kubelet[2765]: I0913 00:05:02.216887 2765 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:05:02.218352 kubelet[2765]: I0913 00:05:02.218323 2765 server.go:449] "Adding 
debug handlers to kubelet server" Sep 13 00:05:02.223398 kubelet[2765]: I0913 00:05:02.223343 2765 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:05:02.224241 kubelet[2765]: I0913 00:05:02.224190 2765 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:05:02.224447 kubelet[2765]: I0913 00:05:02.224414 2765 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:05:02.226828 kubelet[2765]: I0913 00:05:02.226794 2765 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:05:02.227943 kubelet[2765]: I0913 00:05:02.227615 2765 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:05:02.227943 kubelet[2765]: I0913 00:05:02.227779 2765 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:05:02.229731 kubelet[2765]: E0913 00:05:02.229704 2765 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:05:02.231656 kubelet[2765]: I0913 00:05:02.231139 2765 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:05:02.232486 kubelet[2765]: I0913 00:05:02.232449 2765 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:05:02.233792 kubelet[2765]: I0913 00:05:02.233769 2765 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:05:02.233792 kubelet[2765]: I0913 00:05:02.233792 2765 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:05:02.233862 kubelet[2765]: I0913 00:05:02.233812 2765 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:05:02.233888 kubelet[2765]: E0913 00:05:02.233862 2765 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:05:02.280467 kubelet[2765]: I0913 00:05:02.280433 2765 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:05:02.280467 kubelet[2765]: I0913 00:05:02.280459 2765 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:05:02.280467 kubelet[2765]: I0913 00:05:02.280482 2765 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:05:02.280705 kubelet[2765]: I0913 00:05:02.280694 2765 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:05:02.280728 kubelet[2765]: I0913 00:05:02.280707 2765 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:05:02.280749 kubelet[2765]: I0913 00:05:02.280727 2765 policy_none.go:49] "None policy: Start" Sep 13 00:05:02.281310 kubelet[2765]: I0913 00:05:02.281282 2765 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:05:02.281358 kubelet[2765]: I0913 00:05:02.281326 2765 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:05:02.281570 kubelet[2765]: I0913 00:05:02.281543 2765 state_mem.go:75] "Updated machine memory state" Sep 13 00:05:02.283900 kubelet[2765]: I0913 00:05:02.283248 2765 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:05:02.283900 kubelet[2765]: I0913 00:05:02.283468 2765 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 
00:05:02.283900 kubelet[2765]: I0913 00:05:02.283482 2765 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:05:02.283900 kubelet[2765]: I0913 00:05:02.283773 2765 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:05:02.389342 kubelet[2765]: I0913 00:05:02.389302 2765 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:05:02.428624 kubelet[2765]: I0913 00:05:02.428392 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9d1c729376b8594ce1038c4fa7bff35-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a9d1c729376b8594ce1038c4fa7bff35\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:05:02.428624 kubelet[2765]: I0913 00:05:02.428433 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9d1c729376b8594ce1038c4fa7bff35-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a9d1c729376b8594ce1038c4fa7bff35\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:05:02.428624 kubelet[2765]: I0913 00:05:02.428473 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:05:02.428624 kubelet[2765]: I0913 00:05:02.428512 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:05:02.428624 kubelet[2765]: I0913 00:05:02.428541 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:05:02.428941 kubelet[2765]: I0913 00:05:02.428560 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:05:02.428941 kubelet[2765]: I0913 00:05:02.428576 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9d1c729376b8594ce1038c4fa7bff35-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a9d1c729376b8594ce1038c4fa7bff35\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:05:02.428941 kubelet[2765]: I0913 00:05:02.428615 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:05:02.428941 kubelet[2765]: I0913 00:05:02.428651 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:05:02.492263 kubelet[2765]: E0913 00:05:02.492210 2765 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 13 00:05:02.492442 kubelet[2765]: E0913 00:05:02.492428 2765 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:05:02.709112 kubelet[2765]: I0913 00:05:02.708964 2765 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 13 00:05:02.709112 kubelet[2765]: I0913 00:05:02.709068 2765 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 00:05:02.719711 kubelet[2765]: E0913 00:05:02.719675 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:02.792853 kubelet[2765]: E0913 00:05:02.792815 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:02.793027 kubelet[2765]: E0913 00:05:02.792815 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:03.211014 kubelet[2765]: I0913 00:05:03.210949 2765 apiserver.go:52] "Watching apiserver" Sep 13 00:05:03.224985 kubelet[2765]: I0913 00:05:03.224916 2765 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:05:03.252518 kubelet[2765]: E0913 00:05:03.252410 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:03.252518 kubelet[2765]: E0913 00:05:03.252731 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:03.580460 kubelet[2765]: E0913 00:05:03.580304 2765 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 00:05:03.580617 kubelet[2765]: E0913 00:05:03.580511 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:03.897244 sudo[2799]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 00:05:03.897749 sudo[2799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 13 00:05:04.215157 kubelet[2765]: I0913 00:05:04.214771 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.214750767 
podStartE2EDuration="2.214750767s" podCreationTimestamp="2025-09-13 00:05:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:05:04.214559414 +0000 UTC m=+2.067460300" watchObservedRunningTime="2025-09-13 00:05:04.214750767 +0000 UTC m=+2.067651644" Sep 13 00:05:04.253541 kubelet[2765]: E0913 00:05:04.253489 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:04.253926 kubelet[2765]: E0913 00:05:04.253906 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:04.403098 sudo[2799]: pam_unix(sudo:session): session closed for user root Sep 13 00:05:05.255203 kubelet[2765]: E0913 00:05:05.255153 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:06.782512 kubelet[2765]: I0913 00:05:06.782307 2765 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:05:06.785039 containerd[1583]: time="2025-09-13T00:05:06.784983613Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:05:06.785467 kubelet[2765]: I0913 00:05:06.785434 2765 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:05:07.617778 kubelet[2765]: E0913 00:05:07.617722 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:08.258761 kubelet[2765]: E0913 00:05:08.258726 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:08.471910 kubelet[2765]: I0913 00:05:08.471853 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8d011cc-6813-4c42-bf85-a10d8e531581-lib-modules\") pod \"kube-proxy-5j75j\" (UID: \"b8d011cc-6813-4c42-bf85-a10d8e531581\") " pod="kube-system/kube-proxy-5j75j" Sep 13 00:05:08.471910 kubelet[2765]: I0913 00:05:08.471910 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-cilium-run\") pod \"cilium-bj4cq\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " pod="kube-system/cilium-bj4cq" Sep 13 00:05:08.472138 kubelet[2765]: I0913 00:05:08.471936 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-host-proc-sys-kernel\") pod \"cilium-bj4cq\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " pod="kube-system/cilium-bj4cq" Sep 13 00:05:08.472138 kubelet[2765]: I0913 00:05:08.471957 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/283ba863-3c4a-4b64-8c59-5547b598240b-cilium-config-path\") pod 
\"cilium-bj4cq\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " pod="kube-system/cilium-bj4cq" Sep 13 00:05:08.472138 kubelet[2765]: I0913 00:05:08.472026 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b8d011cc-6813-4c42-bf85-a10d8e531581-kube-proxy\") pod \"kube-proxy-5j75j\" (UID: \"b8d011cc-6813-4c42-bf85-a10d8e531581\") " pod="kube-system/kube-proxy-5j75j" Sep 13 00:05:08.472138 kubelet[2765]: I0913 00:05:08.472086 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-hostproc\") pod \"cilium-bj4cq\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " pod="kube-system/cilium-bj4cq" Sep 13 00:05:08.472138 kubelet[2765]: I0913 00:05:08.472120 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-cilium-cgroup\") pod \"cilium-bj4cq\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " pod="kube-system/cilium-bj4cq" Sep 13 00:05:08.472138 kubelet[2765]: I0913 00:05:08.472138 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-xtables-lock\") pod \"cilium-bj4cq\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " pod="kube-system/cilium-bj4cq" Sep 13 00:05:08.472328 kubelet[2765]: I0913 00:05:08.472205 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlhvl\" (UniqueName: \"kubernetes.io/projected/b8d011cc-6813-4c42-bf85-a10d8e531581-kube-api-access-zlhvl\") pod \"kube-proxy-5j75j\" (UID: \"b8d011cc-6813-4c42-bf85-a10d8e531581\") " pod="kube-system/kube-proxy-5j75j" Sep 13 00:05:08.472328 kubelet[2765]: I0913 00:05:08.472295 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8d011cc-6813-4c42-bf85-a10d8e531581-xtables-lock\") pod \"kube-proxy-5j75j\" (UID: \"b8d011cc-6813-4c42-bf85-a10d8e531581\") " pod="kube-system/kube-proxy-5j75j" Sep 13 00:05:08.472328 kubelet[2765]: I0913 00:05:08.472313 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-bpf-maps\") pod \"cilium-bj4cq\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " pod="kube-system/cilium-bj4cq" Sep 13 00:05:08.472328 kubelet[2765]: I0913 00:05:08.472327 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-cni-path\") pod \"cilium-bj4cq\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " pod="kube-system/cilium-bj4cq" Sep 13 00:05:08.472455 kubelet[2765]: I0913 00:05:08.472342 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-lib-modules\") pod \"cilium-bj4cq\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " pod="kube-system/cilium-bj4cq" Sep 13 00:05:08.472455 kubelet[2765]: I0913 00:05:08.472361 2765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-host-proc-sys-net\") pod \"cilium-bj4cq\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " pod="kube-system/cilium-bj4cq" Sep 13 00:05:08.472455 kubelet[2765]: I0913 00:05:08.472389 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-etc-cni-netd\") pod \"cilium-bj4cq\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " pod="kube-system/cilium-bj4cq" Sep 13 00:05:08.472455 kubelet[2765]: I0913 00:05:08.472407 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/283ba863-3c4a-4b64-8c59-5547b598240b-hubble-tls\") pod \"cilium-bj4cq\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " pod="kube-system/cilium-bj4cq" Sep 13 00:05:08.472455 kubelet[2765]: I0913 00:05:08.472422 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/283ba863-3c4a-4b64-8c59-5547b598240b-clustermesh-secrets\") pod \"cilium-bj4cq\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " pod="kube-system/cilium-bj4cq" Sep 13 00:05:08.472455 kubelet[2765]: I0913 00:05:08.472438 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wqxh\" (UniqueName: \"kubernetes.io/projected/283ba863-3c4a-4b64-8c59-5547b598240b-kube-api-access-2wqxh\") pod \"cilium-bj4cq\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " pod="kube-system/cilium-bj4cq" Sep 13 00:05:08.774448 kubelet[2765]: I0913 00:05:08.774401 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b569177-58c3-45bc-aa33-16e435816c8f-cilium-config-path\") pod \"cilium-operator-5d85765b45-md67x\" (UID: \"5b569177-58c3-45bc-aa33-16e435816c8f\") " pod="kube-system/cilium-operator-5d85765b45-md67x" Sep 13 00:05:08.774448 kubelet[2765]: I0913 00:05:08.774451 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5j4n\" (UniqueName: \"kubernetes.io/projected/5b569177-58c3-45bc-aa33-16e435816c8f-kube-api-access-x5j4n\") pod \"cilium-operator-5d85765b45-md67x\" (UID: \"5b569177-58c3-45bc-aa33-16e435816c8f\") " pod="kube-system/cilium-operator-5d85765b45-md67x" Sep 13 00:05:08.982807 kubelet[2765]: E0913 00:05:08.982750 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:08.983411 kubelet[2765]: E0913 00:05:08.983362 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:08.983842 containerd[1583]: time="2025-09-13T00:05:08.983793892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5j75j,Uid:b8d011cc-6813-4c42-bf85-a10d8e531581,Namespace:kube-system,Attempt:0,}" Sep 13 00:05:08.984473 containerd[1583]: time="2025-09-13T00:05:08.984385894Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-bj4cq,Uid:283ba863-3c4a-4b64-8c59-5547b598240b,Namespace:kube-system,Attempt:0,}" Sep 13 00:05:09.056322 kubelet[2765]: E0913 00:05:09.056001 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:09.056763 containerd[1583]: time="2025-09-13T00:05:09.056693555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-md67x,Uid:5b569177-58c3-45bc-aa33-16e435816c8f,Namespace:kube-system,Attempt:0,}" Sep 13 00:05:09.822097 containerd[1583]: time="2025-09-13T00:05:09.821982965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:09.822097 containerd[1583]: time="2025-09-13T00:05:09.822043127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:09.822097 containerd[1583]: time="2025-09-13T00:05:09.822054428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:09.822351 containerd[1583]: time="2025-09-13T00:05:09.822151048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:09.880761 containerd[1583]: time="2025-09-13T00:05:09.880681541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5j75j,Uid:b8d011cc-6813-4c42-bf85-a10d8e531581,Namespace:kube-system,Attempt:0,} returns sandbox id \"714ddc27a7af9b4d40b13674b447d6f2fac59a861243a713d71dbc27ba9ee0b4\"" Sep 13 00:05:09.881637 kubelet[2765]: E0913 00:05:09.881611 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:09.883704 containerd[1583]: time="2025-09-13T00:05:09.883665349Z" level=info msg="CreateContainer within sandbox \"714ddc27a7af9b4d40b13674b447d6f2fac59a861243a713d71dbc27ba9ee0b4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:05:09.981790 containerd[1583]: time="2025-09-13T00:05:09.981688033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:09.981927 containerd[1583]: time="2025-09-13T00:05:09.981798269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:09.981927 containerd[1583]: time="2025-09-13T00:05:09.981816392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:09.982109 containerd[1583]: time="2025-09-13T00:05:09.982063542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:10.031389 containerd[1583]: time="2025-09-13T00:05:10.031300506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bj4cq,Uid:283ba863-3c4a-4b64-8c59-5547b598240b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892\"" Sep 13 00:05:10.032395 kubelet[2765]: E0913 00:05:10.032344 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:10.033621 containerd[1583]: time="2025-09-13T00:05:10.033564429Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 00:05:10.381987 containerd[1583]: time="2025-09-13T00:05:10.381181824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:10.382165 containerd[1583]: time="2025-09-13T00:05:10.382003055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:10.382165 containerd[1583]: time="2025-09-13T00:05:10.382054492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:10.382235 containerd[1583]: time="2025-09-13T00:05:10.382166912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:10.437664 containerd[1583]: time="2025-09-13T00:05:10.437619645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-md67x,Uid:5b569177-58c3-45bc-aa33-16e435816c8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f0c7d2ce0ef667c889e00c465488e52a1e07718e5a0356b32f10802b4da8c88\"" Sep 13 00:05:10.438211 kubelet[2765]: E0913 00:05:10.438175 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:10.590119 kubelet[2765]: E0913 00:05:10.590074 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:10.682275 containerd[1583]: time="2025-09-13T00:05:10.682069334Z" level=info msg="CreateContainer within sandbox \"714ddc27a7af9b4d40b13674b447d6f2fac59a861243a713d71dbc27ba9ee0b4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"82cc3966b7f5b8afdb6503f6d1073430138b3220892c830d24ca464b3f708ab6\"" Sep 13 00:05:10.683101 containerd[1583]: time="2025-09-13T00:05:10.683036017Z" level=info msg="StartContainer for \"82cc3966b7f5b8afdb6503f6d1073430138b3220892c830d24ca464b3f708ab6\"" Sep 13 00:05:10.837302 containerd[1583]: time="2025-09-13T00:05:10.837217689Z" level=info msg="StartContainer for \"82cc3966b7f5b8afdb6503f6d1073430138b3220892c830d24ca464b3f708ab6\" returns successfully" Sep 13 00:05:11.271164 kubelet[2765]: E0913 00:05:11.271116 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:11.272704 kubelet[2765]: E0913 00:05:11.272685 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:11.861760 kubelet[2765]: I0913 00:05:11.861664 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5j75j" podStartSLOduration=4.861637578 podStartE2EDuration="4.861637578s" podCreationTimestamp="2025-09-13 00:05:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:05:11.497625073 +0000 UTC m=+9.350525949" watchObservedRunningTime="2025-09-13 00:05:11.861637578 +0000 UTC m=+9.714538454" Sep 13 00:05:12.274357 kubelet[2765]: E0913 00:05:12.274280 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:19.683067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3167546306.mount: Deactivated successfully. Sep 13 00:05:32.050836 systemd-resolved[1458]: Under memory pressure, flushing caches. Sep 13 00:05:32.085979 systemd-journald[1161]: Under memory pressure, flushing caches. Sep 13 00:05:32.050903 systemd-resolved[1458]: Flushed all caches. Sep 13 00:05:32.594072 containerd[1583]: time="2025-09-13T00:05:32.594001279Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:32.653046 containerd[1583]: time="2025-09-13T00:05:32.652956864Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 13 00:05:32.702146 containerd[1583]: time="2025-09-13T00:05:32.702074432Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:32.703688 containerd[1583]: time="2025-09-13T00:05:32.703634170Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 22.669929329s" Sep 13 00:05:32.703688 containerd[1583]: time="2025-09-13T00:05:32.703673434Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 13 00:05:32.705041 containerd[1583]: time="2025-09-13T00:05:32.705010108Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 00:05:32.723297 containerd[1583]: time="2025-09-13T00:05:32.723243610Z" level=info msg="CreateContainer within sandbox \"2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:05:33.836746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2639102462.mount: Deactivated successfully. 
Sep 13 00:05:34.632525 containerd[1583]: time="2025-09-13T00:05:34.632447034Z" level=info msg="CreateContainer within sandbox \"2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4639952cb5b1eb8e29a8be30b295150c90959aa4236ba1541385c7b72d7fa4b9\"" Sep 13 00:05:34.633207 containerd[1583]: time="2025-09-13T00:05:34.633097343Z" level=info msg="StartContainer for \"4639952cb5b1eb8e29a8be30b295150c90959aa4236ba1541385c7b72d7fa4b9\"" Sep 13 00:05:35.071957 containerd[1583]: time="2025-09-13T00:05:35.071779745Z" level=info msg="StartContainer for \"4639952cb5b1eb8e29a8be30b295150c90959aa4236ba1541385c7b72d7fa4b9\" returns successfully" Sep 13 00:05:35.091389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4639952cb5b1eb8e29a8be30b295150c90959aa4236ba1541385c7b72d7fa4b9-rootfs.mount: Deactivated successfully. Sep 13 00:05:35.410617 kubelet[2765]: E0913 00:05:35.410553 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:36.412364 kubelet[2765]: E0913 00:05:36.412327 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:36.796683 containerd[1583]: time="2025-09-13T00:05:36.794482539Z" level=info msg="shim disconnected" id=4639952cb5b1eb8e29a8be30b295150c90959aa4236ba1541385c7b72d7fa4b9 namespace=k8s.io Sep 13 00:05:36.796683 containerd[1583]: time="2025-09-13T00:05:36.796567725Z" level=warning msg="cleaning up after shim disconnected" id=4639952cb5b1eb8e29a8be30b295150c90959aa4236ba1541385c7b72d7fa4b9 namespace=k8s.io Sep 13 00:05:36.796683 containerd[1583]: time="2025-09-13T00:05:36.796586010Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:05:37.416820 kubelet[2765]: E0913 00:05:37.416762 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:37.419685 containerd[1583]: time="2025-09-13T00:05:37.419633399Z" level=info msg="CreateContainer within sandbox \"2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:05:39.236556 containerd[1583]: time="2025-09-13T00:05:39.236485226Z" level=info msg="CreateContainer within sandbox \"2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e\"" Sep 13 00:05:39.237084 containerd[1583]: time="2025-09-13T00:05:39.236976724Z" level=info msg="StartContainer for \"a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e\"" Sep 13 00:05:39.332930 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:05:39.333468 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:05:39.333538 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:05:39.340888 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:05:39.505560 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 13 00:05:39.632740 containerd[1583]: time="2025-09-13T00:05:39.632655656Z" level=info msg="StartContainer for \"a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e\" returns successfully" Sep 13 00:05:39.651707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e-rootfs.mount: Deactivated successfully. Sep 13 00:05:40.132906 containerd[1583]: time="2025-09-13T00:05:40.132830653Z" level=info msg="shim disconnected" id=a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e namespace=k8s.io Sep 13 00:05:40.132906 containerd[1583]: time="2025-09-13T00:05:40.132889987Z" level=warning msg="cleaning up after shim disconnected" id=a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e namespace=k8s.io Sep 13 00:05:40.132906 containerd[1583]: time="2025-09-13T00:05:40.132913380Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:05:40.639848 kubelet[2765]: E0913 00:05:40.639796 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:40.641976 containerd[1583]: time="2025-09-13T00:05:40.641913667Z" level=info msg="CreateContainer within sandbox \"2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:05:41.495778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3507185983.mount: Deactivated successfully. Sep 13 00:05:42.595977 containerd[1583]: time="2025-09-13T00:05:42.595873199Z" level=info msg="CreateContainer within sandbox \"2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f8f51c35ccf285e1f6cc55c95f22f3fcee1a2c3d90f67dc66162ecde23771e80\"" Sep 13 00:05:42.596752 containerd[1583]: time="2025-09-13T00:05:42.596536527Z" level=info msg="StartContainer for \"f8f51c35ccf285e1f6cc55c95f22f3fcee1a2c3d90f67dc66162ecde23771e80\"" Sep 13 00:05:42.980184 containerd[1583]: time="2025-09-13T00:05:42.980070960Z" level=info msg="StartContainer for \"f8f51c35ccf285e1f6cc55c95f22f3fcee1a2c3d90f67dc66162ecde23771e80\" returns successfully" Sep 13 00:05:43.000993 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8f51c35ccf285e1f6cc55c95f22f3fcee1a2c3d90f67dc66162ecde23771e80-rootfs.mount: Deactivated successfully. 
Sep 13 00:05:43.249681 containerd[1583]: time="2025-09-13T00:05:43.249481662Z" level=info msg="shim disconnected" id=f8f51c35ccf285e1f6cc55c95f22f3fcee1a2c3d90f67dc66162ecde23771e80 namespace=k8s.io Sep 13 00:05:43.249681 containerd[1583]: time="2025-09-13T00:05:43.249545243Z" level=warning msg="cleaning up after shim disconnected" id=f8f51c35ccf285e1f6cc55c95f22f3fcee1a2c3d90f67dc66162ecde23771e80 namespace=k8s.io Sep 13 00:05:43.249681 containerd[1583]: time="2025-09-13T00:05:43.249556535Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:05:43.654408 kubelet[2765]: E0913 00:05:43.654369 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:43.656112 containerd[1583]: time="2025-09-13T00:05:43.656061622Z" level=info msg="CreateContainer within sandbox \"2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:05:44.866930 containerd[1583]: time="2025-09-13T00:05:44.866867223Z" level=info msg="CreateContainer within sandbox \"2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0c413de3db3796ff3a5d3421571d18924e6da8a24f053dfe63230f9cded45f6e\"" Sep 13 00:05:44.867487 containerd[1583]: time="2025-09-13T00:05:44.867366249Z" level=info msg="StartContainer for \"0c413de3db3796ff3a5d3421571d18924e6da8a24f053dfe63230f9cded45f6e\"" Sep 13 00:05:46.245287 containerd[1583]: time="2025-09-13T00:05:46.245238517Z" level=info msg="StartContainer for \"0c413de3db3796ff3a5d3421571d18924e6da8a24f053dfe63230f9cded45f6e\" returns successfully" Sep 13 00:05:46.262545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c413de3db3796ff3a5d3421571d18924e6da8a24f053dfe63230f9cded45f6e-rootfs.mount: Deactivated successfully. 
Sep 13 00:05:46.611034 containerd[1583]: time="2025-09-13T00:05:46.610781677Z" level=info msg="shim disconnected" id=0c413de3db3796ff3a5d3421571d18924e6da8a24f053dfe63230f9cded45f6e namespace=k8s.io Sep 13 00:05:46.611034 containerd[1583]: time="2025-09-13T00:05:46.610891558Z" level=warning msg="cleaning up after shim disconnected" id=0c413de3db3796ff3a5d3421571d18924e6da8a24f053dfe63230f9cded45f6e namespace=k8s.io Sep 13 00:05:46.611034 containerd[1583]: time="2025-09-13T00:05:46.610909822Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:05:47.149207 containerd[1583]: time="2025-09-13T00:05:47.149109096Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:47.164427 containerd[1583]: time="2025-09-13T00:05:47.164331464Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 13 00:05:47.187250 containerd[1583]: time="2025-09-13T00:05:47.187195009Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:47.189124 containerd[1583]: time="2025-09-13T00:05:47.189057549Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 14.484014149s" Sep 13 00:05:47.189232 containerd[1583]: time="2025-09-13T00:05:47.189129056Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 13 00:05:47.191703 containerd[1583]: time="2025-09-13T00:05:47.191640842Z" level=info msg="CreateContainer within sandbox \"7f0c7d2ce0ef667c889e00c465488e52a1e07718e5a0356b32f10802b4da8c88\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 13 00:05:47.258285 kubelet[2765]: E0913 00:05:47.258242 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:47.261141 containerd[1583]: time="2025-09-13T00:05:47.261078742Z" level=info msg="CreateContainer within sandbox \"2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:05:47.591964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2111134628.mount: Deactivated successfully. 
Sep 13 00:05:47.846301 containerd[1583]: time="2025-09-13T00:05:47.846141806Z" level=info msg="CreateContainer within sandbox \"7f0c7d2ce0ef667c889e00c465488e52a1e07718e5a0356b32f10802b4da8c88\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4\"" Sep 13 00:05:47.846827 containerd[1583]: time="2025-09-13T00:05:47.846747356Z" level=info msg="StartContainer for \"e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4\"" Sep 13 00:05:48.363986 containerd[1583]: time="2025-09-13T00:05:48.363848011Z" level=info msg="StartContainer for \"e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4\" returns successfully" Sep 13 00:05:48.510004 containerd[1583]: time="2025-09-13T00:05:48.509931862Z" level=info msg="CreateContainer within sandbox \"2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"716b2880babd39da05a92342d3aa28fce3dae425c4554dcb83404802e3ebc7a8\"" Sep 13 00:05:48.512273 containerd[1583]: time="2025-09-13T00:05:48.512239127Z" level=info msg="StartContainer for \"716b2880babd39da05a92342d3aa28fce3dae425c4554dcb83404802e3ebc7a8\"" Sep 13 00:05:48.559643 systemd[1]: run-containerd-runc-k8s.io-716b2880babd39da05a92342d3aa28fce3dae425c4554dcb83404802e3ebc7a8-runc.aZ9grc.mount: Deactivated successfully. Sep 13 00:05:48.785005 containerd[1583]: time="2025-09-13T00:05:48.784925898Z" level=info msg="StartContainer for \"716b2880babd39da05a92342d3aa28fce3dae425c4554dcb83404802e3ebc7a8\" returns successfully" Sep 13 00:05:48.947449 kubelet[2765]: I0913 00:05:48.947406 2765 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:05:49.370635 kubelet[2765]: E0913 00:05:49.370419 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:49.370635 kubelet[2765]: E0913 00:05:49.370499 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:50.118548 kubelet[2765]: I0913 00:05:50.115643 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-md67x" podStartSLOduration=6.364319856 podStartE2EDuration="43.115620634s" podCreationTimestamp="2025-09-13 00:05:07 +0000 UTC" firstStartedPulling="2025-09-13 00:05:10.438718565 +0000 UTC m=+8.291619451" lastFinishedPulling="2025-09-13 00:05:47.190019353 +0000 UTC m=+45.042920229" observedRunningTime="2025-09-13 00:05:50.115186952 +0000 UTC m=+47.968087828" watchObservedRunningTime="2025-09-13 00:05:50.115620634 +0000 UTC m=+47.968521510" Sep 13 00:05:50.232634 kubelet[2765]: I0913 00:05:50.229768 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4vf9\" (UniqueName: \"kubernetes.io/projected/e1acdd13-45f5-4bf0-ac84-06d4479f8bcc-kube-api-access-l4vf9\") pod \"coredns-7c65d6cfc9-zxczx\" (UID: \"e1acdd13-45f5-4bf0-ac84-06d4479f8bcc\") " pod="kube-system/coredns-7c65d6cfc9-zxczx" Sep 13 00:05:50.232634 kubelet[2765]: I0913 00:05:50.229837 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1acdd13-45f5-4bf0-ac84-06d4479f8bcc-config-volume\") 
pod \"coredns-7c65d6cfc9-zxczx\" (UID: \"e1acdd13-45f5-4bf0-ac84-06d4479f8bcc\") " pod="kube-system/coredns-7c65d6cfc9-zxczx" Sep 13 00:05:50.232634 kubelet[2765]: I0913 00:05:50.229886 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/846a6438-2a5f-4b94-a3b1-b944d22d397b-config-volume\") pod \"coredns-7c65d6cfc9-q58pd\" (UID: \"846a6438-2a5f-4b94-a3b1-b944d22d397b\") " pod="kube-system/coredns-7c65d6cfc9-q58pd" Sep 13 00:05:50.232634 kubelet[2765]: I0913 00:05:50.229907 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjst5\" (UniqueName: \"kubernetes.io/projected/846a6438-2a5f-4b94-a3b1-b944d22d397b-kube-api-access-zjst5\") pod \"coredns-7c65d6cfc9-q58pd\" (UID: \"846a6438-2a5f-4b94-a3b1-b944d22d397b\") " pod="kube-system/coredns-7c65d6cfc9-q58pd" Sep 13 00:05:50.372124 kubelet[2765]: E0913 00:05:50.372065 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:50.372329 kubelet[2765]: E0913 00:05:50.372211 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:50.693712 kubelet[2765]: E0913 00:05:50.693531 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:50.693712 kubelet[2765]: E0913 00:05:50.693546 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:50.696936 containerd[1583]: time="2025-09-13T00:05:50.696896901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q58pd,Uid:846a6438-2a5f-4b94-a3b1-b944d22d397b,Namespace:kube-system,Attempt:0,}" Sep 13 00:05:50.697364 containerd[1583]: time="2025-09-13T00:05:50.696896831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zxczx,Uid:e1acdd13-45f5-4bf0-ac84-06d4479f8bcc,Namespace:kube-system,Attempt:0,}" Sep 13 00:05:51.207972 kubelet[2765]: I0913 00:05:51.207910 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bj4cq" podStartSLOduration=21.536028939 podStartE2EDuration="44.207887653s" podCreationTimestamp="2025-09-13 00:05:07 +0000 UTC" firstStartedPulling="2025-09-13 00:05:10.033008582 +0000 UTC m=+7.885909458" lastFinishedPulling="2025-09-13 00:05:32.704867296 +0000 UTC m=+30.557768172" observedRunningTime="2025-09-13 00:05:51.207792581 +0000 UTC m=+49.060693457" watchObservedRunningTime="2025-09-13 00:05:51.207887653 +0000 UTC m=+49.060788529" Sep 13 00:05:51.374768 kubelet[2765]: E0913 00:05:51.374710 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:53.070946 systemd-networkd[1250]: cilium_host: Link UP Sep 13 00:05:53.071193 systemd-networkd[1250]: cilium_net: Link UP Sep 13 00:05:53.071198 systemd-networkd[1250]: cilium_net: Gained carrier Sep 13 00:05:53.071453 systemd-networkd[1250]: cilium_host: Gained carrier Sep 13 00:05:53.071778 systemd-networkd[1250]: cilium_host: Gained 
IPv6LL Sep 13 00:05:53.197141 systemd-networkd[1250]: cilium_vxlan: Link UP Sep 13 00:05:53.197154 systemd-networkd[1250]: cilium_vxlan: Gained carrier Sep 13 00:05:53.578659 kernel: NET: Registered PF_ALG protocol family Sep 13 00:05:53.810818 systemd-networkd[1250]: cilium_net: Gained IPv6LL Sep 13 00:05:54.309690 systemd-networkd[1250]: lxc_health: Link UP Sep 13 00:05:54.320576 systemd-networkd[1250]: lxc_health: Gained carrier Sep 13 00:05:54.323885 systemd-networkd[1250]: cilium_vxlan: Gained IPv6LL Sep 13 00:05:54.834160 systemd-networkd[1250]: lxcb8818fda59a0: Link UP Sep 13 00:05:54.842655 kernel: eth0: renamed from tmp01d9a Sep 13 00:05:54.857762 systemd-networkd[1250]: lxc3d29e520e82c: Link UP Sep 13 00:05:54.867333 systemd-networkd[1250]: lxcb8818fda59a0: Gained carrier Sep 13 00:05:54.869644 kernel: eth0: renamed from tmpae87f Sep 13 00:05:54.878809 systemd-networkd[1250]: lxc3d29e520e82c: Gained carrier Sep 13 00:05:54.988728 kubelet[2765]: E0913 00:05:54.986909 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:55.384513 kubelet[2765]: E0913 00:05:55.384467 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:55.922868 systemd-networkd[1250]: lxc_health: Gained IPv6LL Sep 13 00:05:55.923232 systemd-networkd[1250]: lxc3d29e520e82c: Gained IPv6LL Sep 13 00:05:56.386723 kubelet[2765]: E0913 00:05:56.386671 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:56.626803 systemd-networkd[1250]: lxcb8818fda59a0: Gained IPv6LL Sep 13 00:05:58.744631 containerd[1583]: time="2025-09-13T00:05:58.743718973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:58.744631 containerd[1583]: time="2025-09-13T00:05:58.743781633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:58.744631 containerd[1583]: time="2025-09-13T00:05:58.743795990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:58.750964 containerd[1583]: time="2025-09-13T00:05:58.749640754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:58.750964 containerd[1583]: time="2025-09-13T00:05:58.749818526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:58.750964 containerd[1583]: time="2025-09-13T00:05:58.750030464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:58.752001 containerd[1583]: time="2025-09-13T00:05:58.751110321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:58.752001 containerd[1583]: time="2025-09-13T00:05:58.751777494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:58.777514 systemd[1]: run-containerd-runc-k8s.io-ae87f96edf1a724728cbfc54960069743d4a632c5c32888129dbcb5bc68f6301-runc.ALKW91.mount: Deactivated successfully. Sep 13 00:05:58.793273 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:05:58.802946 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:05:58.836638 containerd[1583]: time="2025-09-13T00:05:58.836169554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q58pd,Uid:846a6438-2a5f-4b94-a3b1-b944d22d397b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae87f96edf1a724728cbfc54960069743d4a632c5c32888129dbcb5bc68f6301\"" Sep 13 00:05:58.839805 kubelet[2765]: E0913 00:05:58.839765 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:58.843545 containerd[1583]: time="2025-09-13T00:05:58.843407177Z" level=info msg="CreateContainer within sandbox \"ae87f96edf1a724728cbfc54960069743d4a632c5c32888129dbcb5bc68f6301\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:05:58.858344 containerd[1583]: time="2025-09-13T00:05:58.858305927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zxczx,Uid:e1acdd13-45f5-4bf0-ac84-06d4479f8bcc,Namespace:kube-system,Attempt:0,} returns sandbox id \"01d9afc02ce41903dc8301016411bd65864e3710bf770ace1a95f63a8b16d7ad\"" Sep 13 00:05:58.859189 kubelet[2765]: E0913 00:05:58.859144 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:58.860758 containerd[1583]: time="2025-09-13T00:05:58.860720332Z" level=info msg="CreateContainer within sandbox \"01d9afc02ce41903dc8301016411bd65864e3710bf770ace1a95f63a8b16d7ad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:06:00.160591 containerd[1583]: time="2025-09-13T00:06:00.160533918Z" level=info msg="CreateContainer within sandbox \"ae87f96edf1a724728cbfc54960069743d4a632c5c32888129dbcb5bc68f6301\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"906652aa4e368267f274f5c9b7139401285fb15cd4ae0ff775a2e73651398c0c\"" Sep 13 00:06:00.161224 containerd[1583]: time="2025-09-13T00:06:00.161197976Z" level=info msg="StartContainer for \"906652aa4e368267f274f5c9b7139401285fb15cd4ae0ff775a2e73651398c0c\"" Sep 13 00:06:00.533672 containerd[1583]: time="2025-09-13T00:06:00.533181282Z" level=info msg="StartContainer for \"906652aa4e368267f274f5c9b7139401285fb15cd4ae0ff775a2e73651398c0c\" returns successfully" Sep 13 00:06:00.533672 containerd[1583]: time="2025-09-13T00:06:00.533235787Z" level=info msg="CreateContainer within sandbox \"01d9afc02ce41903dc8301016411bd65864e3710bf770ace1a95f63a8b16d7ad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9274c1dfbfb0b8427f15f2bb78f435ab4b4f208003b37a348343369d3e5aae06\"" Sep 13 00:06:00.534501 containerd[1583]: time="2025-09-13T00:06:00.534442329Z" level=info msg="StartContainer for \"9274c1dfbfb0b8427f15f2bb78f435ab4b4f208003b37a348343369d3e5aae06\"" Sep 13 00:06:00.790070 containerd[1583]: time="2025-09-13T00:06:00.789920966Z" level=info msg="StartContainer for 
\"9274c1dfbfb0b8427f15f2bb78f435ab4b4f208003b37a348343369d3e5aae06\" returns successfully" Sep 13 00:06:01.540515 kubelet[2765]: E0913 00:06:01.540079 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:01.540515 kubelet[2765]: E0913 00:06:01.540107 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:01.630533 kubelet[2765]: I0913 00:06:01.630073 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-zxczx" podStartSLOduration=54.630039708 podStartE2EDuration="54.630039708s" podCreationTimestamp="2025-09-13 00:05:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:01.62991574 +0000 UTC m=+59.482816616" watchObservedRunningTime="2025-09-13 00:06:01.630039708 +0000 UTC m=+59.482940584" Sep 13 00:06:02.055335 sudo[1790]: pam_unix(sudo:session): session closed for user root Sep 13 00:06:02.059484 sshd[1786]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:02.064981 systemd[1]: sshd@8-10.0.0.70:22-10.0.0.1:42776.service: Deactivated successfully. Sep 13 00:06:02.067887 systemd-logind[1563]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:06:02.067918 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:06:02.069720 systemd-logind[1563]: Removed session 9. Sep 13 00:06:02.542843 kubelet[2765]: E0913 00:06:02.541937 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:02.542843 kubelet[2765]: E0913 00:06:02.542146 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:02.601859 kubelet[2765]: I0913 00:06:02.601386 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-q58pd" podStartSLOduration=54.601363285 podStartE2EDuration="54.601363285s" podCreationTimestamp="2025-09-13 00:05:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:01.907323865 +0000 UTC m=+59.760224751" watchObservedRunningTime="2025-09-13 00:06:02.601363285 +0000 UTC m=+60.454264161" Sep 13 00:06:03.543291 kubelet[2765]: E0913 00:06:03.543253 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:03.543291 kubelet[2765]: E0913 00:06:03.543253 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:04.545502 kubelet[2765]: E0913 00:06:04.545462 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:04.546107 kubelet[2765]: E0913 00:06:04.545627 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:14.235763 kubelet[2765]: E0913 00:06:14.235545 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:27.234822 kubelet[2765]: E0913 00:06:27.234626 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:33.235237 kubelet[2765]: E0913 00:06:33.235192 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:33.744968 update_engine[1565]: I20250913 00:06:33.744851 1565 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 13 00:06:33.744968 update_engine[1565]: I20250913 00:06:33.744940 1565 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 13 00:06:33.745578 update_engine[1565]: I20250913 00:06:33.745329 1565 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 13 00:06:33.756295 update_engine[1565]: I20250913 00:06:33.756239 1565 omaha_request_params.cc:62] Current group set to lts Sep 13 00:06:33.757687 update_engine[1565]: I20250913 00:06:33.757649 1565 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 13 00:06:33.757687 update_engine[1565]: I20250913 00:06:33.757668 1565 update_attempter.cc:643] Scheduling an action processor start. Sep 13 00:06:33.757785 update_engine[1565]: I20250913 00:06:33.757692 1565 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 13 00:06:33.757785 update_engine[1565]: I20250913 00:06:33.757756 1565 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 13 00:06:33.757886 update_engine[1565]: I20250913 00:06:33.757862 1565 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 13 00:06:33.757886 update_engine[1565]: I20250913 00:06:33.757875 1565 omaha_request_action.cc:272] Request: Sep 13 00:06:33.757886 update_engine[1565]: Sep 13 00:06:33.757886 update_engine[1565]: Sep 13 00:06:33.757886 update_engine[1565]: Sep 13 00:06:33.757886 update_engine[1565]: Sep 13 00:06:33.757886 update_engine[1565]: Sep 13 00:06:33.757886 update_engine[1565]: Sep 13 00:06:33.757886 update_engine[1565]: Sep 13 00:06:33.757886 update_engine[1565]: Sep 13 00:06:33.758184 update_engine[1565]: I20250913 00:06:33.757886 1565 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:06:33.758490 locksmithd[1617]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 13 00:06:33.761033 update_engine[1565]: I20250913 00:06:33.760998 1565 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:06:33.761354 update_engine[1565]: I20250913 00:06:33.761322 1565 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 13 00:06:33.769515 update_engine[1565]: E20250913 00:06:33.769407 1565 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:06:33.769584 update_engine[1565]: I20250913 00:06:33.769538 1565 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 13 00:06:35.235219 kubelet[2765]: E0913 00:06:35.235143 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:38.532826 systemd[1]: Started sshd@9-10.0.0.70:22-10.0.0.1:43066.service - OpenSSH per-connection server daemon (10.0.0.1:43066). Sep 13 00:06:38.694284 sshd[4285]: Accepted publickey for core from 10.0.0.1 port 43066 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:06:38.696186 sshd[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:38.700700 systemd-logind[1563]: New session 10 of user core. Sep 13 00:06:38.710933 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:06:38.993222 sshd[4285]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:38.998461 systemd[1]: sshd@9-10.0.0.70:22-10.0.0.1:43066.service: Deactivated successfully. Sep 13 00:06:39.001278 systemd-logind[1563]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:06:39.001426 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:06:39.002712 systemd-logind[1563]: Removed session 10. Sep 13 00:06:43.736467 update_engine[1565]: I20250913 00:06:43.736337 1565 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:06:43.737147 update_engine[1565]: I20250913 00:06:43.736797 1565 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:06:43.737147 update_engine[1565]: I20250913 00:06:43.737105 1565 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 13 00:06:43.745729 update_engine[1565]: E20250913 00:06:43.745684 1565 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:06:43.745797 update_engine[1565]: I20250913 00:06:43.745747 1565 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 13 00:06:44.004851 systemd[1]: Started sshd@10-10.0.0.70:22-10.0.0.1:32888.service - OpenSSH per-connection server daemon (10.0.0.1:32888). Sep 13 00:06:44.039945 sshd[4309]: Accepted publickey for core from 10.0.0.1 port 32888 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:06:44.041758 sshd[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:44.046148 systemd-logind[1563]: New session 11 of user core. Sep 13 00:06:44.056864 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 00:06:44.232567 sshd[4309]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:44.237191 systemd[1]: sshd@10-10.0.0.70:22-10.0.0.1:32888.service: Deactivated successfully. Sep 13 00:06:44.240510 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:06:44.241359 systemd-logind[1563]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:06:44.242237 systemd-logind[1563]: Removed session 11. Sep 13 00:06:49.240813 systemd[1]: Started sshd@11-10.0.0.70:22-10.0.0.1:32894.service - OpenSSH per-connection server daemon (10.0.0.1:32894). 
Sep 13 00:06:49.276863 sshd[4325]: Accepted publickey for core from 10.0.0.1 port 32894 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:06:49.278743 sshd[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:49.283279 systemd-logind[1563]: New session 12 of user core. Sep 13 00:06:49.300041 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:06:49.410705 sshd[4325]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:49.415054 systemd[1]: sshd@11-10.0.0.70:22-10.0.0.1:32894.service: Deactivated successfully. Sep 13 00:06:49.417520 systemd-logind[1563]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:06:49.417562 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:06:49.418716 systemd-logind[1563]: Removed session 12. Sep 13 00:06:53.736301 update_engine[1565]: I20250913 00:06:53.736158 1565 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:06:53.736946 update_engine[1565]: I20250913 00:06:53.736557 1565 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:06:53.736946 update_engine[1565]: I20250913 00:06:53.736842 1565 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 13 00:06:53.744677 update_engine[1565]: E20250913 00:06:53.744613 1565 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:06:53.744677 update_engine[1565]: I20250913 00:06:53.744669 1565 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 13 00:06:54.431103 systemd[1]: Started sshd@12-10.0.0.70:22-10.0.0.1:47910.service - OpenSSH per-connection server daemon (10.0.0.1:47910). Sep 13 00:06:54.467163 sshd[4342]: Accepted publickey for core from 10.0.0.1 port 47910 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:06:54.469111 sshd[4342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:54.474127 systemd-logind[1563]: New session 13 of user core. Sep 13 00:06:54.483928 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:06:54.601964 sshd[4342]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:54.606845 systemd[1]: sshd@12-10.0.0.70:22-10.0.0.1:47910.service: Deactivated successfully. Sep 13 00:06:54.610284 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:06:54.611315 systemd-logind[1563]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:06:54.612508 systemd-logind[1563]: Removed session 13. Sep 13 00:06:59.620921 systemd[1]: Started sshd@13-10.0.0.70:22-10.0.0.1:47914.service - OpenSSH per-connection server daemon (10.0.0.1:47914). Sep 13 00:06:59.656328 sshd[4359]: Accepted publickey for core from 10.0.0.1 port 47914 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:06:59.658207 sshd[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:59.663670 systemd-logind[1563]: New session 14 of user core. Sep 13 00:06:59.673992 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:06:59.822486 sshd[4359]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:59.827547 systemd[1]: sshd@13-10.0.0.70:22-10.0.0.1:47914.service: Deactivated successfully. Sep 13 00:06:59.830171 systemd-logind[1563]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:06:59.830322 systemd[1]: session-14.scope: Deactivated successfully. 
Sep 13 00:06:59.831914 systemd-logind[1563]: Removed session 14. Sep 13 00:07:03.736624 update_engine[1565]: I20250913 00:07:03.736515 1565 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:07:03.737217 update_engine[1565]: I20250913 00:07:03.736918 1565 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:07:03.737217 update_engine[1565]: I20250913 00:07:03.737157 1565 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 13 00:07:03.755360 update_engine[1565]: E20250913 00:07:03.755307 1565 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:07:03.755424 update_engine[1565]: I20250913 00:07:03.755371 1565 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 13 00:07:03.755424 update_engine[1565]: I20250913 00:07:03.755390 1565 omaha_request_action.cc:617] Omaha request response: Sep 13 00:07:03.755525 update_engine[1565]: E20250913 00:07:03.755501 1565 omaha_request_action.cc:636] Omaha request network transfer failed. Sep 13 00:07:03.755556 update_engine[1565]: I20250913 00:07:03.755532 1565 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 13 00:07:03.755556 update_engine[1565]: I20250913 00:07:03.755539 1565 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 00:07:03.755556 update_engine[1565]: I20250913 00:07:03.755546 1565 update_attempter.cc:306] Processing Done. Sep 13 00:07:03.755639 update_engine[1565]: E20250913 00:07:03.755563 1565 update_attempter.cc:619] Update failed. Sep 13 00:07:03.755639 update_engine[1565]: I20250913 00:07:03.755573 1565 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 13 00:07:03.755639 update_engine[1565]: I20250913 00:07:03.755579 1565 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 13 00:07:03.755639 update_engine[1565]: I20250913 00:07:03.755586 1565 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 13 00:07:03.755728 update_engine[1565]: I20250913 00:07:03.755679 1565 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 13 00:07:03.755728 update_engine[1565]: I20250913 00:07:03.755705 1565 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 13 00:07:03.755728 update_engine[1565]: I20250913 00:07:03.755715 1565 omaha_request_action.cc:272] Request: Sep 13 00:07:03.755728 update_engine[1565]: Sep 13 00:07:03.755728 update_engine[1565]: Sep 13 00:07:03.755728 update_engine[1565]: Sep 13 00:07:03.755728 update_engine[1565]: Sep 13 00:07:03.755728 update_engine[1565]: Sep 13 00:07:03.755728 update_engine[1565]: Sep 13 00:07:03.755728 update_engine[1565]: I20250913 00:07:03.755723 1565 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:07:03.755939 update_engine[1565]: I20250913 00:07:03.755913 1565 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:07:03.756111 locksmithd[1617]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 13 00:07:03.756499 update_engine[1565]: I20250913 00:07:03.756103 1565 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 13 00:07:03.771483 update_engine[1565]: E20250913 00:07:03.771435 1565 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:07:03.771548 update_engine[1565]: I20250913 00:07:03.771484 1565 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 13 00:07:03.771548 update_engine[1565]: I20250913 00:07:03.771494 1565 omaha_request_action.cc:617] Omaha request response: Sep 13 00:07:03.771548 update_engine[1565]: I20250913 00:07:03.771502 1565 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 00:07:03.771548 update_engine[1565]: I20250913 00:07:03.771509 1565 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 00:07:03.771548 update_engine[1565]: I20250913 00:07:03.771515 1565 update_attempter.cc:306] Processing Done. Sep 13 00:07:03.771548 update_engine[1565]: I20250913 00:07:03.771523 1565 update_attempter.cc:310] Error event sent. Sep 13 00:07:03.771548 update_engine[1565]: I20250913 00:07:03.771538 1565 update_check_scheduler.cc:74] Next update check in 43m41s Sep 13 00:07:03.771894 locksmithd[1617]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 13 00:07:04.235767 kubelet[2765]: E0913 00:07:04.235698 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:04.831829 systemd[1]: Started sshd@14-10.0.0.70:22-10.0.0.1:55080.service - OpenSSH per-connection server daemon (10.0.0.1:55080). Sep 13 00:07:04.866559 sshd[4377]: Accepted publickey for core from 10.0.0.1 port 55080 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:07:04.882555 sshd[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:04.887001 systemd-logind[1563]: New session 15 of user core. Sep 13 00:07:04.896857 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:07:05.311357 sshd[4377]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:05.316175 systemd[1]: sshd@14-10.0.0.70:22-10.0.0.1:55080.service: Deactivated successfully. Sep 13 00:07:05.318969 systemd-logind[1563]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:07:05.319038 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:07:05.320660 systemd-logind[1563]: Removed session 15. Sep 13 00:07:10.327934 systemd[1]: Started sshd@15-10.0.0.70:22-10.0.0.1:40414.service - OpenSSH per-connection server daemon (10.0.0.1:40414). Sep 13 00:07:10.364441 sshd[4393]: Accepted publickey for core from 10.0.0.1 port 40414 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:07:10.366400 sshd[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:10.372014 systemd-logind[1563]: New session 16 of user core. Sep 13 00:07:10.382964 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:07:10.567755 sshd[4393]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:10.571900 systemd[1]: sshd@15-10.0.0.70:22-10.0.0.1:40414.service: Deactivated successfully. Sep 13 00:07:10.574453 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:07:10.575255 systemd-logind[1563]: Session 16 logged out. Waiting for processes to exit. 
Sep 13 00:07:10.576130 systemd-logind[1563]: Removed session 16. Sep 13 00:07:15.234981 kubelet[2765]: E0913 00:07:15.234897 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:15.586136 systemd[1]: Started sshd@16-10.0.0.70:22-10.0.0.1:40430.service - OpenSSH per-connection server daemon (10.0.0.1:40430). Sep 13 00:07:15.621545 sshd[4413]: Accepted publickey for core from 10.0.0.1 port 40430 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:07:15.623694 sshd[4413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:15.629432 systemd-logind[1563]: New session 17 of user core. Sep 13 00:07:15.639978 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:07:15.758194 sshd[4413]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:15.773026 systemd[1]: Started sshd@17-10.0.0.70:22-10.0.0.1:40432.service - OpenSSH per-connection server daemon (10.0.0.1:40432). Sep 13 00:07:15.773781 systemd[1]: sshd@16-10.0.0.70:22-10.0.0.1:40430.service: Deactivated successfully. Sep 13 00:07:15.778390 systemd-logind[1563]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:07:15.778994 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:07:15.780427 systemd-logind[1563]: Removed session 17. Sep 13 00:07:15.808093 sshd[4426]: Accepted publickey for core from 10.0.0.1 port 40432 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:07:15.809969 sshd[4426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:15.814133 systemd-logind[1563]: New session 18 of user core. Sep 13 00:07:15.828030 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:07:16.156348 sshd[4426]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:16.164830 systemd[1]: Started sshd@18-10.0.0.70:22-10.0.0.1:40446.service - OpenSSH per-connection server daemon (10.0.0.1:40446). Sep 13 00:07:16.165343 systemd[1]: sshd@17-10.0.0.70:22-10.0.0.1:40432.service: Deactivated successfully. Sep 13 00:07:16.167897 systemd-logind[1563]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:07:16.169080 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:07:16.171333 systemd-logind[1563]: Removed session 18. Sep 13 00:07:16.200176 sshd[4440]: Accepted publickey for core from 10.0.0.1 port 40446 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:07:16.201975 sshd[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:16.206579 systemd-logind[1563]: New session 19 of user core. Sep 13 00:07:16.216934 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:07:16.378011 sshd[4440]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:16.382064 systemd[1]: sshd@18-10.0.0.70:22-10.0.0.1:40446.service: Deactivated successfully. Sep 13 00:07:16.384910 systemd-logind[1563]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:07:16.384979 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:07:16.386305 systemd-logind[1563]: Removed session 19. 
Sep 13 00:07:20.234522 kubelet[2765]: E0913 00:07:20.234471 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:21.396104 systemd[1]: Started sshd@19-10.0.0.70:22-10.0.0.1:52686.service - OpenSSH per-connection server daemon (10.0.0.1:52686). Sep 13 00:07:21.432235 sshd[4459]: Accepted publickey for core from 10.0.0.1 port 52686 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:07:21.434126 sshd[4459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:21.438728 systemd-logind[1563]: New session 20 of user core. Sep 13 00:07:21.448987 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 00:07:21.570937 sshd[4459]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:21.575887 systemd[1]: sshd@19-10.0.0.70:22-10.0.0.1:52686.service: Deactivated successfully. Sep 13 00:07:21.578471 systemd-logind[1563]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:07:21.578530 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:07:21.579898 systemd-logind[1563]: Removed session 20. Sep 13 00:07:26.235428 kubelet[2765]: E0913 00:07:26.235354 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:26.590066 systemd[1]: Started sshd@20-10.0.0.70:22-10.0.0.1:52688.service - OpenSSH per-connection server daemon (10.0.0.1:52688). Sep 13 00:07:26.624258 sshd[4474]: Accepted publickey for core from 10.0.0.1 port 52688 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:07:26.626214 sshd[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:26.630757 systemd-logind[1563]: New session 21 of user core. Sep 13 00:07:26.638184 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 00:07:26.760553 sshd[4474]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:26.764789 systemd[1]: sshd@20-10.0.0.70:22-10.0.0.1:52688.service: Deactivated successfully. Sep 13 00:07:26.767248 systemd-logind[1563]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:07:26.767318 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:07:26.768360 systemd-logind[1563]: Removed session 21. Sep 13 00:07:31.769830 systemd[1]: Started sshd@21-10.0.0.70:22-10.0.0.1:48300.service - OpenSSH per-connection server daemon (10.0.0.1:48300). Sep 13 00:07:31.803441 sshd[4490]: Accepted publickey for core from 10.0.0.1 port 48300 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:07:31.805295 sshd[4490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:31.809375 systemd-logind[1563]: New session 22 of user core. Sep 13 00:07:31.818894 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 13 00:07:31.929477 sshd[4490]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:31.936967 systemd[1]: Started sshd@22-10.0.0.70:22-10.0.0.1:48308.service - OpenSSH per-connection server daemon (10.0.0.1:48308). Sep 13 00:07:31.937648 systemd[1]: sshd@21-10.0.0.70:22-10.0.0.1:48300.service: Deactivated successfully. Sep 13 00:07:31.940345 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:07:31.942643 systemd-logind[1563]: Session 22 logged out. 
Waiting for processes to exit. Sep 13 00:07:31.944860 systemd-logind[1563]: Removed session 22. Sep 13 00:07:31.975235 sshd[4503]: Accepted publickey for core from 10.0.0.1 port 48308 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:07:31.977380 sshd[4503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:31.982853 systemd-logind[1563]: New session 23 of user core. Sep 13 00:07:31.992889 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 13 00:07:32.303334 sshd[4503]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:32.310229 systemd[1]: Started sshd@23-10.0.0.70:22-10.0.0.1:48324.service - OpenSSH per-connection server daemon (10.0.0.1:48324). Sep 13 00:07:32.310856 systemd[1]: sshd@22-10.0.0.70:22-10.0.0.1:48308.service: Deactivated successfully. Sep 13 00:07:32.315384 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:07:32.317235 systemd-logind[1563]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:07:32.318871 systemd-logind[1563]: Removed session 23. Sep 13 00:07:32.358046 sshd[4516]: Accepted publickey for core from 10.0.0.1 port 48324 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:07:32.360573 sshd[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:32.365809 systemd-logind[1563]: New session 24 of user core. Sep 13 00:07:32.373869 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 13 00:07:33.234745 kubelet[2765]: E0913 00:07:33.234666 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:33.657839 sshd[4516]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:33.667569 systemd[1]: Started sshd@24-10.0.0.70:22-10.0.0.1:48334.service - OpenSSH per-connection server daemon (10.0.0.1:48334). Sep 13 00:07:33.668261 systemd[1]: sshd@23-10.0.0.70:22-10.0.0.1:48324.service: Deactivated successfully. Sep 13 00:07:33.674313 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:07:33.678327 systemd-logind[1563]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:07:33.681591 systemd-logind[1563]: Removed session 24. Sep 13 00:07:33.712629 sshd[4535]: Accepted publickey for core from 10.0.0.1 port 48334 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:07:33.714941 sshd[4535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:33.720949 systemd-logind[1563]: New session 25 of user core. Sep 13 00:07:33.736247 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 13 00:07:34.040485 sshd[4535]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:34.047938 systemd[1]: Started sshd@25-10.0.0.70:22-10.0.0.1:48348.service - OpenSSH per-connection server daemon (10.0.0.1:48348). Sep 13 00:07:34.048664 systemd[1]: sshd@24-10.0.0.70:22-10.0.0.1:48334.service: Deactivated successfully. Sep 13 00:07:34.052625 systemd-logind[1563]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:07:34.054643 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 00:07:34.056552 systemd-logind[1563]: Removed session 25. 
Sep 13 00:07:34.090027 sshd[4551]: Accepted publickey for core from 10.0.0.1 port 48348 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:07:34.092360 sshd[4551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:34.098936 systemd-logind[1563]: New session 26 of user core. Sep 13 00:07:34.112052 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 13 00:07:34.236676 sshd[4551]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:34.241667 systemd[1]: sshd@25-10.0.0.70:22-10.0.0.1:48348.service: Deactivated successfully. Sep 13 00:07:34.244673 systemd-logind[1563]: Session 26 logged out. Waiting for processes to exit. Sep 13 00:07:34.244707 systemd[1]: session-26.scope: Deactivated successfully. Sep 13 00:07:34.246734 systemd-logind[1563]: Removed session 26. Sep 13 00:07:38.235569 kubelet[2765]: E0913 00:07:38.235519 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:39.251201 systemd[1]: Started sshd@26-10.0.0.70:22-10.0.0.1:48358.service - OpenSSH per-connection server daemon (10.0.0.1:48358). Sep 13 00:07:39.288202 sshd[4571]: Accepted publickey for core from 10.0.0.1 port 48358 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:07:39.290009 sshd[4571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:39.297495 systemd-logind[1563]: New session 27 of user core. Sep 13 00:07:39.307882 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 13 00:07:39.435874 sshd[4571]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:39.440000 systemd[1]: sshd@26-10.0.0.70:22-10.0.0.1:48358.service: Deactivated successfully. Sep 13 00:07:39.442869 systemd-logind[1563]: Session 27 logged out. Waiting for processes to exit. Sep 13 00:07:39.442976 systemd[1]: session-27.scope: Deactivated successfully. Sep 13 00:07:39.444541 systemd-logind[1563]: Removed session 27. Sep 13 00:07:41.235506 kubelet[2765]: E0913 00:07:41.235451 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:44.459929 systemd[1]: Started sshd@27-10.0.0.70:22-10.0.0.1:48872.service - OpenSSH per-connection server daemon (10.0.0.1:48872). Sep 13 00:07:44.494087 sshd[4589]: Accepted publickey for core from 10.0.0.1 port 48872 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:07:44.495827 sshd[4589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:44.500505 systemd-logind[1563]: New session 28 of user core. Sep 13 00:07:44.520031 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 13 00:07:44.639528 sshd[4589]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:44.644785 systemd[1]: sshd@27-10.0.0.70:22-10.0.0.1:48872.service: Deactivated successfully. Sep 13 00:07:44.648237 systemd-logind[1563]: Session 28 logged out. Waiting for processes to exit. Sep 13 00:07:44.648373 systemd[1]: session-28.scope: Deactivated successfully. Sep 13 00:07:44.650177 systemd-logind[1563]: Removed session 28. Sep 13 00:07:49.656951 systemd[1]: Started sshd@28-10.0.0.70:22-10.0.0.1:48874.service - OpenSSH per-connection server daemon (10.0.0.1:48874). 
Sep 13 00:07:49.696194 sshd[4608]: Accepted publickey for core from 10.0.0.1 port 48874 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:07:49.705265 sshd[4608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:49.709640 systemd-logind[1563]: New session 29 of user core. Sep 13 00:07:49.727090 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 13 00:07:49.842075 sshd[4608]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:49.846807 systemd[1]: sshd@28-10.0.0.70:22-10.0.0.1:48874.service: Deactivated successfully. Sep 13 00:07:49.850285 systemd[1]: session-29.scope: Deactivated successfully. Sep 13 00:07:49.850561 systemd-logind[1563]: Session 29 logged out. Waiting for processes to exit. Sep 13 00:07:49.852750 systemd-logind[1563]: Removed session 29. Sep 13 00:07:54.864156 systemd[1]: Started sshd@29-10.0.0.70:22-10.0.0.1:41014.service - OpenSSH per-connection server daemon (10.0.0.1:41014). Sep 13 00:07:54.902251 sshd[4623]: Accepted publickey for core from 10.0.0.1 port 41014 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:07:54.904412 sshd[4623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:54.909704 systemd-logind[1563]: New session 30 of user core. Sep 13 00:07:54.918015 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 13 00:07:55.035402 sshd[4623]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:55.040775 systemd[1]: sshd@29-10.0.0.70:22-10.0.0.1:41014.service: Deactivated successfully. Sep 13 00:07:55.043710 systemd-logind[1563]: Session 30 logged out. Waiting for processes to exit. Sep 13 00:07:55.043905 systemd[1]: session-30.scope: Deactivated successfully. Sep 13 00:07:55.045458 systemd-logind[1563]: Removed session 30. Sep 13 00:07:57.234926 kubelet[2765]: E0913 00:07:57.234717 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:00.049881 systemd[1]: Started sshd@30-10.0.0.70:22-10.0.0.1:44698.service - OpenSSH per-connection server daemon (10.0.0.1:44698). Sep 13 00:08:00.085054 sshd[4638]: Accepted publickey for core from 10.0.0.1 port 44698 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:08:00.086792 sshd[4638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:00.091164 systemd-logind[1563]: New session 31 of user core. Sep 13 00:08:00.104094 systemd[1]: Started session-31.scope - Session 31 of User core. Sep 13 00:08:00.227013 sshd[4638]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:00.236590 systemd[1]: Started sshd@31-10.0.0.70:22-10.0.0.1:44706.service - OpenSSH per-connection server daemon (10.0.0.1:44706). Sep 13 00:08:00.237354 systemd[1]: sshd@30-10.0.0.70:22-10.0.0.1:44698.service: Deactivated successfully. Sep 13 00:08:00.241246 systemd[1]: session-31.scope: Deactivated successfully. Sep 13 00:08:00.244207 systemd-logind[1563]: Session 31 logged out. Waiting for processes to exit. Sep 13 00:08:00.246351 systemd-logind[1563]: Removed session 31. 
Sep 13 00:08:00.275515 sshd[4650]: Accepted publickey for core from 10.0.0.1 port 44706 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:08:00.277842 sshd[4650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:00.282845 systemd-logind[1563]: New session 32 of user core. Sep 13 00:08:00.293050 systemd[1]: Started session-32.scope - Session 32 of User core. Sep 13 00:08:01.680775 containerd[1583]: time="2025-09-13T00:08:01.680649119Z" level=info msg="StopContainer for \"e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4\" with timeout 30 (s)" Sep 13 00:08:01.682133 containerd[1583]: time="2025-09-13T00:08:01.682104992Z" level=info msg="Stop container \"e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4\" with signal terminated" Sep 13 00:08:01.741465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4-rootfs.mount: Deactivated successfully. Sep 13 00:08:01.743996 containerd[1583]: time="2025-09-13T00:08:01.743906141Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:08:01.752836 containerd[1583]: time="2025-09-13T00:08:01.752692188Z" level=info msg="StopContainer for \"716b2880babd39da05a92342d3aa28fce3dae425c4554dcb83404802e3ebc7a8\" with timeout 2 (s)" Sep 13 00:08:01.753098 containerd[1583]: time="2025-09-13T00:08:01.753012056Z" level=info msg="Stop container \"716b2880babd39da05a92342d3aa28fce3dae425c4554dcb83404802e3ebc7a8\" with signal terminated" Sep 13 00:08:01.763190 systemd-networkd[1250]: lxc_health: Link DOWN Sep 13 00:08:01.763708 systemd-networkd[1250]: lxc_health: Lost carrier Sep 13 00:08:01.783081 containerd[1583]: time="2025-09-13T00:08:01.782970161Z" level=info msg="shim disconnected" id=e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4 namespace=k8s.io Sep 13 00:08:01.783081 containerd[1583]: time="2025-09-13T00:08:01.783050133Z" level=warning msg="cleaning up after shim disconnected" id=e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4 namespace=k8s.io Sep 13 00:08:01.783081 containerd[1583]: time="2025-09-13T00:08:01.783069620Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:08:01.804979 containerd[1583]: time="2025-09-13T00:08:01.804900920Z" level=info msg="StopContainer for \"e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4\" returns successfully" Sep 13 00:08:01.809741 containerd[1583]: time="2025-09-13T00:08:01.809690230Z" level=info msg="StopPodSandbox for \"7f0c7d2ce0ef667c889e00c465488e52a1e07718e5a0356b32f10802b4da8c88\"" Sep 13 00:08:01.809875 containerd[1583]: time="2025-09-13T00:08:01.809770843Z" level=info msg="Container to stop \"e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:01.813382 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f0c7d2ce0ef667c889e00c465488e52a1e07718e5a0356b32f10802b4da8c88-shm.mount: Deactivated successfully. Sep 13 00:08:01.820416 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-716b2880babd39da05a92342d3aa28fce3dae425c4554dcb83404802e3ebc7a8-rootfs.mount: Deactivated successfully. 
Sep 13 00:08:01.839367 containerd[1583]: time="2025-09-13T00:08:01.839233448Z" level=info msg="shim disconnected" id=716b2880babd39da05a92342d3aa28fce3dae425c4554dcb83404802e3ebc7a8 namespace=k8s.io Sep 13 00:08:01.839367 containerd[1583]: time="2025-09-13T00:08:01.839361792Z" level=warning msg="cleaning up after shim disconnected" id=716b2880babd39da05a92342d3aa28fce3dae425c4554dcb83404802e3ebc7a8 namespace=k8s.io Sep 13 00:08:01.839367 containerd[1583]: time="2025-09-13T00:08:01.839374285Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:08:01.850789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f0c7d2ce0ef667c889e00c465488e52a1e07718e5a0356b32f10802b4da8c88-rootfs.mount: Deactivated successfully. Sep 13 00:08:01.923411 containerd[1583]: time="2025-09-13T00:08:01.923319767Z" level=info msg="shim disconnected" id=7f0c7d2ce0ef667c889e00c465488e52a1e07718e5a0356b32f10802b4da8c88 namespace=k8s.io Sep 13 00:08:01.923411 containerd[1583]: time="2025-09-13T00:08:01.923408215Z" level=warning msg="cleaning up after shim disconnected" id=7f0c7d2ce0ef667c889e00c465488e52a1e07718e5a0356b32f10802b4da8c88 namespace=k8s.io Sep 13 00:08:01.923411 containerd[1583]: time="2025-09-13T00:08:01.923422944Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:08:01.933314 containerd[1583]: time="2025-09-13T00:08:01.933159316Z" level=info msg="StopContainer for \"716b2880babd39da05a92342d3aa28fce3dae425c4554dcb83404802e3ebc7a8\" returns successfully" Sep 13 00:08:01.934468 containerd[1583]: time="2025-09-13T00:08:01.934432001Z" level=info msg="StopPodSandbox for \"2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892\"" Sep 13 00:08:01.934549 containerd[1583]: time="2025-09-13T00:08:01.934486484Z" level=info msg="Container to stop \"a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:01.934549 containerd[1583]: time="2025-09-13T00:08:01.934506712Z" level=info msg="Container to stop \"f8f51c35ccf285e1f6cc55c95f22f3fcee1a2c3d90f67dc66162ecde23771e80\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:01.934549 containerd[1583]: time="2025-09-13T00:08:01.934520068Z" level=info msg="Container to stop \"4639952cb5b1eb8e29a8be30b295150c90959aa4236ba1541385c7b72d7fa4b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:01.934549 containerd[1583]: time="2025-09-13T00:08:01.934533524Z" level=info msg="Container to stop \"0c413de3db3796ff3a5d3421571d18924e6da8a24f053dfe63230f9cded45f6e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:01.934549 containerd[1583]: time="2025-09-13T00:08:01.934546788Z" level=info msg="Container to stop \"716b2880babd39da05a92342d3aa28fce3dae425c4554dcb83404802e3ebc7a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:01.946558 containerd[1583]: time="2025-09-13T00:08:01.946499918Z" level=info msg="TearDown network for sandbox \"7f0c7d2ce0ef667c889e00c465488e52a1e07718e5a0356b32f10802b4da8c88\" successfully" Sep 13 00:08:01.946558 containerd[1583]: time="2025-09-13T00:08:01.946553590Z" level=info msg="StopPodSandbox for \"7f0c7d2ce0ef667c889e00c465488e52a1e07718e5a0356b32f10802b4da8c88\" returns successfully" Sep 13 00:08:02.006969 containerd[1583]: time="2025-09-13T00:08:02.006824186Z" level=info msg="shim disconnected" 
id=2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892 namespace=k8s.io Sep 13 00:08:02.006969 containerd[1583]: time="2025-09-13T00:08:02.006895201Z" level=warning msg="cleaning up after shim disconnected" id=2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892 namespace=k8s.io Sep 13 00:08:02.006969 containerd[1583]: time="2025-09-13T00:08:02.006906573Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:08:02.026953 containerd[1583]: time="2025-09-13T00:08:02.026889404Z" level=info msg="TearDown network for sandbox \"2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892\" successfully" Sep 13 00:08:02.026953 containerd[1583]: time="2025-09-13T00:08:02.026937174Z" level=info msg="StopPodSandbox for \"2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892\" returns successfully" Sep 13 00:08:02.091112 kubelet[2765]: I0913 00:08:02.091013 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5j4n\" (UniqueName: \"kubernetes.io/projected/5b569177-58c3-45bc-aa33-16e435816c8f-kube-api-access-x5j4n\") pod \"5b569177-58c3-45bc-aa33-16e435816c8f\" (UID: \"5b569177-58c3-45bc-aa33-16e435816c8f\") " Sep 13 00:08:02.091774 kubelet[2765]: I0913 00:08:02.091150 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-xtables-lock\") pod \"283ba863-3c4a-4b64-8c59-5547b598240b\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " Sep 13 00:08:02.091774 kubelet[2765]: I0913 00:08:02.091177 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-lib-modules\") pod \"283ba863-3c4a-4b64-8c59-5547b598240b\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " Sep 13 00:08:02.091774 kubelet[2765]: I0913 00:08:02.091200 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/283ba863-3c4a-4b64-8c59-5547b598240b-hubble-tls\") pod \"283ba863-3c4a-4b64-8c59-5547b598240b\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " Sep 13 00:08:02.091774 kubelet[2765]: I0913 00:08:02.091223 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b569177-58c3-45bc-aa33-16e435816c8f-cilium-config-path\") pod \"5b569177-58c3-45bc-aa33-16e435816c8f\" (UID: \"5b569177-58c3-45bc-aa33-16e435816c8f\") " Sep 13 00:08:02.091774 kubelet[2765]: I0913 00:08:02.091245 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-cilium-run\") pod \"283ba863-3c4a-4b64-8c59-5547b598240b\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " Sep 13 00:08:02.091774 kubelet[2765]: I0913 00:08:02.091263 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/283ba863-3c4a-4b64-8c59-5547b598240b-cilium-config-path\") pod \"283ba863-3c4a-4b64-8c59-5547b598240b\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " Sep 13 00:08:02.092012 kubelet[2765]: I0913 00:08:02.091279 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-cilium-cgroup\") pod \"283ba863-3c4a-4b64-8c59-5547b598240b\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " Sep 13 00:08:02.092012 kubelet[2765]: I0913 00:08:02.091295 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-cni-path\") pod \"283ba863-3c4a-4b64-8c59-5547b598240b\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " Sep 13 00:08:02.092012 kubelet[2765]: I0913 00:08:02.091285 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "283ba863-3c4a-4b64-8c59-5547b598240b" (UID: "283ba863-3c4a-4b64-8c59-5547b598240b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:08:02.092012 kubelet[2765]: I0913 00:08:02.091314 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/283ba863-3c4a-4b64-8c59-5547b598240b-clustermesh-secrets\") pod \"283ba863-3c4a-4b64-8c59-5547b598240b\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " Sep 13 00:08:02.092012 kubelet[2765]: I0913 00:08:02.091416 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-bpf-maps\") pod \"283ba863-3c4a-4b64-8c59-5547b598240b\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " Sep 13 00:08:02.092012 kubelet[2765]: I0913 00:08:02.091445 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-hostproc\") pod \"283ba863-3c4a-4b64-8c59-5547b598240b\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " Sep 13 00:08:02.092237 kubelet[2765]: I0913 00:08:02.091469 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-host-proc-sys-kernel\") pod \"283ba863-3c4a-4b64-8c59-5547b598240b\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " Sep 13 00:08:02.092237 kubelet[2765]: I0913 00:08:02.091507 2765 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:02.092237 kubelet[2765]: I0913 00:08:02.091544 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "283ba863-3c4a-4b64-8c59-5547b598240b" (UID: "283ba863-3c4a-4b64-8c59-5547b598240b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:08:02.092237 kubelet[2765]: I0913 00:08:02.091572 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "283ba863-3c4a-4b64-8c59-5547b598240b" (UID: "283ba863-3c4a-4b64-8c59-5547b598240b"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:08:02.092237 kubelet[2765]: I0913 00:08:02.091623 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-hostproc" (OuterVolumeSpecName: "hostproc") pod "283ba863-3c4a-4b64-8c59-5547b598240b" (UID: "283ba863-3c4a-4b64-8c59-5547b598240b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:08:02.092388 kubelet[2765]: I0913 00:08:02.091650 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "283ba863-3c4a-4b64-8c59-5547b598240b" (UID: "283ba863-3c4a-4b64-8c59-5547b598240b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:08:02.092388 kubelet[2765]: I0913 00:08:02.091670 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "283ba863-3c4a-4b64-8c59-5547b598240b" (UID: "283ba863-3c4a-4b64-8c59-5547b598240b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:08:02.092388 kubelet[2765]: I0913 00:08:02.091991 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "283ba863-3c4a-4b64-8c59-5547b598240b" (UID: "283ba863-3c4a-4b64-8c59-5547b598240b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:08:02.092388 kubelet[2765]: I0913 00:08:02.092253 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-cni-path" (OuterVolumeSpecName: "cni-path") pod "283ba863-3c4a-4b64-8c59-5547b598240b" (UID: "283ba863-3c4a-4b64-8c59-5547b598240b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:08:02.096751 kubelet[2765]: I0913 00:08:02.096708 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/283ba863-3c4a-4b64-8c59-5547b598240b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "283ba863-3c4a-4b64-8c59-5547b598240b" (UID: "283ba863-3c4a-4b64-8c59-5547b598240b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:08:02.097170 kubelet[2765]: I0913 00:08:02.097122 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b569177-58c3-45bc-aa33-16e435816c8f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5b569177-58c3-45bc-aa33-16e435816c8f" (UID: "5b569177-58c3-45bc-aa33-16e435816c8f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:08:02.103681 kubelet[2765]: I0913 00:08:02.103572 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/283ba863-3c4a-4b64-8c59-5547b598240b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "283ba863-3c4a-4b64-8c59-5547b598240b" (UID: "283ba863-3c4a-4b64-8c59-5547b598240b"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:08:02.103941 kubelet[2765]: I0913 00:08:02.103861 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/283ba863-3c4a-4b64-8c59-5547b598240b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "283ba863-3c4a-4b64-8c59-5547b598240b" (UID: "283ba863-3c4a-4b64-8c59-5547b598240b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:08:02.104017 kubelet[2765]: I0913 00:08:02.103940 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b569177-58c3-45bc-aa33-16e435816c8f-kube-api-access-x5j4n" (OuterVolumeSpecName: "kube-api-access-x5j4n") pod "5b569177-58c3-45bc-aa33-16e435816c8f" (UID: "5b569177-58c3-45bc-aa33-16e435816c8f"). InnerVolumeSpecName "kube-api-access-x5j4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:08:02.192857 kubelet[2765]: I0913 00:08:02.192649 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wqxh\" (UniqueName: \"kubernetes.io/projected/283ba863-3c4a-4b64-8c59-5547b598240b-kube-api-access-2wqxh\") pod \"283ba863-3c4a-4b64-8c59-5547b598240b\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " Sep 13 00:08:02.192857 kubelet[2765]: I0913 00:08:02.192704 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-etc-cni-netd\") pod \"283ba863-3c4a-4b64-8c59-5547b598240b\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " Sep 13 00:08:02.192857 kubelet[2765]: I0913 00:08:02.192727 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-host-proc-sys-net\") pod \"283ba863-3c4a-4b64-8c59-5547b598240b\" (UID: \"283ba863-3c4a-4b64-8c59-5547b598240b\") " Sep 13 00:08:02.192857 kubelet[2765]: I0913 00:08:02.192784 2765 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:02.192857 kubelet[2765]: I0913 00:08:02.192802 2765 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/283ba863-3c4a-4b64-8c59-5547b598240b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:02.192857 kubelet[2765]: I0913 00:08:02.192819 2765 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:02.192857 kubelet[2765]: I0913 00:08:02.192830 2765 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:02.193247 kubelet[2765]: I0913 00:08:02.192842 2765 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:02.193247 kubelet[2765]: I0913 00:08:02.192855 2765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5j4n\" (UniqueName: 
\"kubernetes.io/projected/5b569177-58c3-45bc-aa33-16e435816c8f-kube-api-access-x5j4n\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:02.193247 kubelet[2765]: I0913 00:08:02.192869 2765 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:02.193247 kubelet[2765]: I0913 00:08:02.192880 2765 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/283ba863-3c4a-4b64-8c59-5547b598240b-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:02.193247 kubelet[2765]: I0913 00:08:02.192892 2765 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:02.193247 kubelet[2765]: I0913 00:08:02.192905 2765 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/283ba863-3c4a-4b64-8c59-5547b598240b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:02.193247 kubelet[2765]: I0913 00:08:02.192916 2765 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b569177-58c3-45bc-aa33-16e435816c8f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:02.193247 kubelet[2765]: I0913 00:08:02.192818 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "283ba863-3c4a-4b64-8c59-5547b598240b" (UID: "283ba863-3c4a-4b64-8c59-5547b598240b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:08:02.193624 kubelet[2765]: I0913 00:08:02.192850 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "283ba863-3c4a-4b64-8c59-5547b598240b" (UID: "283ba863-3c4a-4b64-8c59-5547b598240b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:08:02.193624 kubelet[2765]: I0913 00:08:02.192927 2765 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:02.196788 kubelet[2765]: I0913 00:08:02.196712 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/283ba863-3c4a-4b64-8c59-5547b598240b-kube-api-access-2wqxh" (OuterVolumeSpecName: "kube-api-access-2wqxh") pod "283ba863-3c4a-4b64-8c59-5547b598240b" (UID: "283ba863-3c4a-4b64-8c59-5547b598240b"). InnerVolumeSpecName "kube-api-access-2wqxh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:08:02.234472 kubelet[2765]: I0913 00:08:02.234400 2765 scope.go:117] "RemoveContainer" containerID="716b2880babd39da05a92342d3aa28fce3dae425c4554dcb83404802e3ebc7a8" Sep 13 00:08:02.236549 containerd[1583]: time="2025-09-13T00:08:02.236493275Z" level=info msg="RemoveContainer for \"716b2880babd39da05a92342d3aa28fce3dae425c4554dcb83404802e3ebc7a8\"" Sep 13 00:08:02.293549 kubelet[2765]: I0913 00:08:02.293488 2765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wqxh\" (UniqueName: \"kubernetes.io/projected/283ba863-3c4a-4b64-8c59-5547b598240b-kube-api-access-2wqxh\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:02.293549 kubelet[2765]: I0913 00:08:02.293525 2765 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:02.293549 kubelet[2765]: I0913 00:08:02.293535 2765 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/283ba863-3c4a-4b64-8c59-5547b598240b-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:02.325148 kubelet[2765]: E0913 00:08:02.325086 2765 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:08:02.454618 containerd[1583]: time="2025-09-13T00:08:02.454370714Z" level=info msg="RemoveContainer for \"716b2880babd39da05a92342d3aa28fce3dae425c4554dcb83404802e3ebc7a8\" returns successfully" Sep 13 00:08:02.455102 kubelet[2765]: I0913 00:08:02.455004 2765 scope.go:117] "RemoveContainer" containerID="4639952cb5b1eb8e29a8be30b295150c90959aa4236ba1541385c7b72d7fa4b9" Sep 13 00:08:02.456622 containerd[1583]: time="2025-09-13T00:08:02.456555983Z" level=info msg="RemoveContainer for \"4639952cb5b1eb8e29a8be30b295150c90959aa4236ba1541385c7b72d7fa4b9\"" Sep 13 00:08:02.713794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892-rootfs.mount: Deactivated successfully. Sep 13 00:08:02.714034 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892-shm.mount: Deactivated successfully. Sep 13 00:08:02.714209 systemd[1]: var-lib-kubelet-pods-5b569177\x2d58c3\x2d45bc\x2daa33\x2d16e435816c8f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx5j4n.mount: Deactivated successfully. Sep 13 00:08:02.714388 systemd[1]: var-lib-kubelet-pods-283ba863\x2d3c4a\x2d4b64\x2d8c59\x2d5547b598240b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2wqxh.mount: Deactivated successfully. Sep 13 00:08:02.714583 systemd[1]: var-lib-kubelet-pods-283ba863\x2d3c4a\x2d4b64\x2d8c59\x2d5547b598240b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:08:02.714803 systemd[1]: var-lib-kubelet-pods-283ba863\x2d3c4a\x2d4b64\x2d8c59\x2d5547b598240b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 13 00:08:02.784230 kubelet[2765]: I0913 00:08:02.784182 2765 scope.go:117] "RemoveContainer" containerID="e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4" Sep 13 00:08:02.787710 containerd[1583]: time="2025-09-13T00:08:02.787677578Z" level=info msg="RemoveContainer for \"e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4\"" Sep 13 00:08:02.968923 containerd[1583]: time="2025-09-13T00:08:02.968462961Z" level=info msg="RemoveContainer for \"4639952cb5b1eb8e29a8be30b295150c90959aa4236ba1541385c7b72d7fa4b9\" returns successfully" Sep 13 00:08:02.969253 kubelet[2765]: I0913 00:08:02.968811 2765 scope.go:117] "RemoveContainer" containerID="f8f51c35ccf285e1f6cc55c95f22f3fcee1a2c3d90f67dc66162ecde23771e80" Sep 13 00:08:02.991730 containerd[1583]: time="2025-09-13T00:08:02.991662960Z" level=info msg="RemoveContainer for \"f8f51c35ccf285e1f6cc55c95f22f3fcee1a2c3d90f67dc66162ecde23771e80\"" Sep 13 00:08:03.098764 containerd[1583]: time="2025-09-13T00:08:03.098697604Z" level=info msg="RemoveContainer for \"e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4\" returns successfully" Sep 13 00:08:03.099063 kubelet[2765]: I0913 00:08:03.099006 2765 scope.go:117] "RemoveContainer" containerID="e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4" Sep 13 00:08:03.099454 containerd[1583]: time="2025-09-13T00:08:03.099318052Z" level=error msg="ContainerStatus for \"e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4\": not found" Sep 13 00:08:03.099526 kubelet[2765]: E0913 00:08:03.099492 2765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4\": not found" containerID="e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4" Sep 13 00:08:03.099640 kubelet[2765]: I0913 00:08:03.099540 2765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4"} err="failed to get container status \"e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4\": rpc error: code = NotFound desc = an error occurred when try to find container \"e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4\": not found" Sep 13 00:08:03.099687 kubelet[2765]: I0913 00:08:03.099649 2765 scope.go:117] "RemoveContainer" containerID="0c413de3db3796ff3a5d3421571d18924e6da8a24f053dfe63230f9cded45f6e" Sep 13 00:08:03.101004 containerd[1583]: time="2025-09-13T00:08:03.100961513Z" level=info msg="RemoveContainer for \"0c413de3db3796ff3a5d3421571d18924e6da8a24f053dfe63230f9cded45f6e\"" Sep 13 00:08:03.413260 containerd[1583]: time="2025-09-13T00:08:03.412881126Z" level=info msg="RemoveContainer for \"f8f51c35ccf285e1f6cc55c95f22f3fcee1a2c3d90f67dc66162ecde23771e80\" returns successfully" Sep 13 00:08:03.413426 kubelet[2765]: I0913 00:08:03.413203 2765 scope.go:117] "RemoveContainer" containerID="e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4" Sep 13 00:08:03.414515 containerd[1583]: time="2025-09-13T00:08:03.414225028Z" level=error msg="ContainerStatus for \"e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find 
container \"e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4\": not found" Sep 13 00:08:03.414567 kubelet[2765]: E0913 00:08:03.414404 2765 kuberuntime_gc.go:150] "Failed to remove container" err="failed to get container status \"e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4\": rpc error: code = NotFound desc = an error occurred when try to find container \"e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4\": not found" containerID="e673a6f0464fcbb3d7f11ba1813c64aea4f4f930be6dcf038d2247b613efefb4" Sep 13 00:08:03.414567 kubelet[2765]: I0913 00:08:03.414429 2765 scope.go:117] "RemoveContainer" containerID="a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e" Sep 13 00:08:03.417210 containerd[1583]: time="2025-09-13T00:08:03.417148380Z" level=info msg="RemoveContainer for \"0c413de3db3796ff3a5d3421571d18924e6da8a24f053dfe63230f9cded45f6e\" returns successfully" Sep 13 00:08:03.417455 kubelet[2765]: I0913 00:08:03.417423 2765 scope.go:117] "RemoveContainer" containerID="f8f51c35ccf285e1f6cc55c95f22f3fcee1a2c3d90f67dc66162ecde23771e80" Sep 13 00:08:03.418235 containerd[1583]: time="2025-09-13T00:08:03.417825797Z" level=error msg="ContainerStatus for \"f8f51c35ccf285e1f6cc55c95f22f3fcee1a2c3d90f67dc66162ecde23771e80\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8f51c35ccf285e1f6cc55c95f22f3fcee1a2c3d90f67dc66162ecde23771e80\": not found" Sep 13 00:08:03.418235 containerd[1583]: time="2025-09-13T00:08:03.417917722Z" level=info msg="RemoveContainer for \"a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e\"" Sep 13 00:08:03.418349 kubelet[2765]: E0913 00:08:03.417992 2765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f8f51c35ccf285e1f6cc55c95f22f3fcee1a2c3d90f67dc66162ecde23771e80\": not found" containerID="f8f51c35ccf285e1f6cc55c95f22f3fcee1a2c3d90f67dc66162ecde23771e80" Sep 13 00:08:03.418349 kubelet[2765]: I0913 00:08:03.418024 2765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f8f51c35ccf285e1f6cc55c95f22f3fcee1a2c3d90f67dc66162ecde23771e80"} err="failed to get container status \"f8f51c35ccf285e1f6cc55c95f22f3fcee1a2c3d90f67dc66162ecde23771e80\": rpc error: code = NotFound desc = an error occurred when try to find container \"f8f51c35ccf285e1f6cc55c95f22f3fcee1a2c3d90f67dc66162ecde23771e80\": not found" Sep 13 00:08:03.418349 kubelet[2765]: I0913 00:08:03.418062 2765 scope.go:117] "RemoveContainer" containerID="a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e" Sep 13 00:08:03.419060 containerd[1583]: time="2025-09-13T00:08:03.418934874Z" level=info msg="RemoveContainer for \"a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e\"" Sep 13 00:08:03.419060 containerd[1583]: time="2025-09-13T00:08:03.419013413Z" level=error msg="RemoveContainer for \"a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e\" failed" error="failed to set removing state for container \"a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e\": container is already in removing state" Sep 13 00:08:03.419254 kubelet[2765]: E0913 00:08:03.419166 2765 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e\": container is already in removing state" 
containerID="a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e" Sep 13 00:08:03.419254 kubelet[2765]: I0913 00:08:03.419235 2765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e"} err="rpc error: code = Unknown desc = failed to set removing state for container \"a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e\": container is already in removing state" Sep 13 00:08:03.419254 kubelet[2765]: I0913 00:08:03.419255 2765 scope.go:117] "RemoveContainer" containerID="4639952cb5b1eb8e29a8be30b295150c90959aa4236ba1541385c7b72d7fa4b9" Sep 13 00:08:03.419486 containerd[1583]: time="2025-09-13T00:08:03.419416197Z" level=error msg="ContainerStatus for \"4639952cb5b1eb8e29a8be30b295150c90959aa4236ba1541385c7b72d7fa4b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4639952cb5b1eb8e29a8be30b295150c90959aa4236ba1541385c7b72d7fa4b9\": not found" Sep 13 00:08:03.419631 kubelet[2765]: E0913 00:08:03.419541 2765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4639952cb5b1eb8e29a8be30b295150c90959aa4236ba1541385c7b72d7fa4b9\": not found" containerID="4639952cb5b1eb8e29a8be30b295150c90959aa4236ba1541385c7b72d7fa4b9" Sep 13 00:08:03.419631 kubelet[2765]: I0913 00:08:03.419584 2765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4639952cb5b1eb8e29a8be30b295150c90959aa4236ba1541385c7b72d7fa4b9"} err="failed to get container status \"4639952cb5b1eb8e29a8be30b295150c90959aa4236ba1541385c7b72d7fa4b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"4639952cb5b1eb8e29a8be30b295150c90959aa4236ba1541385c7b72d7fa4b9\": not found" Sep 13 00:08:03.419631 kubelet[2765]: I0913 00:08:03.419616 2765 scope.go:117] "RemoveContainer" containerID="0c413de3db3796ff3a5d3421571d18924e6da8a24f053dfe63230f9cded45f6e" Sep 13 00:08:03.419942 containerd[1583]: time="2025-09-13T00:08:03.419899486Z" level=error msg="ContainerStatus for \"0c413de3db3796ff3a5d3421571d18924e6da8a24f053dfe63230f9cded45f6e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0c413de3db3796ff3a5d3421571d18924e6da8a24f053dfe63230f9cded45f6e\": not found" Sep 13 00:08:03.420068 kubelet[2765]: E0913 00:08:03.420025 2765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0c413de3db3796ff3a5d3421571d18924e6da8a24f053dfe63230f9cded45f6e\": not found" containerID="0c413de3db3796ff3a5d3421571d18924e6da8a24f053dfe63230f9cded45f6e" Sep 13 00:08:03.420130 kubelet[2765]: I0913 00:08:03.420066 2765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0c413de3db3796ff3a5d3421571d18924e6da8a24f053dfe63230f9cded45f6e"} err="failed to get container status \"0c413de3db3796ff3a5d3421571d18924e6da8a24f053dfe63230f9cded45f6e\": rpc error: code = NotFound desc = an error occurred when try to find container \"0c413de3db3796ff3a5d3421571d18924e6da8a24f053dfe63230f9cded45f6e\": not found" Sep 13 00:08:03.420130 kubelet[2765]: I0913 00:08:03.420083 2765 scope.go:117] "RemoveContainer" containerID="f8f51c35ccf285e1f6cc55c95f22f3fcee1a2c3d90f67dc66162ecde23771e80" Sep 13 00:08:03.420253 containerd[1583]: 
time="2025-09-13T00:08:03.420223241Z" level=error msg="ContainerStatus for \"f8f51c35ccf285e1f6cc55c95f22f3fcee1a2c3d90f67dc66162ecde23771e80\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8f51c35ccf285e1f6cc55c95f22f3fcee1a2c3d90f67dc66162ecde23771e80\": not found" Sep 13 00:08:03.420475 kubelet[2765]: I0913 00:08:03.420434 2765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f8f51c35ccf285e1f6cc55c95f22f3fcee1a2c3d90f67dc66162ecde23771e80"} err="failed to get container status \"f8f51c35ccf285e1f6cc55c95f22f3fcee1a2c3d90f67dc66162ecde23771e80\": rpc error: code = NotFound desc = an error occurred when try to find container \"f8f51c35ccf285e1f6cc55c95f22f3fcee1a2c3d90f67dc66162ecde23771e80\": not found" Sep 13 00:08:03.420525 kubelet[2765]: I0913 00:08:03.420481 2765 scope.go:117] "RemoveContainer" containerID="a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e" Sep 13 00:08:03.421513 containerd[1583]: time="2025-09-13T00:08:03.421491259Z" level=info msg="RemoveContainer for \"a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e\"" Sep 13 00:08:03.421631 containerd[1583]: time="2025-09-13T00:08:03.421573815Z" level=error msg="RemoveContainer for \"a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e\" failed" error="failed to set removing state for container \"a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e\": container is already in removing state" Sep 13 00:08:03.421757 kubelet[2765]: E0913 00:08:03.421726 2765 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e\": container is already in removing state" containerID="a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e" Sep 13 00:08:03.421809 kubelet[2765]: I0913 00:08:03.421773 2765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e"} err="rpc error: code = Unknown desc = failed to set removing state for container \"a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e\": container is already in removing state" Sep 13 00:08:03.421809 kubelet[2765]: I0913 00:08:03.421795 2765 scope.go:117] "RemoveContainer" containerID="4639952cb5b1eb8e29a8be30b295150c90959aa4236ba1541385c7b72d7fa4b9" Sep 13 00:08:03.421999 containerd[1583]: time="2025-09-13T00:08:03.421955861Z" level=error msg="ContainerStatus for \"4639952cb5b1eb8e29a8be30b295150c90959aa4236ba1541385c7b72d7fa4b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4639952cb5b1eb8e29a8be30b295150c90959aa4236ba1541385c7b72d7fa4b9\": not found" Sep 13 00:08:03.422177 kubelet[2765]: I0913 00:08:03.422112 2765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4639952cb5b1eb8e29a8be30b295150c90959aa4236ba1541385c7b72d7fa4b9"} err="failed to get container status \"4639952cb5b1eb8e29a8be30b295150c90959aa4236ba1541385c7b72d7fa4b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"4639952cb5b1eb8e29a8be30b295150c90959aa4236ba1541385c7b72d7fa4b9\": not found" Sep 13 00:08:03.427475 containerd[1583]: time="2025-09-13T00:08:03.427386796Z" level=info msg="RemoveContainer for 
\"a4ce9c35cae2af21ebf873cba184e48b6e1f1326eb2520d64c9e1867ea79583e\" returns successfully" Sep 13 00:08:03.427894 kubelet[2765]: I0913 00:08:03.427750 2765 scope.go:117] "RemoveContainer" containerID="0c413de3db3796ff3a5d3421571d18924e6da8a24f053dfe63230f9cded45f6e" Sep 13 00:08:03.428239 containerd[1583]: time="2025-09-13T00:08:03.428196503Z" level=error msg="ContainerStatus for \"0c413de3db3796ff3a5d3421571d18924e6da8a24f053dfe63230f9cded45f6e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0c413de3db3796ff3a5d3421571d18924e6da8a24f053dfe63230f9cded45f6e\": not found" Sep 13 00:08:03.428513 kubelet[2765]: E0913 00:08:03.428469 2765 kuberuntime_gc.go:150] "Failed to remove container" err="failed to get container status \"0c413de3db3796ff3a5d3421571d18924e6da8a24f053dfe63230f9cded45f6e\": rpc error: code = NotFound desc = an error occurred when try to find container \"0c413de3db3796ff3a5d3421571d18924e6da8a24f053dfe63230f9cded45f6e\": not found" containerID="0c413de3db3796ff3a5d3421571d18924e6da8a24f053dfe63230f9cded45f6e" Sep 13 00:08:03.430018 containerd[1583]: time="2025-09-13T00:08:03.429974008Z" level=info msg="StopPodSandbox for \"2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892\"" Sep 13 00:08:03.430183 containerd[1583]: time="2025-09-13T00:08:03.430144282Z" level=info msg="TearDown network for sandbox \"2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892\" successfully" Sep 13 00:08:03.430234 containerd[1583]: time="2025-09-13T00:08:03.430182414Z" level=info msg="StopPodSandbox for \"2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892\" returns successfully" Sep 13 00:08:03.430747 containerd[1583]: time="2025-09-13T00:08:03.430717611Z" level=info msg="RemovePodSandbox for \"2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892\"" Sep 13 00:08:03.430789 containerd[1583]: time="2025-09-13T00:08:03.430753239Z" level=info msg="Forcibly stopping sandbox \"2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892\"" Sep 13 00:08:03.430842 containerd[1583]: time="2025-09-13T00:08:03.430811519Z" level=info msg="TearDown network for sandbox \"2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892\" successfully" Sep 13 00:08:03.438647 containerd[1583]: time="2025-09-13T00:08:03.438546469Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:08:03.438840 containerd[1583]: time="2025-09-13T00:08:03.438685874Z" level=info msg="RemovePodSandbox \"2655b9728d4c51e416661c944099b274a03c5e63f144ee7323a92e0ce7a6c892\" returns successfully" Sep 13 00:08:03.439557 containerd[1583]: time="2025-09-13T00:08:03.439504549Z" level=info msg="StopPodSandbox for \"7f0c7d2ce0ef667c889e00c465488e52a1e07718e5a0356b32f10802b4da8c88\"" Sep 13 00:08:03.439724 containerd[1583]: time="2025-09-13T00:08:03.439695411Z" level=info msg="TearDown network for sandbox \"7f0c7d2ce0ef667c889e00c465488e52a1e07718e5a0356b32f10802b4da8c88\" successfully" Sep 13 00:08:03.439724 containerd[1583]: time="2025-09-13T00:08:03.439717724Z" level=info msg="StopPodSandbox for \"7f0c7d2ce0ef667c889e00c465488e52a1e07718e5a0356b32f10802b4da8c88\" returns successfully" Sep 13 00:08:03.440273 containerd[1583]: time="2025-09-13T00:08:03.440221781Z" level=info msg="RemovePodSandbox for \"7f0c7d2ce0ef667c889e00c465488e52a1e07718e5a0356b32f10802b4da8c88\"" Sep 13 00:08:03.440330 containerd[1583]: time="2025-09-13T00:08:03.440283728Z" level=info msg="Forcibly stopping sandbox \"7f0c7d2ce0ef667c889e00c465488e52a1e07718e5a0356b32f10802b4da8c88\"" Sep 13 00:08:03.440451 containerd[1583]: time="2025-09-13T00:08:03.440420158Z" level=info msg="TearDown network for sandbox \"7f0c7d2ce0ef667c889e00c465488e52a1e07718e5a0356b32f10802b4da8c88\" successfully" Sep 13 00:08:03.447252 containerd[1583]: time="2025-09-13T00:08:03.447157574Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7f0c7d2ce0ef667c889e00c465488e52a1e07718e5a0356b32f10802b4da8c88\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:08:03.447252 containerd[1583]: time="2025-09-13T00:08:03.447256191Z" level=info msg="RemovePodSandbox \"7f0c7d2ce0ef667c889e00c465488e52a1e07718e5a0356b32f10802b4da8c88\" returns successfully" Sep 13 00:08:03.677798 sshd[4650]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:03.685913 systemd[1]: Started sshd@32-10.0.0.70:22-10.0.0.1:44716.service - OpenSSH per-connection server daemon (10.0.0.1:44716). Sep 13 00:08:03.687149 systemd[1]: sshd@31-10.0.0.70:22-10.0.0.1:44706.service: Deactivated successfully. Sep 13 00:08:03.690174 systemd[1]: session-32.scope: Deactivated successfully. Sep 13 00:08:03.692510 systemd-logind[1563]: Session 32 logged out. Waiting for processes to exit. Sep 13 00:08:03.694574 systemd-logind[1563]: Removed session 32. Sep 13 00:08:03.725913 sshd[4818]: Accepted publickey for core from 10.0.0.1 port 44716 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:08:03.727791 sshd[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:03.732900 systemd-logind[1563]: New session 33 of user core. Sep 13 00:08:03.740889 systemd[1]: Started session-33.scope - Session 33 of User core. 
Sep 13 00:08:04.236996 kubelet[2765]: I0913 00:08:04.236946 2765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="283ba863-3c4a-4b64-8c59-5547b598240b" path="/var/lib/kubelet/pods/283ba863-3c4a-4b64-8c59-5547b598240b/volumes" Sep 13 00:08:04.237863 kubelet[2765]: I0913 00:08:04.237839 2765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b569177-58c3-45bc-aa33-16e435816c8f" path="/var/lib/kubelet/pods/5b569177-58c3-45bc-aa33-16e435816c8f/volumes" Sep 13 00:08:05.017346 sshd[4818]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:05.026868 systemd[1]: Started sshd@33-10.0.0.70:22-10.0.0.1:44732.service - OpenSSH per-connection server daemon (10.0.0.1:44732). Sep 13 00:08:05.027622 systemd[1]: sshd@32-10.0.0.70:22-10.0.0.1:44716.service: Deactivated successfully. Sep 13 00:08:05.039405 systemd[1]: session-33.scope: Deactivated successfully. Sep 13 00:08:05.043068 systemd-logind[1563]: Session 33 logged out. Waiting for processes to exit. Sep 13 00:08:05.046129 systemd-logind[1563]: Removed session 33. Sep 13 00:08:05.059138 kubelet[2765]: E0913 00:08:05.059065 2765 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="283ba863-3c4a-4b64-8c59-5547b598240b" containerName="cilium-agent" Sep 13 00:08:05.059138 kubelet[2765]: E0913 00:08:05.059107 2765 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="283ba863-3c4a-4b64-8c59-5547b598240b" containerName="mount-bpf-fs" Sep 13 00:08:05.059138 kubelet[2765]: E0913 00:08:05.059119 2765 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="283ba863-3c4a-4b64-8c59-5547b598240b" containerName="clean-cilium-state" Sep 13 00:08:05.059138 kubelet[2765]: E0913 00:08:05.059128 2765 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5b569177-58c3-45bc-aa33-16e435816c8f" containerName="cilium-operator" Sep 13 00:08:05.059138 kubelet[2765]: E0913 00:08:05.059143 2765 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="283ba863-3c4a-4b64-8c59-5547b598240b" containerName="mount-cgroup" Sep 13 00:08:05.059138 kubelet[2765]: E0913 00:08:05.059153 2765 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="283ba863-3c4a-4b64-8c59-5547b598240b" containerName="apply-sysctl-overwrites" Sep 13 00:08:05.061264 kubelet[2765]: I0913 00:08:05.059183 2765 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b569177-58c3-45bc-aa33-16e435816c8f" containerName="cilium-operator" Sep 13 00:08:05.061264 kubelet[2765]: I0913 00:08:05.059194 2765 memory_manager.go:354] "RemoveStaleState removing state" podUID="283ba863-3c4a-4b64-8c59-5547b598240b" containerName="cilium-agent" Sep 13 00:08:05.074664 sshd[4832]: Accepted publickey for core from 10.0.0.1 port 44732 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:08:05.077079 sshd[4832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:05.085485 systemd-logind[1563]: New session 34 of user core. Sep 13 00:08:05.092363 systemd[1]: Started session-34.scope - Session 34 of User core. Sep 13 00:08:05.149588 sshd[4832]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:05.159841 systemd[1]: Started sshd@34-10.0.0.70:22-10.0.0.1:44746.service - OpenSSH per-connection server daemon (10.0.0.1:44746). Sep 13 00:08:05.160370 systemd[1]: sshd@33-10.0.0.70:22-10.0.0.1:44732.service: Deactivated successfully. Sep 13 00:08:05.164676 systemd-logind[1563]: Session 34 logged out. Waiting for processes to exit. 
Sep 13 00:08:05.165812 systemd[1]: session-34.scope: Deactivated successfully. Sep 13 00:08:05.166802 systemd-logind[1563]: Removed session 34. Sep 13 00:08:05.195372 sshd[4841]: Accepted publickey for core from 10.0.0.1 port 44746 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:08:05.197363 sshd[4841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:05.202261 systemd-logind[1563]: New session 35 of user core. Sep 13 00:08:05.210112 kubelet[2765]: I0913 00:08:05.210065 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/90c43331-14e5-444b-bcd8-4520f10be94d-cilium-ipsec-secrets\") pod \"cilium-dn5qr\" (UID: \"90c43331-14e5-444b-bcd8-4520f10be94d\") " pod="kube-system/cilium-dn5qr" Sep 13 00:08:05.210112 kubelet[2765]: I0913 00:08:05.210115 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/90c43331-14e5-444b-bcd8-4520f10be94d-hubble-tls\") pod \"cilium-dn5qr\" (UID: \"90c43331-14e5-444b-bcd8-4520f10be94d\") " pod="kube-system/cilium-dn5qr" Sep 13 00:08:05.210288 kubelet[2765]: I0913 00:08:05.210138 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90c43331-14e5-444b-bcd8-4520f10be94d-cilium-config-path\") pod \"cilium-dn5qr\" (UID: \"90c43331-14e5-444b-bcd8-4520f10be94d\") " pod="kube-system/cilium-dn5qr" Sep 13 00:08:05.210288 kubelet[2765]: I0913 00:08:05.210157 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/90c43331-14e5-444b-bcd8-4520f10be94d-cilium-cgroup\") pod \"cilium-dn5qr\" (UID: \"90c43331-14e5-444b-bcd8-4520f10be94d\") " pod="kube-system/cilium-dn5qr" Sep 13 00:08:05.210288 kubelet[2765]: I0913 00:08:05.210172 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/90c43331-14e5-444b-bcd8-4520f10be94d-hostproc\") pod \"cilium-dn5qr\" (UID: \"90c43331-14e5-444b-bcd8-4520f10be94d\") " pod="kube-system/cilium-dn5qr" Sep 13 00:08:05.210288 kubelet[2765]: I0913 00:08:05.210186 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90c43331-14e5-444b-bcd8-4520f10be94d-lib-modules\") pod \"cilium-dn5qr\" (UID: \"90c43331-14e5-444b-bcd8-4520f10be94d\") " pod="kube-system/cilium-dn5qr" Sep 13 00:08:05.210288 kubelet[2765]: I0913 00:08:05.210256 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjrtw\" (UniqueName: \"kubernetes.io/projected/90c43331-14e5-444b-bcd8-4520f10be94d-kube-api-access-kjrtw\") pod \"cilium-dn5qr\" (UID: \"90c43331-14e5-444b-bcd8-4520f10be94d\") " pod="kube-system/cilium-dn5qr" Sep 13 00:08:05.210529 kubelet[2765]: I0913 00:08:05.210307 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/90c43331-14e5-444b-bcd8-4520f10be94d-cni-path\") pod \"cilium-dn5qr\" (UID: \"90c43331-14e5-444b-bcd8-4520f10be94d\") " pod="kube-system/cilium-dn5qr" Sep 13 00:08:05.210529 kubelet[2765]: I0913 00:08:05.210334 2765 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/90c43331-14e5-444b-bcd8-4520f10be94d-etc-cni-netd\") pod \"cilium-dn5qr\" (UID: \"90c43331-14e5-444b-bcd8-4520f10be94d\") " pod="kube-system/cilium-dn5qr" Sep 13 00:08:05.210529 kubelet[2765]: I0913 00:08:05.210351 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90c43331-14e5-444b-bcd8-4520f10be94d-xtables-lock\") pod \"cilium-dn5qr\" (UID: \"90c43331-14e5-444b-bcd8-4520f10be94d\") " pod="kube-system/cilium-dn5qr" Sep 13 00:08:05.210529 kubelet[2765]: I0913 00:08:05.210378 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/90c43331-14e5-444b-bcd8-4520f10be94d-clustermesh-secrets\") pod \"cilium-dn5qr\" (UID: \"90c43331-14e5-444b-bcd8-4520f10be94d\") " pod="kube-system/cilium-dn5qr" Sep 13 00:08:05.210529 kubelet[2765]: I0913 00:08:05.210394 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/90c43331-14e5-444b-bcd8-4520f10be94d-host-proc-sys-net\") pod \"cilium-dn5qr\" (UID: \"90c43331-14e5-444b-bcd8-4520f10be94d\") " pod="kube-system/cilium-dn5qr" Sep 13 00:08:05.210529 kubelet[2765]: I0913 00:08:05.210414 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/90c43331-14e5-444b-bcd8-4520f10be94d-cilium-run\") pod \"cilium-dn5qr\" (UID: \"90c43331-14e5-444b-bcd8-4520f10be94d\") " pod="kube-system/cilium-dn5qr" Sep 13 00:08:05.210737 kubelet[2765]: I0913 00:08:05.210427 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/90c43331-14e5-444b-bcd8-4520f10be94d-bpf-maps\") pod \"cilium-dn5qr\" (UID: \"90c43331-14e5-444b-bcd8-4520f10be94d\") " pod="kube-system/cilium-dn5qr" Sep 13 00:08:05.210737 kubelet[2765]: I0913 00:08:05.210442 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/90c43331-14e5-444b-bcd8-4520f10be94d-host-proc-sys-kernel\") pod \"cilium-dn5qr\" (UID: \"90c43331-14e5-444b-bcd8-4520f10be94d\") " pod="kube-system/cilium-dn5qr" Sep 13 00:08:05.216947 systemd[1]: Started session-35.scope - Session 35 of User core. Sep 13 00:08:05.371712 kubelet[2765]: E0913 00:08:05.371647 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:05.372364 containerd[1583]: time="2025-09-13T00:08:05.372316768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dn5qr,Uid:90c43331-14e5-444b-bcd8-4520f10be94d,Namespace:kube-system,Attempt:0,}" Sep 13 00:08:05.398120 containerd[1583]: time="2025-09-13T00:08:05.397970044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:05.398120 containerd[1583]: time="2025-09-13T00:08:05.398023705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:05.398120 containerd[1583]: time="2025-09-13T00:08:05.398047010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:05.398365 containerd[1583]: time="2025-09-13T00:08:05.398181916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:05.445321 containerd[1583]: time="2025-09-13T00:08:05.445262107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dn5qr,Uid:90c43331-14e5-444b-bcd8-4520f10be94d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e741a5783af10b21c3212b1cf27e8999789eebd89aa75667e298afc26acf533b\"" Sep 13 00:08:05.446161 kubelet[2765]: E0913 00:08:05.446134 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:05.448267 containerd[1583]: time="2025-09-13T00:08:05.448227681Z" level=info msg="CreateContainer within sandbox \"e741a5783af10b21c3212b1cf27e8999789eebd89aa75667e298afc26acf533b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:08:05.469295 containerd[1583]: time="2025-09-13T00:08:05.469216925Z" level=info msg="CreateContainer within sandbox \"e741a5783af10b21c3212b1cf27e8999789eebd89aa75667e298afc26acf533b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"63c8c8609e23942be9ee3a6329a99efac91cccbba991e1552bedab48fbb91973\"" Sep 13 00:08:05.470069 containerd[1583]: time="2025-09-13T00:08:05.470021544Z" level=info msg="StartContainer for \"63c8c8609e23942be9ee3a6329a99efac91cccbba991e1552bedab48fbb91973\"" Sep 13 00:08:05.702284 containerd[1583]: time="2025-09-13T00:08:05.702055478Z" level=info msg="StartContainer for \"63c8c8609e23942be9ee3a6329a99efac91cccbba991e1552bedab48fbb91973\" returns successfully" Sep 13 00:08:05.797156 kubelet[2765]: E0913 00:08:05.797112 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:05.951212 containerd[1583]: time="2025-09-13T00:08:05.951120744Z" level=info msg="shim disconnected" id=63c8c8609e23942be9ee3a6329a99efac91cccbba991e1552bedab48fbb91973 namespace=k8s.io Sep 13 00:08:05.951212 containerd[1583]: time="2025-09-13T00:08:05.951197590Z" level=warning msg="cleaning up after shim disconnected" id=63c8c8609e23942be9ee3a6329a99efac91cccbba991e1552bedab48fbb91973 namespace=k8s.io Sep 13 00:08:05.951212 containerd[1583]: time="2025-09-13T00:08:05.951209743Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:08:05.967726 containerd[1583]: time="2025-09-13T00:08:05.967513633Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 13 00:08:06.286791 kubelet[2765]: I0913 00:08:06.286548 2765 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:08:06Z","lastTransitionTime":"2025-09-13T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized"} Sep 13 00:08:06.801636 kubelet[2765]: E0913 00:08:06.801570 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:06.805004 containerd[1583]: time="2025-09-13T00:08:06.804884295Z" level=info msg="CreateContainer within sandbox \"e741a5783af10b21c3212b1cf27e8999789eebd89aa75667e298afc26acf533b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:08:06.829108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2759380789.mount: Deactivated successfully. Sep 13 00:08:06.843386 containerd[1583]: time="2025-09-13T00:08:06.843297475Z" level=info msg="CreateContainer within sandbox \"e741a5783af10b21c3212b1cf27e8999789eebd89aa75667e298afc26acf533b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d6643b6cafb0eadeb9a2ec87c09c14041ebc18561ba26a738d3e3741148109d5\"" Sep 13 00:08:06.844110 containerd[1583]: time="2025-09-13T00:08:06.844078891Z" level=info msg="StartContainer for \"d6643b6cafb0eadeb9a2ec87c09c14041ebc18561ba26a738d3e3741148109d5\"" Sep 13 00:08:06.915829 containerd[1583]: time="2025-09-13T00:08:06.915735885Z" level=info msg="StartContainer for \"d6643b6cafb0eadeb9a2ec87c09c14041ebc18561ba26a738d3e3741148109d5\" returns successfully" Sep 13 00:08:06.959284 containerd[1583]: time="2025-09-13T00:08:06.959185378Z" level=info msg="shim disconnected" id=d6643b6cafb0eadeb9a2ec87c09c14041ebc18561ba26a738d3e3741148109d5 namespace=k8s.io Sep 13 00:08:06.959284 containerd[1583]: time="2025-09-13T00:08:06.959257144Z" level=warning msg="cleaning up after shim disconnected" id=d6643b6cafb0eadeb9a2ec87c09c14041ebc18561ba26a738d3e3741148109d5 namespace=k8s.io Sep 13 00:08:06.959284 containerd[1583]: time="2025-09-13T00:08:06.959269718Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:08:07.323557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6643b6cafb0eadeb9a2ec87c09c14041ebc18561ba26a738d3e3741148109d5-rootfs.mount: Deactivated successfully. 
Sep 13 00:08:07.326609 kubelet[2765]: E0913 00:08:07.326556 2765 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:08:07.808477 kubelet[2765]: E0913 00:08:07.808423 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:07.811355 containerd[1583]: time="2025-09-13T00:08:07.811189489Z" level=info msg="CreateContainer within sandbox \"e741a5783af10b21c3212b1cf27e8999789eebd89aa75667e298afc26acf533b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:08:07.892983 containerd[1583]: time="2025-09-13T00:08:07.892903336Z" level=info msg="CreateContainer within sandbox \"e741a5783af10b21c3212b1cf27e8999789eebd89aa75667e298afc26acf533b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"38a56253cc6d114ecc7fb8d201c4a6205c6feed30b67ec68ea7c95bed59a519e\"" Sep 13 00:08:07.893736 containerd[1583]: time="2025-09-13T00:08:07.893707105Z" level=info msg="StartContainer for \"38a56253cc6d114ecc7fb8d201c4a6205c6feed30b67ec68ea7c95bed59a519e\"" Sep 13 00:08:07.970570 containerd[1583]: time="2025-09-13T00:08:07.970517651Z" level=info msg="StartContainer for \"38a56253cc6d114ecc7fb8d201c4a6205c6feed30b67ec68ea7c95bed59a519e\" returns successfully" Sep 13 00:08:08.025579 containerd[1583]: time="2025-09-13T00:08:08.025451328Z" level=info msg="shim disconnected" id=38a56253cc6d114ecc7fb8d201c4a6205c6feed30b67ec68ea7c95bed59a519e namespace=k8s.io Sep 13 00:08:08.025579 containerd[1583]: time="2025-09-13T00:08:08.025546910Z" level=warning msg="cleaning up after shim disconnected" id=38a56253cc6d114ecc7fb8d201c4a6205c6feed30b67ec68ea7c95bed59a519e namespace=k8s.io Sep 13 00:08:08.025579 containerd[1583]: time="2025-09-13T00:08:08.025564533Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:08:08.323089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38a56253cc6d114ecc7fb8d201c4a6205c6feed30b67ec68ea7c95bed59a519e-rootfs.mount: Deactivated successfully. 
Sep 13 00:08:08.814126 kubelet[2765]: E0913 00:08:08.814078 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:08.816140 containerd[1583]: time="2025-09-13T00:08:08.816107191Z" level=info msg="CreateContainer within sandbox \"e741a5783af10b21c3212b1cf27e8999789eebd89aa75667e298afc26acf533b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:08:08.833725 containerd[1583]: time="2025-09-13T00:08:08.832443495Z" level=info msg="CreateContainer within sandbox \"e741a5783af10b21c3212b1cf27e8999789eebd89aa75667e298afc26acf533b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3e6de2314fb6d724d427667ec741eda2a05a31e6ff1dfe117a86fb6a37ec6a5e\"" Sep 13 00:08:08.833725 containerd[1583]: time="2025-09-13T00:08:08.833464517Z" level=info msg="StartContainer for \"3e6de2314fb6d724d427667ec741eda2a05a31e6ff1dfe117a86fb6a37ec6a5e\"" Sep 13 00:08:08.898169 containerd[1583]: time="2025-09-13T00:08:08.898099917Z" level=info msg="StartContainer for \"3e6de2314fb6d724d427667ec741eda2a05a31e6ff1dfe117a86fb6a37ec6a5e\" returns successfully" Sep 13 00:08:09.034483 containerd[1583]: time="2025-09-13T00:08:09.034404376Z" level=info msg="shim disconnected" id=3e6de2314fb6d724d427667ec741eda2a05a31e6ff1dfe117a86fb6a37ec6a5e namespace=k8s.io Sep 13 00:08:09.034483 containerd[1583]: time="2025-09-13T00:08:09.034473999Z" level=warning msg="cleaning up after shim disconnected" id=3e6de2314fb6d724d427667ec741eda2a05a31e6ff1dfe117a86fb6a37ec6a5e namespace=k8s.io Sep 13 00:08:09.034483 containerd[1583]: time="2025-09-13T00:08:09.034486151Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:08:09.323266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e6de2314fb6d724d427667ec741eda2a05a31e6ff1dfe117a86fb6a37ec6a5e-rootfs.mount: Deactivated successfully. Sep 13 00:08:09.818693 kubelet[2765]: E0913 00:08:09.818639 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:09.821474 containerd[1583]: time="2025-09-13T00:08:09.821407483Z" level=info msg="CreateContainer within sandbox \"e741a5783af10b21c3212b1cf27e8999789eebd89aa75667e298afc26acf533b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:08:09.955315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3399795607.mount: Deactivated successfully. 
Sep 13 00:08:09.971773 containerd[1583]: time="2025-09-13T00:08:09.971690510Z" level=info msg="CreateContainer within sandbox \"e741a5783af10b21c3212b1cf27e8999789eebd89aa75667e298afc26acf533b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"876a89f7f1774cb415fe8dcdc09811bc0e9b0f2103e177b8916af82f0ceeb8ab\"" Sep 13 00:08:09.972525 containerd[1583]: time="2025-09-13T00:08:09.972409688Z" level=info msg="StartContainer for \"876a89f7f1774cb415fe8dcdc09811bc0e9b0f2103e177b8916af82f0ceeb8ab\"" Sep 13 00:08:10.045841 containerd[1583]: time="2025-09-13T00:08:10.045773980Z" level=info msg="StartContainer for \"876a89f7f1774cb415fe8dcdc09811bc0e9b0f2103e177b8916af82f0ceeb8ab\" returns successfully" Sep 13 00:08:10.581631 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 13 00:08:10.833753 kubelet[2765]: E0913 00:08:10.829003 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:10.850638 kubelet[2765]: I0913 00:08:10.849978 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dn5qr" podStartSLOduration=5.849957389 podStartE2EDuration="5.849957389s" podCreationTimestamp="2025-09-13 00:08:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:08:10.848398032 +0000 UTC m=+188.701298908" watchObservedRunningTime="2025-09-13 00:08:10.849957389 +0000 UTC m=+188.702858265" Sep 13 00:08:11.828575 kubelet[2765]: E0913 00:08:11.828516 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:14.262237 systemd-networkd[1250]: lxc_health: Link UP Sep 13 00:08:14.279201 systemd-networkd[1250]: lxc_health: Gained carrier Sep 13 00:08:15.374356 kubelet[2765]: E0913 00:08:15.374289 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:15.838453 kubelet[2765]: E0913 00:08:15.838259 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:15.954954 systemd-networkd[1250]: lxc_health: Gained IPv6LL Sep 13 00:08:16.840995 kubelet[2765]: E0913 00:08:16.840932 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:17.817942 systemd[1]: run-containerd-runc-k8s.io-876a89f7f1774cb415fe8dcdc09811bc0e9b0f2103e177b8916af82f0ceeb8ab-runc.zivxC7.mount: Deactivated successfully. Sep 13 00:08:20.475073 sshd[4841]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:20.479261 systemd[1]: sshd@34-10.0.0.70:22-10.0.0.1:44746.service: Deactivated successfully. Sep 13 00:08:20.481560 systemd-logind[1563]: Session 35 logged out. Waiting for processes to exit. Sep 13 00:08:20.481725 systemd[1]: session-35.scope: Deactivated successfully. Sep 13 00:08:20.482984 systemd-logind[1563]: Removed session 35.