Sep 12 17:40:33.035453 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025
Sep 12 17:40:33.035513 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:40:33.035532 kernel: BIOS-provided physical RAM map:
Sep 12 17:40:33.035539 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 17:40:33.035545 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 12 17:40:33.035551 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 12 17:40:33.035559 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 12 17:40:33.035566 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 12 17:40:33.035572 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Sep 12 17:40:33.035579 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Sep 12 17:40:33.035588 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Sep 12 17:40:33.035595 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Sep 12 17:40:33.035604 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Sep 12 17:40:33.035611 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Sep 12 17:40:33.035621 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Sep 12 17:40:33.035628 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 12 17:40:33.035638 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Sep 12 17:40:33.035645 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Sep 12 17:40:33.035652 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 12 17:40:33.035659 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 12 17:40:33.035666 kernel: NX (Execute Disable) protection: active
Sep 12 17:40:33.035673 kernel: APIC: Static calls initialized
Sep 12 17:40:33.035680 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:40:33.035687 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Sep 12 17:40:33.035694 kernel: SMBIOS 2.8 present.
Sep 12 17:40:33.035701 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Sep 12 17:40:33.035708 kernel: Hypervisor detected: KVM
Sep 12 17:40:33.035717 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 17:40:33.035724 kernel: kvm-clock: using sched offset of 6073216287 cycles
Sep 12 17:40:33.035732 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 17:40:33.035739 kernel: tsc: Detected 2794.748 MHz processor
Sep 12 17:40:33.035746 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:40:33.035754 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:40:33.035761 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Sep 12 17:40:33.035769 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 12 17:40:33.035776 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:40:33.035786 kernel: Using GB pages for direct mapping
Sep 12 17:40:33.035793 kernel: Secure boot disabled
Sep 12 17:40:33.035800 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:40:33.035808 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 12 17:40:33.035819 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 12 17:40:33.035826 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:40:33.035834 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:40:33.035844 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 12 17:40:33.035851 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:40:33.035861 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:40:33.035869 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:40:33.035876 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:40:33.035884 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 12 17:40:33.035891 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 12 17:40:33.035902 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 12 17:40:33.035909 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 12 17:40:33.035916 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 12 17:40:33.035924 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 12 17:40:33.035931 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 12 17:40:33.035939 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 12 17:40:33.035946 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 12 17:40:33.035953 kernel: No NUMA configuration found
Sep 12 17:40:33.035963 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Sep 12 17:40:33.035974 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Sep 12 17:40:33.035981 kernel: Zone ranges:
Sep 12 17:40:33.035989 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:40:33.035996 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Sep 12 17:40:33.036003 kernel: Normal empty
Sep 12 17:40:33.036011 kernel: Movable zone start for each node
Sep 12 17:40:33.036018 kernel: Early memory node ranges
Sep 12 17:40:33.036026 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 12 17:40:33.036033 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 12 17:40:33.036040 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 12 17:40:33.036051 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Sep 12 17:40:33.036058 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Sep 12 17:40:33.036066 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Sep 12 17:40:33.036075 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Sep 12 17:40:33.036083 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:40:33.036090 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 12 17:40:33.036097 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 12 17:40:33.036105 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:40:33.036112 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Sep 12 17:40:33.036122 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 12 17:40:33.036130 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Sep 12 17:40:33.036137 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 17:40:33.036145 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 17:40:33.036152 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 17:40:33.036160 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 17:40:33.036167 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 17:40:33.036174 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:40:33.036182 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 17:40:33.036239 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 17:40:33.036246 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:40:33.036254 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 17:40:33.036261 kernel: TSC deadline timer available
Sep 12 17:40:33.036269 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 12 17:40:33.036276 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 17:40:33.036284 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 12 17:40:33.036291 kernel: kvm-guest: setup PV sched yield
Sep 12 17:40:33.036298 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 12 17:40:33.036308 kernel: Booting paravirtualized kernel on KVM
Sep 12 17:40:33.036316 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:40:33.036324 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 12 17:40:33.036331 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 12 17:40:33.036339 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 12 17:40:33.036346 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 12 17:40:33.036353 kernel: kvm-guest: PV spinlocks enabled
Sep 12 17:40:33.036370 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 17:40:33.036380 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:40:33.036393 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:40:33.036401 kernel: random: crng init done
Sep 12 17:40:33.036409 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:40:33.036416 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:40:33.036424 kernel: Fallback order for Node 0: 0
Sep 12 17:40:33.036431 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Sep 12 17:40:33.036438 kernel: Policy zone: DMA32
Sep 12 17:40:33.036446 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:40:33.036456 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 171128K reserved, 0K cma-reserved)
Sep 12 17:40:33.036464 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 17:40:33.036471 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 12 17:40:33.036479 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 17:40:33.036486 kernel: Dynamic Preempt: voluntary
Sep 12 17:40:33.036503 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:40:33.036517 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:40:33.036525 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 17:40:33.036532 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:40:33.036545 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:40:33.036553 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:40:33.036564 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:40:33.036583 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 17:40:33.036598 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 12 17:40:33.036614 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:40:33.036628 kernel: Console: colour dummy device 80x25
Sep 12 17:40:33.036643 kernel: printk: console [ttyS0] enabled
Sep 12 17:40:33.036665 kernel: ACPI: Core revision 20230628
Sep 12 17:40:33.036679 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 17:40:33.036694 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:40:33.036709 kernel: x2apic enabled
Sep 12 17:40:33.036724 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 17:40:33.036739 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 12 17:40:33.036754 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 12 17:40:33.036762 kernel: kvm-guest: setup PV IPIs
Sep 12 17:40:33.036770 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 17:40:33.036780 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 12 17:40:33.036788 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 12 17:40:33.036796 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 17:40:33.036804 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 12 17:40:33.036811 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 12 17:40:33.036819 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:40:33.036827 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 17:40:33.036835 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 17:40:33.036843 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 12 17:40:33.036856 kernel: active return thunk: retbleed_return_thunk
Sep 12 17:40:33.036864 kernel: RETBleed: Mitigation: untrained return thunk
Sep 12 17:40:33.036872 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 17:40:33.036880 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 17:40:33.036891 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 12 17:40:33.036899 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 12 17:40:33.036907 kernel: active return thunk: srso_return_thunk
Sep 12 17:40:33.036915 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 12 17:40:33.036926 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:40:33.036934 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:40:33.036949 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:40:33.036957 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:40:33.036965 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 12 17:40:33.036973 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:40:33.036981 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:40:33.036989 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:40:33.036996 kernel: landlock: Up and running.
Sep 12 17:40:33.037059 kernel: SELinux: Initializing.
Sep 12 17:40:33.037068 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:40:33.037076 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:40:33.037084 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 12 17:40:33.037100 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:40:33.037110 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:40:33.037118 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:40:33.037126 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 12 17:40:33.037133 kernel: ... version: 0
Sep 12 17:40:33.037145 kernel: ... bit width: 48
Sep 12 17:40:33.037153 kernel: ... generic registers: 6
Sep 12 17:40:33.037161 kernel: ... value mask: 0000ffffffffffff
Sep 12 17:40:33.037169 kernel: ... max period: 00007fffffffffff
Sep 12 17:40:33.037176 kernel: ... fixed-purpose events: 0
Sep 12 17:40:33.037184 kernel: ... event mask: 000000000000003f
Sep 12 17:40:33.037211 kernel: signal: max sigframe size: 1776
Sep 12 17:40:33.037218 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:40:33.037226 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:40:33.037238 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:40:33.037245 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:40:33.037253 kernel: .... node #0, CPUs: #1 #2 #3
Sep 12 17:40:33.037261 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 17:40:33.037269 kernel: smpboot: Max logical packages: 1
Sep 12 17:40:33.037276 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 12 17:40:33.037284 kernel: devtmpfs: initialized
Sep 12 17:40:33.037292 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:40:33.037299 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 12 17:40:33.037310 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 12 17:40:33.037318 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Sep 12 17:40:33.037326 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 12 17:40:33.037334 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 12 17:40:33.037342 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:40:33.037350 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 17:40:33.037358 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:40:33.037378 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:40:33.037386 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:40:33.037397 kernel: audit: type=2000 audit(1757698831.568:1): state=initialized audit_enabled=0 res=1
Sep 12 17:40:33.037405 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:40:33.037413 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:40:33.037420 kernel: cpuidle: using governor menu
Sep 12 17:40:33.037428 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:40:33.037436 kernel: dca service started, version 1.12.1
Sep 12 17:40:33.037444 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 12 17:40:33.037451 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 12 17:40:33.037459 kernel: PCI: Using configuration type 1 for base access
Sep 12 17:40:33.037470 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:40:33.037478 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:40:33.037486 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:40:33.037493 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:40:33.037501 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:40:33.037509 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:40:33.037517 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:40:33.037524 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:40:33.037532 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:40:33.037545 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 17:40:33.037554 kernel: ACPI: Interpreter enabled
Sep 12 17:40:33.037564 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 12 17:40:33.037574 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:40:33.037584 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:40:33.037593 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 17:40:33.037601 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 12 17:40:33.037609 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:40:33.037921 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:40:33.038093 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 12 17:40:33.038299 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 12 17:40:33.038318 kernel: PCI host bridge to bus 0000:00
Sep 12 17:40:33.038585 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 17:40:33.038827 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 17:40:33.038990 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 17:40:33.039150 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 12 17:40:33.039337 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 12 17:40:33.039495 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Sep 12 17:40:33.039625 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:40:33.039791 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 12 17:40:33.039991 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 12 17:40:33.040167 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Sep 12 17:40:33.040386 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Sep 12 17:40:33.040548 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 12 17:40:33.040682 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Sep 12 17:40:33.040843 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 17:40:33.041031 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 12 17:40:33.041201 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Sep 12 17:40:33.041345 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Sep 12 17:40:33.041504 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Sep 12 17:40:33.041730 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 12 17:40:33.041907 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Sep 12 17:40:33.042065 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Sep 12 17:40:33.042252 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Sep 12 17:40:33.042518 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 12 17:40:33.042821 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Sep 12 17:40:33.042991 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Sep 12 17:40:33.043156 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Sep 12 17:40:33.044631 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Sep 12 17:40:33.044814 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 12 17:40:33.044949 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 12 17:40:33.045123 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 12 17:40:33.045317 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Sep 12 17:40:33.045488 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Sep 12 17:40:33.045657 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 12 17:40:33.045817 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Sep 12 17:40:33.045836 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 17:40:33.045847 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 17:40:33.045858 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 17:40:33.045877 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 17:40:33.045888 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 12 17:40:33.045898 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 12 17:40:33.045909 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 12 17:40:33.045920 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 12 17:40:33.045931 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 12 17:40:33.045942 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 12 17:40:33.045953 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 12 17:40:33.045964 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 12 17:40:33.045979 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 12 17:40:33.045987 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 12 17:40:33.045994 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 12 17:40:33.046002 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 12 17:40:33.046010 kernel: iommu: Default domain type: Translated
Sep 12 17:40:33.046018 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:40:33.046026 kernel: efivars: Registered efivars operations
Sep 12 17:40:33.046034 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:40:33.046042 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 17:40:33.046053 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 12 17:40:33.046060 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Sep 12 17:40:33.046068 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Sep 12 17:40:33.046076 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Sep 12 17:40:33.046339 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 12 17:40:33.046485 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 12 17:40:33.046616 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 17:40:33.046626 kernel: vgaarb: loaded
Sep 12 17:40:33.046635 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 17:40:33.046648 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 17:40:33.046657 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 17:40:33.046664 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:40:33.046673 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:40:33.046681 kernel: pnp: PnP ACPI init
Sep 12 17:40:33.046870 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 12 17:40:33.046883 kernel: pnp: PnP ACPI: found 6 devices
Sep 12 17:40:33.046892 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:40:33.046903 kernel: NET: Registered PF_INET protocol family
Sep 12 17:40:33.046911 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:40:33.046919 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 17:40:33.046928 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:40:33.046936 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:40:33.046944 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 17:40:33.046952 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 17:40:33.046960 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:40:33.046968 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:40:33.046979 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:40:33.046987 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:40:33.047154 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Sep 12 17:40:33.047311 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Sep 12 17:40:33.047451 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 17:40:33.047599 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 17:40:33.047733 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 17:40:33.047852 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 12 17:40:33.047977 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 12 17:40:33.048097 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Sep 12 17:40:33.048108 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:40:33.048116 kernel: Initialise system trusted keyrings
Sep 12 17:40:33.048124 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 17:40:33.048132 kernel: Key type asymmetric registered
Sep 12 17:40:33.048140 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:40:33.048148 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 12 17:40:33.048156 kernel: io scheduler mq-deadline registered
Sep 12 17:40:33.048168 kernel: io scheduler kyber registered
Sep 12 17:40:33.048175 kernel: io scheduler bfq registered
Sep 12 17:40:33.048236 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 17:40:33.048246 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 12 17:40:33.048255 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 12 17:40:33.048263 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 12 17:40:33.048271 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:40:33.048279 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:40:33.048287 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 17:40:33.048298 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 17:40:33.048306 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 17:40:33.048314 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 17:40:33.048481 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 12 17:40:33.048608 kernel: rtc_cmos 00:04: registered as rtc0
Sep 12 17:40:33.048762 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T17:40:32 UTC (1757698832)
Sep 12 17:40:33.048887 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 12 17:40:33.049016 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 12 17:40:33.049030 kernel: efifb: probing for efifb
Sep 12 17:40:33.049038 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Sep 12 17:40:33.049045 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Sep 12 17:40:33.049053 kernel: efifb: scrolling: redraw
Sep 12 17:40:33.049061 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Sep 12 17:40:33.049069 kernel: Console: switching to colour frame buffer device 100x37
Sep 12 17:40:33.049097 kernel: fb0: EFI VGA frame buffer device
Sep 12 17:40:33.049108 kernel: pstore: Using crash dump compression: deflate
Sep 12 17:40:33.049116 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 12 17:40:33.049127 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:40:33.049135 kernel: Segment Routing with IPv6
Sep 12 17:40:33.049143 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:40:33.049151 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:40:33.049159 kernel: Key type dns_resolver registered
Sep 12 17:40:33.049167 kernel: IPI shorthand broadcast: enabled
Sep 12 17:40:33.049176 kernel: sched_clock: Marking stable (1503005453, 139518722)->(1679433367, -36909192)
Sep 12 17:40:33.049199 kernel: registered taskstats version 1
Sep 12 17:40:33.049208 kernel: Loading compiled-in X.509 certificates
Sep 12 17:40:33.049220 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9'
Sep 12 17:40:33.049228 kernel: Key type .fscrypt registered
Sep 12 17:40:33.049236 kernel: Key type fscrypt-provisioning registered
Sep 12 17:40:33.049244 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:40:33.049252 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:40:33.049261 kernel: ima: No architecture policies found
Sep 12 17:40:33.049269 kernel: clk: Disabling unused clocks
Sep 12 17:40:33.049277 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 12 17:40:33.049288 kernel: Write protecting the kernel read-only data: 36864k
Sep 12 17:40:33.049296 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 12 17:40:33.049304 kernel: Run /init as init process
Sep 12 17:40:33.049312 kernel: with arguments:
Sep 12 17:40:33.049320 kernel: /init
Sep 12 17:40:33.049328 kernel: with environment:
Sep 12 17:40:33.049335 kernel: HOME=/
Sep 12 17:40:33.049343 kernel: TERM=linux
Sep 12 17:40:33.049351 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:40:33.049375 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:40:33.049386 systemd[1]: Detected virtualization kvm.
Sep 12 17:40:33.049394 systemd[1]: Detected architecture x86-64.
Sep 12 17:40:33.049404 systemd[1]: Running in initrd.
Sep 12 17:40:33.049418 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:40:33.049426 systemd[1]: Hostname set to .
Sep 12 17:40:33.049436 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:40:33.049447 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:40:33.049458 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:40:33.049470 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:40:33.049483 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:40:33.049495 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:40:33.049509 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:40:33.049518 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:40:33.049528 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:40:33.049537 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:40:33.049546 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:40:33.049555 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:40:33.049563 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:40:33.049575 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:40:33.049585 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:40:33.049597 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:40:33.049608 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:40:33.049620 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:40:33.049631 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:40:33.049639 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:40:33.049648 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:40:33.049656 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:40:33.049668 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:40:33.049677 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:40:33.049686 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:40:33.049694 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:40:33.049703 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:40:33.049711 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:40:33.049720 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:40:33.049729 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:40:33.049740 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:40:33.049749 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:40:33.049757 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:40:33.049766 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:40:33.049806 systemd-journald[193]: Collecting audit messages is disabled.
Sep 12 17:40:33.049832 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:40:33.049841 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:40:33.049850 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:40:33.049858 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:40:33.049870 systemd-journald[193]: Journal started
Sep 12 17:40:33.049889 systemd-journald[193]: Runtime Journal (/run/log/journal/f5cc9b3cb0df47aa8b1d5a4a44a92f6b) is 6.0M, max 48.3M, 42.2M free.
Sep 12 17:40:33.037215 systemd-modules-load[194]: Inserted module 'overlay'
Sep 12 17:40:33.054265 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:40:33.067057 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:40:33.072382 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:40:33.079235 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:40:33.081518 systemd-modules-load[194]: Inserted module 'br_netfilter'
Sep 12 17:40:33.084320 kernel: Bridge firewalling registered
Sep 12 17:40:33.083073 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:40:33.086511 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:40:33.089951 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:40:33.106631 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:40:33.110596 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:40:33.111242 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:40:33.128177 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:40:33.134255 dracut-cmdline[224]: dracut-dracut-053
Sep 12 17:40:33.134602 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:40:33.138566 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:40:33.179680 systemd-resolved[238]: Positive Trust Anchors:
Sep 12 17:40:33.179712 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:40:33.179749 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:40:33.183492 systemd-resolved[238]: Defaulting to hostname 'linux'.
Sep 12 17:40:33.184992 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:40:33.191517 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:40:33.258268 kernel: SCSI subsystem initialized
Sep 12 17:40:33.272268 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:40:33.299719 kernel: iscsi: registered transport (tcp)
Sep 12 17:40:33.325233 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:40:33.325324 kernel: QLogic iSCSI HBA Driver
Sep 12 17:40:33.410212 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:40:33.419444 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:40:33.454730 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:40:33.454820 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:40:33.455953 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:40:33.507237 kernel: raid6: avx2x4 gen() 21594 MB/s
Sep 12 17:40:33.524236 kernel: raid6: avx2x2 gen() 22820 MB/s
Sep 12 17:40:33.541578 kernel: raid6: avx2x1 gen() 21578 MB/s
Sep 12 17:40:33.541675 kernel: raid6: using algorithm avx2x2 gen() 22820 MB/s
Sep 12 17:40:33.559391 kernel: raid6: .... xor() 16222 MB/s, rmw enabled
Sep 12 17:40:33.559488 kernel: raid6: using avx2x2 recovery algorithm
Sep 12 17:40:33.586315 kernel: xor: automatically using best checksumming function avx
Sep 12 17:40:33.810355 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:40:33.831580 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:40:33.844670 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:40:33.865952 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Sep 12 17:40:33.874716 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:40:33.882443 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:40:33.905116 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation
Sep 12 17:40:33.953226 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:40:33.961546 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:40:34.045923 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:40:34.054444 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:40:34.072209 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:40:34.074496 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:40:34.076619 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:40:34.077990 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:40:34.087473 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:40:34.094217 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 12 17:40:34.100540 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 12 17:40:34.108584 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:40:34.108633 kernel: GPT:9289727 != 19775487
Sep 12 17:40:34.108652 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:40:34.108662 kernel: GPT:9289727 != 19775487
Sep 12 17:40:34.109662 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:40:34.109708 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:40:34.109872 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:40:34.123342 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 17:40:34.124217 kernel: libata version 3.00 loaded.
Sep 12 17:40:34.135447 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 12 17:40:34.135495 kernel: AES CTR mode by8 optimization enabled
Sep 12 17:40:34.146413 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:40:34.148035 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:40:34.152997 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:40:34.164394 kernel: ahci 0000:00:1f.2: version 3.0
Sep 12 17:40:34.164692 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 12 17:40:34.164718 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 12 17:40:34.164921 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 12 17:40:34.166783 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:40:34.168985 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (459)
Sep 12 17:40:34.169001 kernel: scsi host0: ahci
Sep 12 17:40:34.170470 kernel: scsi host1: ahci
Sep 12 17:40:34.170685 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (458)
Sep 12 17:40:34.170700 kernel: scsi host2: ahci
Sep 12 17:40:34.170453 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:40:34.175079 kernel: scsi host3: ahci
Sep 12 17:40:34.175397 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:40:34.178353 kernel: scsi host4: ahci
Sep 12 17:40:34.182747 kernel: scsi host5: ahci
Sep 12 17:40:34.182997 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Sep 12 17:40:34.183014 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Sep 12 17:40:34.184769 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Sep 12 17:40:34.184797 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Sep 12 17:40:34.187532 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Sep 12 17:40:34.187561 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Sep 12 17:40:34.191568 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:40:34.222034 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 17:40:34.236394 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 17:40:34.248232 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 17:40:34.257650 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 17:40:34.260735 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 17:40:34.274775 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:40:34.277582 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:40:34.277690 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:40:34.280134 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:40:34.286392 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:40:34.292214 disk-uuid[554]: Primary Header is updated.
Sep 12 17:40:34.292214 disk-uuid[554]: Secondary Entries is updated.
Sep 12 17:40:34.292214 disk-uuid[554]: Secondary Header is updated.
Sep 12 17:40:34.298858 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:40:34.317014 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:40:34.338152 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:40:34.372492 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:40:34.498295 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 12 17:40:34.498398 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 12 17:40:34.498412 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 12 17:40:34.500239 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 12 17:40:34.500361 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 12 17:40:34.501540 kernel: ata3.00: applying bridge limits
Sep 12 17:40:34.501654 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 12 17:40:34.503556 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 12 17:40:34.503593 kernel: ata3.00: configured for UDMA/100
Sep 12 17:40:34.504224 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 12 17:40:34.558651 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 12 17:40:34.559105 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 12 17:40:34.577246 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 12 17:40:35.310248 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:40:35.311428 disk-uuid[556]: The operation has completed successfully.
Sep 12 17:40:35.350946 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:40:35.351087 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:40:35.377588 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:40:35.382327 sh[598]: Success
Sep 12 17:40:35.399217 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 12 17:40:35.443527 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:40:35.466721 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:40:35.470786 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:40:35.483269 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19
Sep 12 17:40:35.483350 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:40:35.483367 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:40:35.485624 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:40:35.485649 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:40:35.491305 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:40:35.493244 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:40:35.506409 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:40:35.508555 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:40:35.518501 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:40:35.518559 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:40:35.518573 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:40:35.522220 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:40:35.534488 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 17:40:35.536068 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:40:35.553074 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:40:35.563458 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:40:35.646471 ignition[690]: Ignition 2.19.0
Sep 12 17:40:35.646508 ignition[690]: Stage: fetch-offline
Sep 12 17:40:35.646568 ignition[690]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:40:35.646580 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:40:35.646742 ignition[690]: parsed url from cmdline: ""
Sep 12 17:40:35.646748 ignition[690]: no config URL provided
Sep 12 17:40:35.646761 ignition[690]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:40:35.646777 ignition[690]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:40:35.646820 ignition[690]: op(1): [started] loading QEMU firmware config module
Sep 12 17:40:35.646830 ignition[690]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 12 17:40:35.663970 ignition[690]: op(1): [finished] loading QEMU firmware config module
Sep 12 17:40:35.682354 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:40:35.694394 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:40:35.711425 ignition[690]: parsing config with SHA512: 1451081a2b38c8181e59b0f7575474b7b3b908f7d5d1001fddfaf51c80c8e2614b4f4df6fa849387a21fc0c4f502218720e24906df59daada0531566e5732e83
Sep 12 17:40:35.715684 unknown[690]: fetched base config from "system"
Sep 12 17:40:35.715937 unknown[690]: fetched user config from "qemu"
Sep 12 17:40:35.716406 ignition[690]: fetch-offline: fetch-offline passed
Sep 12 17:40:35.716476 ignition[690]: Ignition finished successfully
Sep 12 17:40:35.722417 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:40:35.725340 systemd-networkd[787]: lo: Link UP
Sep 12 17:40:35.725351 systemd-networkd[787]: lo: Gained carrier
Sep 12 17:40:35.727334 systemd-networkd[787]: Enumeration completed
Sep 12 17:40:35.727455 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:40:35.727840 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:40:35.727845 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:40:35.728914 systemd-networkd[787]: eth0: Link UP
Sep 12 17:40:35.728919 systemd-networkd[787]: eth0: Gained carrier
Sep 12 17:40:35.728927 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:40:35.737864 systemd[1]: Reached target network.target - Network.
Sep 12 17:40:35.743039 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 12 17:40:35.752528 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 17:40:35.756581 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:40:35.778569 ignition[790]: Ignition 2.19.0
Sep 12 17:40:35.778588 ignition[790]: Stage: kargs
Sep 12 17:40:35.778846 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:40:35.778864 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:40:35.780141 ignition[790]: kargs: kargs passed
Sep 12 17:40:35.780233 ignition[790]: Ignition finished successfully
Sep 12 17:40:35.785503 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:40:35.800722 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:40:35.815125 ignition[800]: Ignition 2.19.0
Sep 12 17:40:35.815140 ignition[800]: Stage: disks
Sep 12 17:40:35.815394 ignition[800]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:40:35.815407 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:40:35.819146 ignition[800]: disks: disks passed
Sep 12 17:40:35.819814 ignition[800]: Ignition finished successfully
Sep 12 17:40:35.822821 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:40:35.823180 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:40:35.824833 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:40:35.830019 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:40:35.832174 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:40:35.832308 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:40:35.852588 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:40:35.930017 systemd-fsck[811]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 12 17:40:35.940560 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:40:35.946462 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:40:36.067700 kernel: EXT4-fs (vda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none.
Sep 12 17:40:36.068473 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:40:36.069394 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:40:36.080445 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:40:36.082911 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:40:36.084232 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 17:40:36.084315 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:40:36.098732 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (819)
Sep 12 17:40:36.098773 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:40:36.098792 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:40:36.098808 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:40:36.084342 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:40:36.102792 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:40:36.092625 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:40:36.100204 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:40:36.104983 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:40:36.147940 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:40:36.158163 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:40:36.169264 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:40:36.176512 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:40:36.373340 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:40:36.384385 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:40:36.393755 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:40:36.416307 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:40:36.478323 ignition[932]: INFO : Ignition 2.19.0
Sep 12 17:40:36.480219 ignition[932]: INFO : Stage: mount
Sep 12 17:40:36.483409 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:40:36.483409 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:40:36.483130 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:40:36.488615 ignition[932]: INFO : mount: mount passed
Sep 12 17:40:36.488615 ignition[932]: INFO : Ignition finished successfully
Sep 12 17:40:36.494359 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:40:36.497062 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:40:36.512475 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:40:36.568168 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:40:36.580574 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (946)
Sep 12 17:40:36.582233 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:40:36.582284 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:40:36.583831 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:40:36.591278 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:40:36.596223 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:40:36.649460 ignition[963]: INFO : Ignition 2.19.0
Sep 12 17:40:36.649460 ignition[963]: INFO : Stage: files
Sep 12 17:40:36.653532 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:40:36.653532 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:40:36.653532 ignition[963]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:40:36.653532 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:40:36.653532 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:40:36.663803 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:40:36.663803 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:40:36.663803 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:40:36.663803 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 12 17:40:36.661030 unknown[963]: wrote ssh authorized keys file for user: core
Sep 12 17:40:36.675220 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 12 17:40:36.730350 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:40:37.117650 systemd-networkd[787]: eth0: Gained IPv6LL
Sep 12 17:40:37.222411 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 12 17:40:37.222411 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 17:40:37.222411 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 12 17:40:37.348097 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 12 17:40:37.708360 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 17:40:37.708360 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:40:37.714414 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:40:37.714414 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:40:37.714414 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:40:37.714414 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:40:37.714414 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:40:37.714414 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:40:37.714414 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:40:37.714414 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:40:37.714414 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:40:37.714414 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 17:40:37.714414 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 17:40:37.714414 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 17:40:37.714414 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 12 17:40:37.987995 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 12 17:40:38.900732 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 17:40:38.900732 ignition[963]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 12 17:40:38.905430 ignition[963]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:40:38.907444 ignition[963]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:40:38.907444 ignition[963]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 12 17:40:38.907444 ignition[963]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 12 17:40:38.907444 ignition[963]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 17:40:38.907444 ignition[963]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 17:40:38.907444 ignition[963]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 12 17:40:38.907444 ignition[963]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 12 17:40:39.097244 ignition[963]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 17:40:39.111527 ignition[963]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 17:40:39.117866 ignition[963]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 12 17:40:39.117866 ignition[963]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:40:39.131150 ignition[963]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:40:39.131150 ignition[963]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:40:39.131150 ignition[963]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:40:39.131150 ignition[963]: INFO : files: files passed
Sep 12 17:40:39.131150 ignition[963]: INFO : Ignition finished successfully
Sep 12 17:40:39.132967 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:40:39.156242 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:40:39.159816 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:40:39.175006 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:40:39.175224 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
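The files-stage ops above (ssh keys for user "core", file downloads into /sysroot/opt, and unit presets such as enabling "prepare-helm.service") are the kind of operations Ignition derives from a provisioning config, typically written as Butane YAML and transpiled to Ignition JSON. A minimal hypothetical sketch that would produce similar ops follows; it is an illustration under assumed contents, not the actual config used for this boot, and the ssh key and unit body are placeholders:

```yaml
# Hypothetical Butane sketch (transpile with `butane` to Ignition JSON).
# Key material and unit contents are placeholders, not taken from this log.
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...   # placeholder key
storage:
  files:
    - path: /opt/helm-v3.17.0-linux-amd64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz
systemd:
  units:
    - name: prepare-helm.service
      enabled: true             # corresponds to op(12) "setting preset to enabled"
      contents: |
        [Unit]
        Description=Unpack helm to /opt/bin
        [Service]
        Type=oneshot
        ExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.17.0-linux-amd64.tar.gz
        [Install]
        WantedBy=multi-user.target
```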
Sep 12 17:40:39.184229 initrd-setup-root-after-ignition[991]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 12 17:40:39.190581 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:40:39.190581 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:40:39.194510 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:40:39.196024 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:40:39.198610 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:40:39.207570 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:40:39.256648 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:40:39.256836 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:40:39.258780 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:40:39.263327 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:40:39.263533 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:40:39.267536 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:40:39.304063 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:40:39.314951 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:40:39.331059 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:40:39.333281 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:40:39.335141 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:40:39.339510 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:40:39.339700 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:40:39.342390 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:40:39.344461 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:40:39.347324 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:40:39.348472 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:40:39.355220 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:40:39.355426 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:40:39.359894 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:40:39.361700 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:40:39.363799 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:40:39.367424 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:40:39.368830 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:40:39.369049 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:40:39.373770 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:40:39.374715 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:40:39.375142 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:40:39.375550 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:40:39.375948 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:40:39.376138 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:40:39.384500 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:40:39.385617 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:40:39.387944 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:40:39.390401 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:40:39.394380 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:40:39.396279 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:40:39.398803 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:40:39.403744 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:40:39.403898 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:40:39.405972 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:40:39.406105 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:40:39.408123 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:40:39.408359 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:40:39.411350 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:40:39.411512 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:40:39.426643 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:40:39.426839 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:40:39.427866 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:40:39.431989 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:40:39.435209 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:40:39.437113 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:40:39.440132 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:40:39.440388 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:40:39.444313 ignition[1017]: INFO : Ignition 2.19.0
Sep 12 17:40:39.444313 ignition[1017]: INFO : Stage: umount
Sep 12 17:40:39.446923 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:40:39.446923 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:40:39.446923 ignition[1017]: INFO : umount: umount passed
Sep 12 17:40:39.446923 ignition[1017]: INFO : Ignition finished successfully
Sep 12 17:40:39.454231 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:40:39.455691 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:40:39.467653 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:40:39.468924 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:40:39.477536 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:40:39.479223 systemd[1]: Stopped target network.target - Network.
Sep 12 17:40:39.481155 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:40:39.481250 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:40:39.483582 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:40:39.483644 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:40:39.492686 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:40:39.493817 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:40:39.496544 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:40:39.497040 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:40:39.502810 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:40:39.506252 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:40:39.509242 systemd-networkd[787]: eth0: DHCPv6 lease lost
Sep 12 17:40:39.512169 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:40:39.513388 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:40:39.519714 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:40:39.519939 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:40:39.526972 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:40:39.527062 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:40:39.538760 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:40:39.539778 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:40:39.539878 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:40:39.541273 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:40:39.541346 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:40:39.541752 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:40:39.541816 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:40:39.542114 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:40:39.542203 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:40:39.542843 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:40:39.558059 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:40:39.558443 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:40:39.567857 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:40:39.568032 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:40:39.568767 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:40:39.568854 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:40:39.572818 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:40:39.572942 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:40:39.611719 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:40:39.611823 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:40:39.617466 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:40:39.617567 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:40:39.628410 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:40:39.629583 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:40:39.629664 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:40:39.629974 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:40:39.630024 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:40:39.637016 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:40:39.637170 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:40:39.645310 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:40:39.645474 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:40:39.719638 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:40:39.719813 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:40:39.723088 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:40:39.724746 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:40:39.724853 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:40:39.735569 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:40:39.746322 systemd[1]: Switching root.
Sep 12 17:40:39.787480 systemd-journald[193]: Journal stopped
Sep 12 17:40:41.968030 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:40:41.968361 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:40:41.968409 kernel: SELinux: policy capability open_perms=1
Sep 12 17:40:41.968471 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:40:41.968492 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:40:41.968520 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:40:41.968551 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:40:41.968571 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:40:41.968588 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:40:41.968610 kernel: audit: type=1403 audit(1757698840.801:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:40:41.968627 systemd[1]: Successfully loaded SELinux policy in 50.875ms.
Sep 12 17:40:41.968666 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.661ms.
Sep 12 17:40:41.968686 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:40:41.968701 systemd[1]: Detected virtualization kvm.
Sep 12 17:40:41.968729 systemd[1]: Detected architecture x86-64.
Sep 12 17:40:41.968744 systemd[1]: Detected first boot.
Sep 12 17:40:41.968759 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:40:41.968774 zram_generator::config[1061]: No configuration found.
Sep 12 17:40:41.968795 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:40:41.968810 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 17:40:41.968826 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 17:40:41.968841 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:40:41.968857 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:40:41.968872 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:40:41.968888 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:40:41.968904 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:40:41.968922 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:40:41.968938 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:40:41.968953 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:40:41.968968 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:40:41.968984 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:40:41.968999 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:40:41.969014 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:40:41.969029 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:40:41.969048 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:40:41.969083 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:40:41.969108 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 17:40:41.969135 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:40:41.969155 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 17:40:41.969215 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 17:40:41.969233 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:40:41.969248 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:40:41.969268 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:40:41.969290 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:40:41.969305 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:40:41.969320 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:40:41.969355 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:40:41.969391 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:40:41.969409 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:40:41.969431 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:40:41.969453 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:40:41.969484 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:40:41.969513 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:40:41.969529 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:40:41.969543 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:40:41.969558 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:40:41.969585 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:40:41.969600 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:40:41.969615 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:40:41.969631 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:40:41.969650 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:40:41.969665 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:40:41.969680 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:40:41.969694 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:40:41.969710 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:40:41.969724 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:40:41.969739 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:40:41.969754 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:40:41.969771 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:40:41.969785 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:40:41.969889 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:40:41.969924 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 17:40:41.969942 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 17:40:41.969957 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 17:40:41.969973 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 17:40:41.970100 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:40:41.970139 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:40:41.970177 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:40:41.970212 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:40:41.970229 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:40:41.970246 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 17:40:41.970262 systemd[1]: Stopped verity-setup.service.
Sep 12 17:40:41.970282 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:40:41.970306 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:40:41.970324 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:40:41.970354 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:40:41.970375 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:40:41.970415 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:40:41.970431 kernel: ACPI: bus type drm_connector registered
Sep 12 17:40:41.970446 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:40:41.970473 kernel: fuse: init (API version 7.39)
Sep 12 17:40:41.970489 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:40:41.970505 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:40:41.970525 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:40:41.970540 kernel: loop: module loaded
Sep 12 17:40:41.970556 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:40:41.970594 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:40:41.970608 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:40:41.970636 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:40:41.970664 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:40:41.970705 systemd-journald[1135]: Collecting audit messages is disabled.
Sep 12 17:40:41.970741 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:40:41.970755 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:40:41.970780 systemd-journald[1135]: Journal started
Sep 12 17:40:41.970803 systemd-journald[1135]: Runtime Journal (/run/log/journal/f5cc9b3cb0df47aa8b1d5a4a44a92f6b) is 6.0M, max 48.3M, 42.2M free.
Sep 12 17:40:41.592879 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:40:41.632543 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 17:40:41.633322 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 17:40:41.975229 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:40:41.977357 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:40:41.977647 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:40:41.979434 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:40:41.981431 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:40:41.983387 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:40:42.037622 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:40:42.037926 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:40:42.045973 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:40:42.055465 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:40:42.061102 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:40:42.062659 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:40:42.062707 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:40:42.065425 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 12 17:40:42.069843 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 17:40:42.076941 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:40:42.079100 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:40:42.082801 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:40:42.090643 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:40:42.092434 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:40:42.094018 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:40:42.095848 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:40:42.102376 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:40:42.117499 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 17:40:42.186475 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 17:40:42.193000 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:40:42.194732 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:40:42.196569 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 17:40:42.207072 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 17:40:42.215308 systemd-journald[1135]: Time spent on flushing to /var/log/journal/f5cc9b3cb0df47aa8b1d5a4a44a92f6b is 31.777ms for 1001 entries.
Sep 12 17:40:42.215308 systemd-journald[1135]: System Journal (/var/log/journal/f5cc9b3cb0df47aa8b1d5a4a44a92f6b) is 8.0M, max 195.6M, 187.6M free.
Sep 12 17:40:42.285807 systemd-journald[1135]: Received client request to flush runtime journal.
Sep 12 17:40:42.285895 kernel: loop0: detected capacity change from 0 to 142488
Sep 12 17:40:42.216972 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 17:40:42.234486 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 12 17:40:42.237043 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:40:42.245354 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 17:40:42.402795 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 17:40:42.405858 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:40:42.423869 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 17:40:42.424930 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 12 17:40:42.431389 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 17:40:42.434149 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 12 17:40:42.447714 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 17:40:42.464177 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:40:42.498262 kernel: loop1: detected capacity change from 0 to 224512
Sep 12 17:40:42.583221 kernel: loop2: detected capacity change from 0 to 140768
Sep 12 17:40:42.595417 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Sep 12 17:40:42.595448 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Sep 12 17:40:42.610671 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:40:42.647234 kernel: loop3: detected capacity change from 0 to 142488
Sep 12 17:40:42.722819 kernel: loop4: detected capacity change from 0 to 224512
Sep 12 17:40:42.745501 kernel: loop5: detected capacity change from 0 to 140768
Sep 12 17:40:42.782294 (sd-merge)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 12 17:40:42.783145 (sd-merge)[1199]: Merged extensions into '/usr'.
Sep 12 17:40:42.790007 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 17:40:42.790026 systemd[1]: Reloading...
Sep 12 17:40:42.948233 zram_generator::config[1228]: No configuration found.
Sep 12 17:40:43.157433 ldconfig[1170]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 17:40:43.186549 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:40:43.243744 systemd[1]: Reloading finished in 453 ms.
Sep 12 17:40:43.279847 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 17:40:43.282316 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 17:40:43.377516 systemd[1]: Starting ensure-sysext.service...
Sep 12 17:40:43.380981 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:40:43.394364 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)...
Sep 12 17:40:43.394394 systemd[1]: Reloading...
Sep 12 17:40:43.433085 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 17:40:43.433656 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 17:40:43.435815 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 17:40:43.438800 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Sep 12 17:40:43.439051 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Sep 12 17:40:43.459230 zram_generator::config[1292]: No configuration found.
Sep 12 17:40:43.618288 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:40:43.618302 systemd-tmpfiles[1263]: Skipping /boot
Sep 12 17:40:43.632926 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:40:43.632942 systemd-tmpfiles[1263]: Skipping /boot
Sep 12 17:40:43.820951 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:40:43.882329 systemd[1]: Reloading finished in 487 ms.
Sep 12 17:40:43.903863 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 17:40:43.919425 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:40:43.965959 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 17:40:43.991539 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 17:40:43.999244 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 17:40:44.008677 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:40:44.022476 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:40:44.033161 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 17:40:44.051179 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:40:44.051748 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:40:44.064826 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:40:44.074157 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:40:44.078154 augenrules[1350]: No rules
Sep 12 17:40:44.081625 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:40:44.082886 systemd-udevd[1340]: Using default interface naming scheme 'v255'.
Sep 12 17:40:44.085825 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:40:44.090296 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 17:40:44.092047 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:40:44.096368 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 17:40:44.101290 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 17:40:44.106295 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:40:44.106642 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:40:44.109885 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:40:44.110594 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:40:44.114140 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:40:44.114712 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:40:44.130265 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 17:40:44.149750 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:40:44.149937 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:40:44.163091 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 17:40:44.165150 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:40:44.173390 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 17:40:44.179595 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:40:44.179924 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:40:44.188650 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:40:44.201635 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:40:44.220686 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:40:44.238623 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:40:44.240363 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:40:44.254686 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:40:44.256936 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:40:44.260256 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 17:40:44.263174 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 17:40:44.265511 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:40:44.266459 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:40:44.269130 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:40:44.273333 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:40:44.275527 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:40:44.275948 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:40:44.281341 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:40:44.281579 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:40:44.359873 systemd[1]: Finished ensure-sysext.service.
Sep 12 17:40:44.386125 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 17:40:44.392981 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:40:44.393926 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:40:44.403240 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1380)
Sep 12 17:40:44.408546 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 17:40:44.411452 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:40:44.571264 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 12 17:40:44.601859 kernel: ACPI: button: Power Button [PWRF]
Sep 12 17:40:44.599615 systemd-resolved[1338]: Positive Trust Anchors:
Sep 12 17:40:44.599640 systemd-resolved[1338]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:40:44.599701 systemd-resolved[1338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:40:44.607838 systemd-resolved[1338]: Defaulting to hostname 'linux'.
Sep 12 17:40:44.610532 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:40:44.614235 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:40:44.706670 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 12 17:40:44.707128 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 12 17:40:44.707349 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 12 17:40:44.710021 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 12 17:40:44.717699 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 17:40:44.719678 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 17:40:44.726352 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 12 17:40:44.734919 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 17:40:44.742587 systemd-networkd[1392]: lo: Link UP
Sep 12 17:40:44.742601 systemd-networkd[1392]: lo: Gained carrier
Sep 12 17:40:44.746570 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 17:40:44.761566 systemd-networkd[1392]: Enumeration completed
Sep 12 17:40:44.762246 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:40:44.762259 systemd-networkd[1392]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:40:44.764347 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:40:44.767095 systemd-networkd[1392]: eth0: Link UP
Sep 12 17:40:44.767096 systemd[1]: Reached target network.target - Network.
Sep 12 17:40:44.767108 systemd-networkd[1392]: eth0: Gained carrier
Sep 12 17:40:44.767136 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:40:44.779929 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 17:40:44.789470 systemd-networkd[1392]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 17:40:44.793633 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Sep 12 17:40:45.321293 systemd-timesyncd[1407]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 12 17:40:45.321374 systemd-timesyncd[1407]: Initial clock synchronization to Fri 2025-09-12 17:40:45.320878 UTC.
Sep 12 17:40:45.322765 systemd-resolved[1338]: Clock change detected. Flushing caches.
Sep 12 17:40:45.341858 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 17:40:45.335100 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:40:45.343353 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 17:40:45.362253 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:40:45.364925 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:40:45.369804 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:40:45.596370 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:40:45.645914 kernel: kvm_amd: TSC scaling supported
Sep 12 17:40:45.646031 kernel: kvm_amd: Nested Virtualization enabled
Sep 12 17:40:45.646051 kernel: kvm_amd: Nested Paging enabled
Sep 12 17:40:45.647170 kernel: kvm_amd: LBR virtualization supported
Sep 12 17:40:45.647243 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 12 17:40:45.647862 kernel: kvm_amd: Virtual GIF supported
Sep 12 17:40:45.763907 kernel: EDAC MC: Ver: 3.0.0
Sep 12 17:40:45.807186 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 12 17:40:45.829191 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 12 17:40:45.852511 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:40:45.892065 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 12 17:40:45.899679 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:40:45.901171 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:40:45.905976 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 17:40:45.912017 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 17:40:45.914107 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 17:40:45.916235 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 17:40:45.919185 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 17:40:45.924313 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 17:40:45.924518 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:40:45.928254 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:40:45.936335 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 17:40:45.941090 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 17:40:45.958976 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 17:40:45.963639 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 12 17:40:45.966479 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 17:40:45.968261 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:40:45.969551 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:40:45.970953 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:40:45.970998 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:40:45.976141 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 17:40:45.984026 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 17:40:45.992672 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:40:45.992960 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 17:40:46.008024 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 17:40:46.009672 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 17:40:46.013339 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 17:40:46.020154 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 17:40:46.024082 jq[1437]: false
Sep 12 17:40:46.024843 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 17:40:46.030035 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 17:40:46.037210 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 17:40:46.039117 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 17:40:46.040230 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 17:40:46.041252 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 17:40:46.046623 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 17:40:46.054689 dbus-daemon[1436]: [system] SELinux support is enabled
Sep 12 17:40:46.054979 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 17:40:46.055261 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 17:40:46.055468 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 17:40:46.060865 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 12 17:40:46.064441 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 17:40:46.067105 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 17:40:46.070863 extend-filesystems[1438]: Found loop3
Sep 12 17:40:46.071964 extend-filesystems[1438]: Found loop4
Sep 12 17:40:46.071964 extend-filesystems[1438]: Found loop5
Sep 12 17:40:46.071964 extend-filesystems[1438]: Found sr0
Sep 12 17:40:46.071964 extend-filesystems[1438]: Found vda
Sep 12 17:40:46.071964 extend-filesystems[1438]: Found vda1
Sep 12 17:40:46.071964 extend-filesystems[1438]: Found vda2
Sep 12 17:40:46.071964 extend-filesystems[1438]: Found vda3
Sep 12 17:40:46.071964 extend-filesystems[1438]: Found usr
Sep 12 17:40:46.071964 extend-filesystems[1438]: Found vda4
Sep 12 17:40:46.071964 extend-filesystems[1438]: Found vda6
Sep 12 17:40:46.071964 extend-filesystems[1438]: Found vda7
Sep 12 17:40:46.071964 extend-filesystems[1438]: Found vda9
Sep 12 17:40:46.071964 extend-filesystems[1438]: Checking size of /dev/vda9
Sep 12 17:40:46.080245 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 17:40:46.080563 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 17:40:46.096950 jq[1447]: true
Sep 12 17:40:46.101503 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 17:40:46.103654 update_engine[1445]: I20250912 17:40:46.103435  1445 main.cc:92] Flatcar Update Engine starting
Sep 12 17:40:46.110114 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 17:40:46.110171 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 17:40:46.111865 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 17:40:46.111903 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 17:40:46.115891 update_engine[1445]: I20250912 17:40:46.115275  1445 update_check_scheduler.cc:74] Next update check in 10m10s
Sep 12 17:40:46.115535 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 17:40:46.119470 tar[1455]: linux-amd64/LICENSE
Sep 12 17:40:46.119905 tar[1455]: linux-amd64/helm
Sep 12 17:40:46.166154 extend-filesystems[1438]: Resized partition /dev/vda9
Sep 12 17:40:46.223025 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1380)
Sep 12 17:40:46.223067 extend-filesystems[1473]: resize2fs 1.47.1 (20-May-2024)
Sep 12 17:40:46.224092 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 17:40:46.234221 jq[1465]: true
Sep 12 17:40:46.227612 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 12 17:40:46.227642 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 12 17:40:46.231036 systemd-logind[1444]: New seat seat0.
Sep 12 17:40:46.237728 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 17:40:46.244019 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 17:40:46.280932 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 12 17:40:46.291050 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 12 17:40:46.306476 systemd[1]: issuegen.service: Deactivated successfully.
Sep 12 17:40:46.307061 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 12 17:40:46.361608 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 12 17:40:46.367601 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 12 17:40:46.398992 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 12 17:40:46.416207 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 12 17:40:46.431420 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 12 17:40:46.434135 systemd[1]: Reached target getty.target - Login Prompts.
Sep 12 17:40:46.450639 locksmithd[1470]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 17:40:46.476301 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 12 17:40:46.577225 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 12 17:40:46.600041 systemd[1]: Started sshd@0-10.0.0.128:22-10.0.0.1:45928.service - OpenSSH per-connection server daemon (10.0.0.1:45928).
Sep 12 17:40:46.622811 extend-filesystems[1473]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 12 17:40:46.622811 extend-filesystems[1473]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 12 17:40:46.622811 extend-filesystems[1473]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 12 17:40:46.642377 extend-filesystems[1438]: Resized filesystem in /dev/vda9
Sep 12 17:40:46.625041 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 17:40:46.625378 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 17:40:46.653323 bash[1501]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 17:40:46.657214 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 17:40:46.661292 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 12 17:40:46.886527 sshd[1513]: Accepted publickey for core from 10.0.0.1 port 45928 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4
Sep 12 17:40:46.899320 sshd[1513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:40:46.911293 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 12 17:40:46.959256 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 12 17:40:46.964947 systemd-logind[1444]: New session 1 of user core.
Sep 12 17:40:46.999687 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 12 17:40:47.023158 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 12 17:40:47.034842 (systemd)[1525]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 12 17:40:47.118734 containerd[1460]: time="2025-09-12T17:40:47.118272693Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 12 17:40:47.165554 containerd[1460]: time="2025-09-12T17:40:47.165414819Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:40:47.170150 containerd[1460]: time="2025-09-12T17:40:47.169883330Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:40:47.170150 containerd[1460]: time="2025-09-12T17:40:47.169942090Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 12 17:40:47.170150 containerd[1460]: time="2025-09-12T17:40:47.169965905Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 12 17:40:47.170303 containerd[1460]: time="2025-09-12T17:40:47.170237374Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 12 17:40:47.170303 containerd[1460]: time="2025-09-12T17:40:47.170257282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 12 17:40:47.170413 containerd[1460]: time="2025-09-12T17:40:47.170346890Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:40:47.170413 containerd[1460]: time="2025-09-12T17:40:47.170368300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:40:47.170742 containerd[1460]: time="2025-09-12T17:40:47.170650639Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:40:47.170742 containerd[1460]: time="2025-09-12T17:40:47.170686537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 12 17:40:47.170742 containerd[1460]: time="2025-09-12T17:40:47.170740268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:40:47.170880 containerd[1460]: time="2025-09-12T17:40:47.170755366Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 12 17:40:47.171740 containerd[1460]: time="2025-09-12T17:40:47.170913262Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:40:47.171740 containerd[1460]: time="2025-09-12T17:40:47.171230517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:40:47.171740 containerd[1460]: time="2025-09-12T17:40:47.171378334Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:40:47.171740 containerd[1460]: time="2025-09-12T17:40:47.171412078Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 12 17:40:47.171740 containerd[1460]: time="2025-09-12T17:40:47.171550167Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 12 17:40:47.171740 containerd[1460]: time="2025-09-12T17:40:47.171617373Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 17:40:47.218239 containerd[1460]: time="2025-09-12T17:40:47.218170453Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 12 17:40:47.218718 containerd[1460]: time="2025-09-12T17:40:47.218478080Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 12 17:40:47.218718 containerd[1460]: time="2025-09-12T17:40:47.218572217Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 12 17:40:47.218718 containerd[1460]: time="2025-09-12T17:40:47.218591332Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 12 17:40:47.218718 containerd[1460]: time="2025-09-12T17:40:47.218612001Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 12 17:40:47.219155 containerd[1460]: time="2025-09-12T17:40:47.219085219Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 12 17:40:47.220720 containerd[1460]: time="2025-09-12T17:40:47.219528029Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 12 17:40:47.220720 containerd[1460]: time="2025-09-12T17:40:47.219670847Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 12 17:40:47.220720 containerd[1460]: time="2025-09-12T17:40:47.219688941Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 12 17:40:47.220720 containerd[1460]: time="2025-09-12T17:40:47.219720080Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 12 17:40:47.220720 containerd[1460]: time="2025-09-12T17:40:47.219736541Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 12 17:40:47.220720 containerd[1460]: time="2025-09-12T17:40:47.219750938Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 12 17:40:47.220720 containerd[1460]: time="2025-09-12T17:40:47.219765595Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 12 17:40:47.220720 containerd[1460]: time="2025-09-12T17:40:47.219781315Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 12 17:40:47.220720 containerd[1460]: time="2025-09-12T17:40:47.219796924Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 12 17:40:47.220720 containerd[1460]: time="2025-09-12T17:40:47.219811852Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 12 17:40:47.220720 containerd[1460]: time="2025-09-12T17:40:47.219826159Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 12 17:40:47.220720 containerd[1460]: time="2025-09-12T17:40:47.219840005Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 12 17:40:47.220720 containerd[1460]: time="2025-09-12T17:40:47.219873027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 12 17:40:47.220720 containerd[1460]: time="2025-09-12T17:40:47.219891511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 12 17:40:47.221057 containerd[1460]: time="2025-09-12T17:40:47.219907301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 12 17:40:47.221057 containerd[1460]: time="2025-09-12T17:40:47.219924233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 12 17:40:47.221057 containerd[1460]: time="2025-09-12T17:40:47.219939902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 12 17:40:47.221057 containerd[1460]: time="2025-09-12T17:40:47.219957094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 12 17:40:47.221057 containerd[1460]: time="2025-09-12T17:40:47.219972062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 12 17:40:47.221057 containerd[1460]: time="2025-09-12T17:40:47.219993703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 12 17:40:47.221057 containerd[1460]: time="2025-09-12T17:40:47.220008871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 12 17:40:47.221057 containerd[1460]: time="2025-09-12T17:40:47.220024741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 12 17:40:47.221057 containerd[1460]: time="2025-09-12T17:40:47.220039178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 12 17:40:47.221057 containerd[1460]: time="2025-09-12T17:40:47.220055799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 12 17:40:47.221057 containerd[1460]: time="2025-09-12T17:40:47.220071860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 12 17:40:47.221057 containerd[1460]: time="2025-09-12T17:40:47.220090785Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 12 17:40:47.221057 containerd[1460]: time="2025-09-12T17:40:47.220113357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 12 17:40:47.221057 containerd[1460]: time="2025-09-12T17:40:47.220126883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 12 17:40:47.221057 containerd[1460]: time="2025-09-12T17:40:47.220140749Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 12 17:40:47.221444 containerd[1460]: time="2025-09-12T17:40:47.220204799Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 12 17:40:47.221444 containerd[1460]: time="2025-09-12T17:40:47.220229064Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 12 17:40:47.221444 containerd[1460]: time="2025-09-12T17:40:47.220241508Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 12 17:40:47.221444 containerd[1460]: time="2025-09-12T17:40:47.220256185Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 12 17:40:47.221444 containerd[1460]: time="2025-09-12T17:40:47.220267867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..."
type=io.containerd.grpc.v1 Sep 12 17:40:47.221444 containerd[1460]: time="2025-09-12T17:40:47.220291562Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:40:47.221444 containerd[1460]: time="2025-09-12T17:40:47.220314885Z" level=info msg="NRI interface is disabled by configuration." Sep 12 17:40:47.221444 containerd[1460]: time="2025-09-12T17:40:47.220326567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 17:40:47.221687 containerd[1460]: time="2025-09-12T17:40:47.220673097Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:40:47.221919 containerd[1460]: time="2025-09-12T17:40:47.221902443Z" level=info msg="Connect containerd service" Sep 12 17:40:47.222069 containerd[1460]: time="2025-09-12T17:40:47.222053877Z" level=info msg="using legacy CRI server" Sep 12 17:40:47.222143 containerd[1460]: time="2025-09-12T17:40:47.222113229Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:40:47.222422 containerd[1460]: time="2025-09-12T17:40:47.222402070Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:40:47.223657 containerd[1460]: time="2025-09-12T17:40:47.223632819Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Sep 12 17:40:47.224245 containerd[1460]: time="2025-09-12T17:40:47.224225831Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:40:47.224537 containerd[1460]: time="2025-09-12T17:40:47.224363079Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:40:47.224719 containerd[1460]: time="2025-09-12T17:40:47.224649426Z" level=info msg="Start subscribing containerd event" Sep 12 17:40:47.224818 containerd[1460]: time="2025-09-12T17:40:47.224803615Z" level=info msg="Start recovering state" Sep 12 17:40:47.224970 containerd[1460]: time="2025-09-12T17:40:47.224955720Z" level=info msg="Start event monitor" Sep 12 17:40:47.225047 containerd[1460]: time="2025-09-12T17:40:47.225033807Z" level=info msg="Start snapshots syncer" Sep 12 17:40:47.225138 containerd[1460]: time="2025-09-12T17:40:47.225122774Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:40:47.225211 containerd[1460]: time="2025-09-12T17:40:47.225198375Z" level=info msg="Start streaming server" Sep 12 17:40:47.225472 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:40:47.226286 containerd[1460]: time="2025-09-12T17:40:47.226267761Z" level=info msg="containerd successfully booted in 0.110029s" Sep 12 17:40:47.232167 systemd-networkd[1392]: eth0: Gained IPv6LL Sep 12 17:40:47.240159 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:40:47.243315 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:40:47.272843 systemd[1525]: Queued start job for default target default.target. Sep 12 17:40:47.313406 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 17:40:47.318416 systemd[1525]: Created slice app.slice - User Application Slice. Sep 12 17:40:47.318448 systemd[1525]: Reached target paths.target - Paths. Sep 12 17:40:47.318464 systemd[1525]: Reached target timers.target - Timers. 
Sep 12 17:40:47.323860 systemd[1525]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:40:47.330620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:40:47.343945 systemd[1525]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:40:47.344292 systemd[1525]: Reached target sockets.target - Sockets. Sep 12 17:40:47.344403 systemd[1525]: Reached target basic.target - Basic System. Sep 12 17:40:47.344571 systemd[1525]: Reached target default.target - Main User Target. Sep 12 17:40:47.344747 systemd[1525]: Startup finished in 232ms. Sep 12 17:40:47.345849 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:40:47.348679 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:40:47.415798 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:40:47.447658 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:40:47.450531 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 17:40:47.450816 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 17:40:47.454151 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:40:47.516440 systemd[1]: Started sshd@1-10.0.0.128:22-10.0.0.1:45940.service - OpenSSH per-connection server daemon (10.0.0.1:45940). Sep 12 17:40:47.654740 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 45940 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:40:47.657819 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:47.660420 tar[1455]: linux-amd64/README.md Sep 12 17:40:47.698855 systemd-logind[1444]: New session 2 of user core. Sep 12 17:40:47.701655 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:40:47.708538 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Sep 12 17:40:47.775611 sshd[1557]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:47.785453 systemd[1]: sshd@1-10.0.0.128:22-10.0.0.1:45940.service: Deactivated successfully. Sep 12 17:40:47.788798 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:40:47.790633 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:40:47.808284 systemd[1]: Started sshd@2-10.0.0.128:22-10.0.0.1:45950.service - OpenSSH per-connection server daemon (10.0.0.1:45950). Sep 12 17:40:47.813753 systemd-logind[1444]: Removed session 2. Sep 12 17:40:47.858437 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 45950 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:40:47.860854 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:47.871127 systemd-logind[1444]: New session 3 of user core. Sep 12 17:40:47.883691 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:40:47.951942 sshd[1567]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:47.957336 systemd[1]: sshd@2-10.0.0.128:22-10.0.0.1:45950.service: Deactivated successfully. Sep 12 17:40:47.960606 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:40:47.963913 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:40:47.965512 systemd-logind[1444]: Removed session 3. Sep 12 17:40:49.584872 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:40:49.587655 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:40:49.589717 systemd[1]: Startup finished in 1.672s (kernel) + 8.036s (initrd) + 8.311s (userspace) = 18.019s. 
Sep 12 17:40:49.596025 (kubelet)[1579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:40:50.629518 kubelet[1579]: E0912 17:40:50.629413 1579 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:40:50.635442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:40:50.635749 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:40:50.636257 systemd[1]: kubelet.service: Consumed 2.706s CPU time. Sep 12 17:40:57.963664 systemd[1]: Started sshd@3-10.0.0.128:22-10.0.0.1:45224.service - OpenSSH per-connection server daemon (10.0.0.1:45224). Sep 12 17:40:58.001025 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 45224 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:40:58.003160 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:58.008683 systemd-logind[1444]: New session 4 of user core. Sep 12 17:40:58.015888 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:40:58.074874 sshd[1593]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:58.087658 systemd[1]: sshd@3-10.0.0.128:22-10.0.0.1:45224.service: Deactivated successfully. Sep 12 17:40:58.090133 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:40:58.092107 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:40:58.101088 systemd[1]: Started sshd@4-10.0.0.128:22-10.0.0.1:45240.service - OpenSSH per-connection server daemon (10.0.0.1:45240). Sep 12 17:40:58.102292 systemd-logind[1444]: Removed session 4. 
Sep 12 17:40:58.135106 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 45240 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:40:58.137013 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:58.144524 systemd-logind[1444]: New session 5 of user core. Sep 12 17:40:58.158074 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:40:58.211858 sshd[1600]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:58.221596 systemd[1]: sshd@4-10.0.0.128:22-10.0.0.1:45240.service: Deactivated successfully. Sep 12 17:40:58.224286 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:40:58.226478 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:40:58.235504 systemd[1]: Started sshd@5-10.0.0.128:22-10.0.0.1:45244.service - OpenSSH per-connection server daemon (10.0.0.1:45244). Sep 12 17:40:58.237021 systemd-logind[1444]: Removed session 5. Sep 12 17:40:58.272104 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 45244 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:40:58.275318 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:58.281890 systemd-logind[1444]: New session 6 of user core. Sep 12 17:40:58.291013 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:40:58.353232 sshd[1607]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:58.366254 systemd[1]: sshd@5-10.0.0.128:22-10.0.0.1:45244.service: Deactivated successfully. Sep 12 17:40:58.368849 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:40:58.370739 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:40:58.382348 systemd[1]: Started sshd@6-10.0.0.128:22-10.0.0.1:45252.service - OpenSSH per-connection server daemon (10.0.0.1:45252). Sep 12 17:40:58.383794 systemd-logind[1444]: Removed session 6. 
Sep 12 17:40:58.414415 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 45252 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:40:58.416554 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:58.424577 systemd-logind[1444]: New session 7 of user core. Sep 12 17:40:58.433865 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:40:58.498034 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:40:58.498424 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:40:58.520153 sudo[1617]: pam_unix(sudo:session): session closed for user root Sep 12 17:40:58.523655 sshd[1614]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:58.540399 systemd[1]: sshd@6-10.0.0.128:22-10.0.0.1:45252.service: Deactivated successfully. Sep 12 17:40:58.542749 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:40:58.544456 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:40:58.556230 systemd[1]: Started sshd@7-10.0.0.128:22-10.0.0.1:45256.service - OpenSSH per-connection server daemon (10.0.0.1:45256). Sep 12 17:40:58.557419 systemd-logind[1444]: Removed session 7. Sep 12 17:40:58.587610 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 45256 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:40:58.589976 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:58.596499 systemd-logind[1444]: New session 8 of user core. Sep 12 17:40:58.611863 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 12 17:40:58.670641 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:40:58.671101 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:40:58.676250 sudo[1626]: pam_unix(sudo:session): session closed for user root Sep 12 17:40:58.685844 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:40:58.686348 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:40:58.707114 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 17:40:58.709409 auditctl[1629]: No rules Sep 12 17:40:58.710967 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:40:58.711322 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:40:58.714407 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:40:58.753274 augenrules[1647]: No rules Sep 12 17:40:58.754263 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:40:58.756002 sudo[1625]: pam_unix(sudo:session): session closed for user root Sep 12 17:40:58.758252 sshd[1622]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:58.766732 systemd[1]: sshd@7-10.0.0.128:22-10.0.0.1:45256.service: Deactivated successfully. Sep 12 17:40:58.769074 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:40:58.770800 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:40:58.780310 systemd[1]: Started sshd@8-10.0.0.128:22-10.0.0.1:45270.service - OpenSSH per-connection server daemon (10.0.0.1:45270). Sep 12 17:40:58.781583 systemd-logind[1444]: Removed session 8. 
Sep 12 17:40:58.814676 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 45270 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:40:58.816810 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:58.827045 systemd-logind[1444]: New session 9 of user core. Sep 12 17:40:58.839383 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:40:58.901468 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:40:58.901988 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:40:59.598012 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:40:59.598195 (dockerd)[1677]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:41:00.463954 dockerd[1677]: time="2025-09-12T17:41:00.463851184Z" level=info msg="Starting up" Sep 12 17:41:00.825001 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:41:00.906445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:41:00.952563 dockerd[1677]: time="2025-09-12T17:41:00.952475594Z" level=info msg="Loading containers: start." Sep 12 17:41:01.183430 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 17:41:01.188581 (kubelet)[1742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:41:01.904661 kubelet[1742]: E0912 17:41:01.904520 1742 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:41:01.913265 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:41:01.913524 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:41:02.617767 kernel: Initializing XFRM netlink socket Sep 12 17:41:02.732917 systemd-networkd[1392]: docker0: Link UP Sep 12 17:41:03.114971 dockerd[1677]: time="2025-09-12T17:41:03.114685181Z" level=info msg="Loading containers: done." Sep 12 17:41:03.137238 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck992485700-merged.mount: Deactivated successfully. Sep 12 17:41:03.251343 dockerd[1677]: time="2025-09-12T17:41:03.251262822Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:41:03.251525 dockerd[1677]: time="2025-09-12T17:41:03.251428723Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:41:03.251661 dockerd[1677]: time="2025-09-12T17:41:03.251625041Z" level=info msg="Daemon has completed initialization" Sep 12 17:41:03.515954 dockerd[1677]: time="2025-09-12T17:41:03.515668661Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:41:03.516027 systemd[1]: Started docker.service - Docker Application Container Engine. 
Sep 12 17:41:04.701711 containerd[1460]: time="2025-09-12T17:41:04.701618434Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 12 17:41:05.779491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1881752680.mount: Deactivated successfully. Sep 12 17:41:12.164030 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:41:12.420126 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:41:12.849068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:41:12.849894 (kubelet)[1907]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:41:13.205397 containerd[1460]: time="2025-09-12T17:41:13.203486719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:13.206717 containerd[1460]: time="2025-09-12T17:41:13.206434558Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Sep 12 17:41:13.210274 containerd[1460]: time="2025-09-12T17:41:13.210160076Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:13.216644 containerd[1460]: time="2025-09-12T17:41:13.216554590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:13.219608 containerd[1460]: time="2025-09-12T17:41:13.219106046Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo 
digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 8.517423882s" Sep 12 17:41:13.219608 containerd[1460]: time="2025-09-12T17:41:13.219175136Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 12 17:41:13.220236 containerd[1460]: time="2025-09-12T17:41:13.220148502Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 12 17:41:13.419166 kubelet[1907]: E0912 17:41:13.419061 1907 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:41:13.425586 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:41:13.426035 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 12 17:41:17.103544 containerd[1460]: time="2025-09-12T17:41:17.103457519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:17.236656 containerd[1460]: time="2025-09-12T17:41:17.236552407Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Sep 12 17:41:17.335956 containerd[1460]: time="2025-09-12T17:41:17.335864669Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:17.381607 containerd[1460]: time="2025-09-12T17:41:17.381419817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:17.382936 containerd[1460]: time="2025-09-12T17:41:17.382851102Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 4.162648969s" Sep 12 17:41:17.382936 containerd[1460]: time="2025-09-12T17:41:17.382886358Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 12 17:41:17.383486 containerd[1460]: time="2025-09-12T17:41:17.383452390Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 12 17:41:20.039066 containerd[1460]: time="2025-09-12T17:41:20.038965747Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:20.039918 containerd[1460]: time="2025-09-12T17:41:20.039829852Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Sep 12 17:41:20.041868 containerd[1460]: time="2025-09-12T17:41:20.041833553Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:20.046668 containerd[1460]: time="2025-09-12T17:41:20.046583225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:20.047869 containerd[1460]: time="2025-09-12T17:41:20.047813853Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 2.664324964s" Sep 12 17:41:20.047869 containerd[1460]: time="2025-09-12T17:41:20.047866023Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 12 17:41:20.048499 containerd[1460]: time="2025-09-12T17:41:20.048444702Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 12 17:41:21.664385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1243240118.mount: Deactivated successfully. 
Sep 12 17:41:23.209660 containerd[1460]: time="2025-09-12T17:41:23.209554837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:23.217148 containerd[1460]: time="2025-09-12T17:41:23.217034077Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Sep 12 17:41:23.219224 containerd[1460]: time="2025-09-12T17:41:23.219141270Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:23.223232 containerd[1460]: time="2025-09-12T17:41:23.223138842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:23.224074 containerd[1460]: time="2025-09-12T17:41:23.224016078Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 3.175531558s" Sep 12 17:41:23.224074 containerd[1460]: time="2025-09-12T17:41:23.224051886Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 12 17:41:23.224643 containerd[1460]: time="2025-09-12T17:41:23.224601926Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:41:23.676231 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 12 17:41:23.769479 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 12 17:41:24.005984 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:41:24.012724 (kubelet)[1940]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:41:24.118170 kubelet[1940]: E0912 17:41:24.118003 1940 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:41:24.124335 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:41:24.124693 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:41:24.487247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1701546294.mount: Deactivated successfully.
Sep 12 17:41:25.470836 containerd[1460]: time="2025-09-12T17:41:25.470756723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:25.471572 containerd[1460]: time="2025-09-12T17:41:25.471498896Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 12 17:41:25.472888 containerd[1460]: time="2025-09-12T17:41:25.472848187Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:25.476447 containerd[1460]: time="2025-09-12T17:41:25.476372111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:25.477784 containerd[1460]: time="2025-09-12T17:41:25.477733083Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.253045384s"
Sep 12 17:41:25.477784 containerd[1460]: time="2025-09-12T17:41:25.477778541Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 12 17:41:25.478340 containerd[1460]: time="2025-09-12T17:41:25.478309732Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 17:41:26.247821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount673776332.mount: Deactivated successfully.
Sep 12 17:41:26.255484 containerd[1460]: time="2025-09-12T17:41:26.255402393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:26.256445 containerd[1460]: time="2025-09-12T17:41:26.256366258Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 12 17:41:26.257826 containerd[1460]: time="2025-09-12T17:41:26.257781552Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:26.260759 containerd[1460]: time="2025-09-12T17:41:26.260721878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:26.261602 containerd[1460]: time="2025-09-12T17:41:26.261552820Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 783.212159ms"
Sep 12 17:41:26.261670 containerd[1460]: time="2025-09-12T17:41:26.261607695Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 12 17:41:26.262259 containerd[1460]: time="2025-09-12T17:41:26.262220812Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 12 17:41:26.903211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount197927744.mount: Deactivated successfully.
Sep 12 17:41:30.670143 containerd[1460]: time="2025-09-12T17:41:30.670059039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:30.670989 containerd[1460]: time="2025-09-12T17:41:30.670836895Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Sep 12 17:41:30.672305 containerd[1460]: time="2025-09-12T17:41:30.672264473Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:30.676201 containerd[1460]: time="2025-09-12T17:41:30.676158179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:30.678251 containerd[1460]: time="2025-09-12T17:41:30.678182149Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.41592613s"
Sep 12 17:41:30.678310 containerd[1460]: time="2025-09-12T17:41:30.678252112Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Sep 12 17:41:31.601255 update_engine[1445]: I20250912 17:41:31.601152 1445 update_attempter.cc:509] Updating boot flags...
Sep 12 17:41:31.728749 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2088)
Sep 12 17:41:31.838720 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2088)
Sep 12 17:41:32.952318 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:41:32.965130 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:41:32.995614 systemd[1]: Reloading requested from client PID 2103 ('systemctl') (unit session-9.scope)...
Sep 12 17:41:32.995633 systemd[1]: Reloading...
Sep 12 17:41:33.094876 zram_generator::config[2146]: No configuration found.
Sep 12 17:41:33.365289 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:41:33.464739 systemd[1]: Reloading finished in 468 ms.
Sep 12 17:41:33.526031 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 12 17:41:33.526165 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 12 17:41:33.526547 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:41:33.530287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:41:33.748039 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:41:33.754852 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:41:33.825843 kubelet[2191]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:41:33.825843 kubelet[2191]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:41:33.825843 kubelet[2191]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:41:33.826422 kubelet[2191]: I0912 17:41:33.825898 2191 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:41:34.138750 kubelet[2191]: I0912 17:41:34.138558 2191 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 12 17:41:34.138750 kubelet[2191]: I0912 17:41:34.138600 2191 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:41:34.138930 kubelet[2191]: I0912 17:41:34.138908 2191 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 12 17:41:34.169314 kubelet[2191]: E0912 17:41:34.169184 2191 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.128:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:41:34.171994 kubelet[2191]: I0912 17:41:34.171944 2191 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:41:34.183273 kubelet[2191]: E0912 17:41:34.183216 2191 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 17:41:34.183273 kubelet[2191]: I0912 17:41:34.183261 2191 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 17:41:34.190506 kubelet[2191]: I0912 17:41:34.190453 2191 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:41:34.191761 kubelet[2191]: I0912 17:41:34.191665 2191 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:41:34.191998 kubelet[2191]: I0912 17:41:34.191742 2191 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 17:41:34.192125 kubelet[2191]: I0912 17:41:34.192006 2191 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:41:34.192125 kubelet[2191]: I0912 17:41:34.192017 2191 container_manager_linux.go:304] "Creating device plugin manager"
Sep 12 17:41:34.192231 kubelet[2191]: I0912 17:41:34.192210 2191 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:41:34.196789 kubelet[2191]: I0912 17:41:34.196733 2191 kubelet.go:446] "Attempting to sync node with API server"
Sep 12 17:41:34.196846 kubelet[2191]: I0912 17:41:34.196810 2191 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:41:34.196881 kubelet[2191]: I0912 17:41:34.196853 2191 kubelet.go:352] "Adding apiserver pod source"
Sep 12 17:41:34.197532 kubelet[2191]: I0912 17:41:34.197061 2191 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:41:34.200478 kubelet[2191]: I0912 17:41:34.200443 2191 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 12 17:41:34.200991 kubelet[2191]: I0912 17:41:34.200942 2191 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 17:41:34.204156 kubelet[2191]: W0912 17:41:34.202139 2191 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 17:41:34.204156 kubelet[2191]: W0912 17:41:34.202404 2191 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
Sep 12 17:41:34.204156 kubelet[2191]: E0912 17:41:34.202464 2191 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:41:34.204156 kubelet[2191]: W0912 17:41:34.203990 2191 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
Sep 12 17:41:34.204156 kubelet[2191]: E0912 17:41:34.204057 2191 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:41:34.205006 kubelet[2191]: I0912 17:41:34.204959 2191 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 17:41:34.205069 kubelet[2191]: I0912 17:41:34.205014 2191 server.go:1287] "Started kubelet"
Sep 12 17:41:34.205746 kubelet[2191]: I0912 17:41:34.205174 2191 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:41:34.207350 kubelet[2191]: I0912 17:41:34.207319 2191 server.go:479] "Adding debug handlers to kubelet server"
Sep 12 17:41:34.208090 kubelet[2191]: I0912 17:41:34.208006 2191 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:41:34.208874 kubelet[2191]: I0912 17:41:34.208394 2191 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:41:34.208927 kubelet[2191]: I0912 17:41:34.208875 2191 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:41:34.210482 kubelet[2191]: I0912 17:41:34.209945 2191 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 17:41:34.210482 kubelet[2191]: E0912 17:41:34.210362 2191 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 17:41:34.210482 kubelet[2191]: E0912 17:41:34.210427 2191 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 17:41:34.210482 kubelet[2191]: I0912 17:41:34.210454 2191 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 17:41:34.210635 kubelet[2191]: I0912 17:41:34.210614 2191 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 17:41:34.210739 kubelet[2191]: I0912 17:41:34.210718 2191 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:41:34.211805 kubelet[2191]: W0912 17:41:34.211755 2191 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
Sep 12 17:41:34.211852 kubelet[2191]: E0912 17:41:34.211809 2191 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:41:34.212291 kubelet[2191]: I0912 17:41:34.212097 2191 factory.go:221] Registration of the systemd container factory successfully
Sep 12 17:41:34.212291 kubelet[2191]: I0912 17:41:34.212277 2191 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 17:41:34.213222 kubelet[2191]: E0912 17:41:34.213171 2191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="200ms"
Sep 12 17:41:34.214801 kubelet[2191]: E0912 17:41:34.212499 2191 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.128:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.128:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186499d4e32f55bd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:41:34.204982717 +0000 UTC m=+0.444702466,LastTimestamp:2025-09-12 17:41:34.204982717 +0000 UTC m=+0.444702466,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 12 17:41:34.218117 kubelet[2191]: I0912 17:41:34.218082 2191 factory.go:221] Registration of the containerd container factory successfully
Sep 12 17:41:34.232785 kubelet[2191]: I0912 17:41:34.232723 2191 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 12 17:41:34.234157 kubelet[2191]: I0912 17:41:34.234122 2191 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 12 17:41:34.234157 kubelet[2191]: I0912 17:41:34.234154 2191 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 12 17:41:34.234258 kubelet[2191]: I0912 17:41:34.234179 2191 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 17:41:34.234258 kubelet[2191]: I0912 17:41:34.234187 2191 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 12 17:41:34.234258 kubelet[2191]: E0912 17:41:34.234242 2191 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 17:41:34.234993 kubelet[2191]: W0912 17:41:34.234874 2191 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
Sep 12 17:41:34.234993 kubelet[2191]: E0912 17:41:34.234947 2191 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:41:34.239607 kubelet[2191]: I0912 17:41:34.239575 2191 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 17:41:34.239788 kubelet[2191]: I0912 17:41:34.239756 2191 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 17:41:34.239788 kubelet[2191]: I0912 17:41:34.239794 2191 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:41:34.310891 kubelet[2191]: E0912 17:41:34.310844 2191 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 17:41:34.334549 kubelet[2191]: E0912 17:41:34.334440 2191 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 17:41:34.411864 kubelet[2191]: E0912 17:41:34.411790 2191 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 17:41:34.414747 kubelet[2191]: E0912 17:41:34.414681 2191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="400ms"
Sep 12 17:41:34.512002 kubelet[2191]: E0912 17:41:34.511939 2191 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 17:41:34.535365 kubelet[2191]: E0912 17:41:34.535295 2191 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 17:41:34.612875 kubelet[2191]: E0912 17:41:34.612807 2191 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 17:41:34.713300 kubelet[2191]: E0912 17:41:34.713096 2191 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 17:41:34.814256 kubelet[2191]: E0912 17:41:34.814104 2191 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 17:41:34.816497 kubelet[2191]: E0912 17:41:34.816429 2191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="800ms"
Sep 12 17:41:34.869691 kubelet[2191]: I0912 17:41:34.869577 2191 policy_none.go:49] "None policy: Start"
Sep 12 17:41:34.869691 kubelet[2191]: I0912 17:41:34.869669 2191 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 17:41:34.869691 kubelet[2191]: I0912 17:41:34.869725 2191 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 17:41:34.879301 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 12 17:41:34.901555 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 12 17:41:34.905557 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 12 17:41:34.915184 kubelet[2191]: E0912 17:41:34.915129 2191 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 17:41:34.917748 kubelet[2191]: I0912 17:41:34.917686 2191 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 17:41:34.917990 kubelet[2191]: I0912 17:41:34.917976 2191 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 17:41:34.918157 kubelet[2191]: I0912 17:41:34.917994 2191 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 17:41:34.918418 kubelet[2191]: I0912 17:41:34.918384 2191 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 17:41:34.919012 kubelet[2191]: E0912 17:41:34.918980 2191 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 17:41:34.919104 kubelet[2191]: E0912 17:41:34.919037 2191 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 12 17:41:34.946474 systemd[1]: Created slice kubepods-burstable-pod93f77d9ce2260f277f61f2c66adb751f.slice - libcontainer container kubepods-burstable-pod93f77d9ce2260f277f61f2c66adb751f.slice.
Sep 12 17:41:34.967379 kubelet[2191]: E0912 17:41:34.967221 2191 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 17:41:34.970856 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice.
Sep 12 17:41:34.988985 kubelet[2191]: E0912 17:41:34.988923 2191 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 17:41:34.992596 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice.
Sep 12 17:41:34.994895 kubelet[2191]: E0912 17:41:34.994871 2191 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 17:41:35.015615 kubelet[2191]: I0912 17:41:35.015549 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/93f77d9ce2260f277f61f2c66adb751f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"93f77d9ce2260f277f61f2c66adb751f\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 17:41:35.015764 kubelet[2191]: I0912 17:41:35.015625 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:41:35.015764 kubelet[2191]: I0912 17:41:35.015674 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:41:35.015852 kubelet[2191]: I0912 17:41:35.015781 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost"
Sep 12 17:41:35.015882 kubelet[2191]: I0912 17:41:35.015868 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/93f77d9ce2260f277f61f2c66adb751f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"93f77d9ce2260f277f61f2c66adb751f\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 17:41:35.015922 kubelet[2191]: I0912 17:41:35.015896 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/93f77d9ce2260f277f61f2c66adb751f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"93f77d9ce2260f277f61f2c66adb751f\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 17:41:35.015951 kubelet[2191]: I0912 17:41:35.015920 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:41:35.015993 kubelet[2191]: I0912 17:41:35.015949 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:41:35.015993 kubelet[2191]: I0912 17:41:35.015968 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:41:35.020240 kubelet[2191]: I0912 17:41:35.020210 2191 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 17:41:35.020806 kubelet[2191]: E0912 17:41:35.020744 2191 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost"
Sep 12 17:41:35.202494 kubelet[2191]: W0912 17:41:35.202335 2191 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
Sep 12 17:41:35.202494 kubelet[2191]: E0912 17:41:35.202422 2191 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:41:35.222919 kubelet[2191]: I0912 17:41:35.222732 2191 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 17:41:35.223305 kubelet[2191]: E0912 17:41:35.223268 2191 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost"
Sep 12 17:41:35.268261 kubelet[2191]: E0912 17:41:35.268195 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:41:35.269222 containerd[1460]: time="2025-09-12T17:41:35.269158142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:93f77d9ce2260f277f61f2c66adb751f,Namespace:kube-system,Attempt:0,}"
Sep 12 17:41:35.290675 kubelet[2191]: E0912 17:41:35.290566 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:41:35.291714 containerd[1460]: time="2025-09-12T17:41:35.291639047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}"
Sep 12 17:41:35.295977 kubelet[2191]: E0912 17:41:35.295922 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:41:35.296609 containerd[1460]: time="2025-09-12T17:41:35.296527201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}"
Sep 12 17:41:35.465095 kubelet[2191]: W0912 17:41:35.464987 2191 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
Sep 12 17:41:35.465095 kubelet[2191]: E0912 17:41:35.465099 2191 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:41:35.618169 kubelet[2191]: E0912 17:41:35.617980 2191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="1.6s"
Sep 12 17:41:35.637146 kubelet[2191]: I0912 17:41:35.637087 2191 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 17:41:35.637738 kubelet[2191]: E0912 17:41:35.637667 2191 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost"
Sep 12 17:41:35.675937 kubelet[2191]: W0912 17:41:35.675825 2191 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
Sep 12 17:41:35.676056 kubelet[2191]: E0912 17:41:35.675963 2191 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:41:35.737727 kubelet[2191]: W0912 17:41:35.737632 2191 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
Sep 12 17:41:35.737878 kubelet[2191]: E0912 17:41:35.737755 2191 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:41:36.334622 kubelet[2191]: E0912 17:41:36.334535 2191 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.128:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:41:36.440290 kubelet[2191]: I0912 17:41:36.440247 2191 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 17:41:36.440875 kubelet[2191]: E0912 17:41:36.440806 2191 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost"
Sep 12 17:41:36.681922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount188711445.mount: Deactivated successfully.
Sep 12 17:41:36.695283 containerd[1460]: time="2025-09-12T17:41:36.695204533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:41:36.696177 containerd[1460]: time="2025-09-12T17:41:36.696099665Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 12 17:41:36.697306 containerd[1460]: time="2025-09-12T17:41:36.697244569Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:41:36.698303 containerd[1460]: time="2025-09-12T17:41:36.698243106Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:41:36.699199 containerd[1460]: time="2025-09-12T17:41:36.699118160Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:41:36.701369 containerd[1460]: time="2025-09-12T17:41:36.701260019Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:41:36.701369 containerd[1460]: time="2025-09-12T17:41:36.701284264Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:41:36.704311 containerd[1460]: time="2025-09-12T17:41:36.704229281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:41:36.708278 
containerd[1460]: time="2025-09-12T17:41:36.708195539Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.416419231s" Sep 12 17:41:36.710547 containerd[1460]: time="2025-09-12T17:41:36.710474907Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.413804574s" Sep 12 17:41:36.718397 containerd[1460]: time="2025-09-12T17:41:36.718323872Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.449053389s" Sep 12 17:41:36.935760 containerd[1460]: time="2025-09-12T17:41:36.934764868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:41:36.935760 containerd[1460]: time="2025-09-12T17:41:36.935374270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:41:36.935760 containerd[1460]: time="2025-09-12T17:41:36.935413935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:36.935760 containerd[1460]: time="2025-09-12T17:41:36.935633661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:36.936849 containerd[1460]: time="2025-09-12T17:41:36.936539973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:41:36.936849 containerd[1460]: time="2025-09-12T17:41:36.936586552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:41:36.936849 containerd[1460]: time="2025-09-12T17:41:36.936599226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:36.936849 containerd[1460]: time="2025-09-12T17:41:36.936739110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:36.941212 containerd[1460]: time="2025-09-12T17:41:36.940945311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:41:36.941212 containerd[1460]: time="2025-09-12T17:41:36.940999633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:41:36.941212 containerd[1460]: time="2025-09-12T17:41:36.941016655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:36.941212 containerd[1460]: time="2025-09-12T17:41:36.941097548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:37.076006 systemd[1]: Started cri-containerd-876635b88662983803a53ce4b2868e993375267626d638f49a47d4362e3b41c8.scope - libcontainer container 876635b88662983803a53ce4b2868e993375267626d638f49a47d4362e3b41c8. 
Sep 12 17:41:37.081009 systemd[1]: Started cri-containerd-fcefd651753ac3cc393f7936d85f8994fb736ec2a92160a1193b68cdf83932e7.scope - libcontainer container fcefd651753ac3cc393f7936d85f8994fb736ec2a92160a1193b68cdf83932e7. Sep 12 17:41:37.091131 systemd[1]: Started cri-containerd-d623c1015494307df0805eaff65817b7e7998462c462e57c0d0d7735dede7f95.scope - libcontainer container d623c1015494307df0805eaff65817b7e7998462c462e57c0d0d7735dede7f95. Sep 12 17:41:37.156242 containerd[1460]: time="2025-09-12T17:41:37.156200215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d623c1015494307df0805eaff65817b7e7998462c462e57c0d0d7735dede7f95\"" Sep 12 17:41:37.159372 containerd[1460]: time="2025-09-12T17:41:37.159301063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"876635b88662983803a53ce4b2868e993375267626d638f49a47d4362e3b41c8\"" Sep 12 17:41:37.161240 kubelet[2191]: E0912 17:41:37.161205 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:37.161860 kubelet[2191]: E0912 17:41:37.161728 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:37.164516 containerd[1460]: time="2025-09-12T17:41:37.164494114Z" level=info msg="CreateContainer within sandbox \"876635b88662983803a53ce4b2868e993375267626d638f49a47d4362e3b41c8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:41:37.165071 containerd[1460]: time="2025-09-12T17:41:37.165038993Z" level=info msg="CreateContainer within sandbox 
\"d623c1015494307df0805eaff65817b7e7998462c462e57c0d0d7735dede7f95\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:41:37.165537 containerd[1460]: time="2025-09-12T17:41:37.165496708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:93f77d9ce2260f277f61f2c66adb751f,Namespace:kube-system,Attempt:0,} returns sandbox id \"fcefd651753ac3cc393f7936d85f8994fb736ec2a92160a1193b68cdf83932e7\"" Sep 12 17:41:37.166282 kubelet[2191]: E0912 17:41:37.166246 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:37.168300 containerd[1460]: time="2025-09-12T17:41:37.168256182Z" level=info msg="CreateContainer within sandbox \"fcefd651753ac3cc393f7936d85f8994fb736ec2a92160a1193b68cdf83932e7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:41:37.195522 containerd[1460]: time="2025-09-12T17:41:37.195341894Z" level=info msg="CreateContainer within sandbox \"d623c1015494307df0805eaff65817b7e7998462c462e57c0d0d7735dede7f95\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"81ab08ef092aad9ae11060f347430479085fe430d81edc03f4c9817ffb052f45\"" Sep 12 17:41:37.196421 containerd[1460]: time="2025-09-12T17:41:37.196380847Z" level=info msg="StartContainer for \"81ab08ef092aad9ae11060f347430479085fe430d81edc03f4c9817ffb052f45\"" Sep 12 17:41:37.197045 containerd[1460]: time="2025-09-12T17:41:37.197003253Z" level=info msg="CreateContainer within sandbox \"876635b88662983803a53ce4b2868e993375267626d638f49a47d4362e3b41c8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4a1261ba5cd25fd1978fb949d6e5e61d4d9bd7860a9e1f74d7164379919da652\"" Sep 12 17:41:37.197827 containerd[1460]: time="2025-09-12T17:41:37.197798045Z" level=info msg="StartContainer for 
\"4a1261ba5cd25fd1978fb949d6e5e61d4d9bd7860a9e1f74d7164379919da652\"" Sep 12 17:41:37.202057 containerd[1460]: time="2025-09-12T17:41:37.201879105Z" level=info msg="CreateContainer within sandbox \"fcefd651753ac3cc393f7936d85f8994fb736ec2a92160a1193b68cdf83932e7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"442462c7e4cd5b022b62e86b7ae8434e68abf891b6115c22b7d52a42f0412165\"" Sep 12 17:41:37.202525 containerd[1460]: time="2025-09-12T17:41:37.202491281Z" level=info msg="StartContainer for \"442462c7e4cd5b022b62e86b7ae8434e68abf891b6115c22b7d52a42f0412165\"" Sep 12 17:41:37.219426 kubelet[2191]: E0912 17:41:37.219369 2191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="3.2s" Sep 12 17:41:37.244039 systemd[1]: Started cri-containerd-442462c7e4cd5b022b62e86b7ae8434e68abf891b6115c22b7d52a42f0412165.scope - libcontainer container 442462c7e4cd5b022b62e86b7ae8434e68abf891b6115c22b7d52a42f0412165. Sep 12 17:41:37.247296 systemd[1]: Started cri-containerd-81ab08ef092aad9ae11060f347430479085fe430d81edc03f4c9817ffb052f45.scope - libcontainer container 81ab08ef092aad9ae11060f347430479085fe430d81edc03f4c9817ffb052f45. Sep 12 17:41:37.254330 systemd[1]: Started cri-containerd-4a1261ba5cd25fd1978fb949d6e5e61d4d9bd7860a9e1f74d7164379919da652.scope - libcontainer container 4a1261ba5cd25fd1978fb949d6e5e61d4d9bd7860a9e1f74d7164379919da652. 
Sep 12 17:41:37.305386 containerd[1460]: time="2025-09-12T17:41:37.305349931Z" level=info msg="StartContainer for \"442462c7e4cd5b022b62e86b7ae8434e68abf891b6115c22b7d52a42f0412165\" returns successfully" Sep 12 17:41:37.316465 containerd[1460]: time="2025-09-12T17:41:37.316392565Z" level=info msg="StartContainer for \"4a1261ba5cd25fd1978fb949d6e5e61d4d9bd7860a9e1f74d7164379919da652\" returns successfully" Sep 12 17:41:37.320626 containerd[1460]: time="2025-09-12T17:41:37.320580557Z" level=info msg="StartContainer for \"81ab08ef092aad9ae11060f347430479085fe430d81edc03f4c9817ffb052f45\" returns successfully" Sep 12 17:41:38.043155 kubelet[2191]: I0912 17:41:38.043109 2191 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:41:38.292207 kubelet[2191]: E0912 17:41:38.292172 2191 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:41:38.295718 kubelet[2191]: E0912 17:41:38.294981 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:38.297720 kubelet[2191]: E0912 17:41:38.297505 2191 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:41:38.297829 kubelet[2191]: E0912 17:41:38.297688 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:38.301006 kubelet[2191]: E0912 17:41:38.300932 2191 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:41:38.301150 kubelet[2191]: E0912 17:41:38.301077 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:38.622728 kubelet[2191]: I0912 17:41:38.622474 2191 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 17:41:38.622728 kubelet[2191]: E0912 17:41:38.622532 2191 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 12 17:41:38.713737 kubelet[2191]: I0912 17:41:38.713670 2191 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:38.724179 kubelet[2191]: E0912 17:41:38.724079 2191 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:38.724179 kubelet[2191]: I0912 17:41:38.724124 2191 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:38.726807 kubelet[2191]: E0912 17:41:38.726777 2191 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:38.726807 kubelet[2191]: I0912 17:41:38.726802 2191 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:41:38.728379 kubelet[2191]: E0912 17:41:38.728351 2191 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 17:41:39.204820 kubelet[2191]: I0912 17:41:39.204759 2191 apiserver.go:52] "Watching apiserver" Sep 12 17:41:39.210937 kubelet[2191]: I0912 17:41:39.210883 2191 desired_state_of_world_populator.go:158] "Finished 
populating initial desired state of world" Sep 12 17:41:39.300650 kubelet[2191]: I0912 17:41:39.300599 2191 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:39.300650 kubelet[2191]: I0912 17:41:39.300662 2191 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:39.300961 kubelet[2191]: I0912 17:41:39.300609 2191 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:41:39.302931 kubelet[2191]: E0912 17:41:39.302899 2191 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:39.303103 kubelet[2191]: E0912 17:41:39.303079 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:39.303180 kubelet[2191]: E0912 17:41:39.303158 2191 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 17:41:39.303272 kubelet[2191]: E0912 17:41:39.303251 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:39.303325 kubelet[2191]: E0912 17:41:39.303303 2191 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:39.303502 kubelet[2191]: E0912 17:41:39.303480 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:40.302337 kubelet[2191]: I0912 17:41:40.302299 2191 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:40.302990 kubelet[2191]: I0912 17:41:40.302852 2191 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:41:40.308297 kubelet[2191]: E0912 17:41:40.308258 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:40.308990 kubelet[2191]: E0912 17:41:40.308955 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:41.304130 kubelet[2191]: E0912 17:41:41.304070 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:41.304574 kubelet[2191]: E0912 17:41:41.304214 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:41.932956 systemd[1]: Reloading requested from client PID 2472 ('systemctl') (unit session-9.scope)... Sep 12 17:41:41.932978 systemd[1]: Reloading... Sep 12 17:41:42.025746 zram_generator::config[2514]: No configuration found. Sep 12 17:41:42.192433 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:41:42.306572 systemd[1]: Reloading finished in 373 ms. 
Sep 12 17:41:42.309246 kubelet[2191]: E0912 17:41:42.309226 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:42.349295 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:41:42.379323 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:41:42.379674 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:41:42.379756 systemd[1]: kubelet.service: Consumed 1.197s CPU time, 134.2M memory peak, 0B memory swap peak. Sep 12 17:41:42.386969 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:41:42.592455 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:41:42.612040 (kubelet)[2556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:41:42.667640 kubelet[2556]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:41:42.667640 kubelet[2556]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:41:42.667640 kubelet[2556]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 17:41:42.667640 kubelet[2556]: I0912 17:41:42.667282 2556 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:41:42.677760 kubelet[2556]: I0912 17:41:42.676618 2556 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 17:41:42.677760 kubelet[2556]: I0912 17:41:42.676649 2556 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:41:42.677760 kubelet[2556]: I0912 17:41:42.676966 2556 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 17:41:42.679102 kubelet[2556]: I0912 17:41:42.679052 2556 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:41:42.682076 kubelet[2556]: I0912 17:41:42.682045 2556 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:41:42.691518 kubelet[2556]: E0912 17:41:42.691479 2556 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:41:42.691518 kubelet[2556]: I0912 17:41:42.691517 2556 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:41:42.703790 kubelet[2556]: I0912 17:41:42.703689 2556 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:41:42.704193 kubelet[2556]: I0912 17:41:42.704132 2556 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:41:42.706715 kubelet[2556]: I0912 17:41:42.704191 2556 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:41:42.706715 kubelet[2556]: I0912 17:41:42.704476 2556 topology_manager.go:138] "Creating topology manager with none policy" 
Sep 12 17:41:42.706715 kubelet[2556]: I0912 17:41:42.704490 2556 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 17:41:42.706715 kubelet[2556]: I0912 17:41:42.704561 2556 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:41:42.706715 kubelet[2556]: I0912 17:41:42.704804 2556 kubelet.go:446] "Attempting to sync node with API server" Sep 12 17:41:42.706942 kubelet[2556]: I0912 17:41:42.704834 2556 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:41:42.706942 kubelet[2556]: I0912 17:41:42.704863 2556 kubelet.go:352] "Adding apiserver pod source" Sep 12 17:41:42.706942 kubelet[2556]: I0912 17:41:42.704877 2556 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:41:42.707627 kubelet[2556]: I0912 17:41:42.707605 2556 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:41:42.708277 kubelet[2556]: I0912 17:41:42.708258 2556 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:41:42.709541 kubelet[2556]: I0912 17:41:42.709515 2556 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:41:42.709647 kubelet[2556]: I0912 17:41:42.709636 2556 server.go:1287] "Started kubelet" Sep 12 17:41:42.711687 kubelet[2556]: I0912 17:41:42.711663 2556 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:41:42.717347 kubelet[2556]: I0912 17:41:42.717291 2556 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:41:42.718904 kubelet[2556]: I0912 17:41:42.718885 2556 server.go:479] "Adding debug handlers to kubelet server" Sep 12 17:41:42.720609 kubelet[2556]: I0912 17:41:42.720567 2556 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:41:42.720979 kubelet[2556]: I0912 17:41:42.720930 2556 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:41:42.721362 kubelet[2556]: I0912 17:41:42.721343 2556 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:41:42.725178 kubelet[2556]: I0912 17:41:42.725141 2556 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:41:42.725278 kubelet[2556]: I0912 17:41:42.725258 2556 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:41:42.725438 kubelet[2556]: I0912 17:41:42.725416 2556 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:41:42.726766 kubelet[2556]: E0912 17:41:42.724940 2556 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:41:42.728101 kubelet[2556]: I0912 17:41:42.728079 2556 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:41:42.728287 kubelet[2556]: I0912 17:41:42.728268 2556 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:41:42.728574 kubelet[2556]: E0912 17:41:42.728556 2556 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:41:42.733907 kubelet[2556]: I0912 17:41:42.733846 2556 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:41:42.734637 kubelet[2556]: I0912 17:41:42.734588 2556 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:41:42.736188 kubelet[2556]: I0912 17:41:42.736152 2556 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:41:42.736254 kubelet[2556]: I0912 17:41:42.736198 2556 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 17:41:42.736904 kubelet[2556]: I0912 17:41:42.736882 2556 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 17:41:42.736904 kubelet[2556]: I0912 17:41:42.736899 2556 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 17:41:42.737032 kubelet[2556]: E0912 17:41:42.736965 2556 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:41:42.774718 sudo[2589]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 17:41:42.775193 sudo[2589]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 17:41:42.799568 kubelet[2556]: I0912 17:41:42.799503 2556 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:41:42.799568 kubelet[2556]: I0912 17:41:42.799538 2556 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:41:42.799568 kubelet[2556]: I0912 17:41:42.799560 2556 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:41:42.799862 kubelet[2556]: I0912 17:41:42.799835 2556 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:41:42.799900 kubelet[2556]: I0912 17:41:42.799855 2556 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:41:42.799900 kubelet[2556]: I0912 17:41:42.799878 2556 policy_none.go:49] "None policy: Start" Sep 12 17:41:42.799900 kubelet[2556]: I0912 17:41:42.799891 2556 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:41:42.799993 kubelet[2556]: I0912 17:41:42.799906 2556 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:41:42.800057 kubelet[2556]: I0912 17:41:42.800035 2556 state_mem.go:75] "Updated machine memory 
state" Sep 12 17:41:42.804780 kubelet[2556]: I0912 17:41:42.804750 2556 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:41:42.805126 kubelet[2556]: I0912 17:41:42.805065 2556 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:41:42.805248 kubelet[2556]: I0912 17:41:42.805083 2556 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:41:42.805671 kubelet[2556]: I0912 17:41:42.805394 2556 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:41:42.806645 kubelet[2556]: E0912 17:41:42.806621 2556 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:41:42.838075 kubelet[2556]: I0912 17:41:42.838012 2556 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:41:42.838628 kubelet[2556]: I0912 17:41:42.838367 2556 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:42.838628 kubelet[2556]: I0912 17:41:42.838413 2556 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:42.912801 kubelet[2556]: I0912 17:41:42.912756 2556 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:41:43.026612 kubelet[2556]: I0912 17:41:43.026558 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/93f77d9ce2260f277f61f2c66adb751f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"93f77d9ce2260f277f61f2c66adb751f\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:43.026612 kubelet[2556]: I0912 17:41:43.026602 2556 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:43.026612 kubelet[2556]: I0912 17:41:43.026624 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:43.026612 kubelet[2556]: I0912 17:41:43.026644 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:43.026910 kubelet[2556]: I0912 17:41:43.026663 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:41:43.026910 kubelet[2556]: I0912 17:41:43.026727 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/93f77d9ce2260f277f61f2c66adb751f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"93f77d9ce2260f277f61f2c66adb751f\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:43.026910 kubelet[2556]: I0912 17:41:43.026803 2556 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/93f77d9ce2260f277f61f2c66adb751f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"93f77d9ce2260f277f61f2c66adb751f\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:43.026910 kubelet[2556]: I0912 17:41:43.026852 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:43.027041 kubelet[2556]: I0912 17:41:43.026913 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:43.281164 sudo[2589]: pam_unix(sudo:session): session closed for user root Sep 12 17:41:43.386304 kubelet[2556]: E0912 17:41:43.386244 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:43.388279 kubelet[2556]: E0912 17:41:43.388172 2556 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:43.388474 kubelet[2556]: E0912 17:41:43.388451 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:43.388636 kubelet[2556]: E0912 17:41:43.388592 2556 kubelet.go:3196] "Failed creating 
a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 17:41:43.388828 kubelet[2556]: E0912 17:41:43.388771 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:43.417362 kubelet[2556]: I0912 17:41:43.416791 2556 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 12 17:41:43.417362 kubelet[2556]: I0912 17:41:43.416901 2556 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 17:41:43.706101 kubelet[2556]: I0912 17:41:43.706036 2556 apiserver.go:52] "Watching apiserver" Sep 12 17:41:43.726412 kubelet[2556]: I0912 17:41:43.726337 2556 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:41:43.773446 kubelet[2556]: E0912 17:41:43.773396 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:43.773617 kubelet[2556]: E0912 17:41:43.773544 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:43.774120 kubelet[2556]: E0912 17:41:43.774085 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:43.802868 kubelet[2556]: I0912 17:41:43.802800 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.80277612 podStartE2EDuration="3.80277612s" podCreationTimestamp="2025-09-12 17:41:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-09-12 17:41:43.793712596 +0000 UTC m=+1.171692893" watchObservedRunningTime="2025-09-12 17:41:43.80277612 +0000 UTC m=+1.180756417" Sep 12 17:41:43.812441 kubelet[2556]: I0912 17:41:43.812393 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.812376537 podStartE2EDuration="3.812376537s" podCreationTimestamp="2025-09-12 17:41:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:41:43.80312602 +0000 UTC m=+1.181106317" watchObservedRunningTime="2025-09-12 17:41:43.812376537 +0000 UTC m=+1.190356844" Sep 12 17:41:43.820842 kubelet[2556]: I0912 17:41:43.820784 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.820764968 podStartE2EDuration="1.820764968s" podCreationTimestamp="2025-09-12 17:41:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:41:43.812594788 +0000 UTC m=+1.190575095" watchObservedRunningTime="2025-09-12 17:41:43.820764968 +0000 UTC m=+1.198745265" Sep 12 17:41:44.723837 sudo[1658]: pam_unix(sudo:session): session closed for user root Sep 12 17:41:44.725918 sshd[1655]: pam_unix(sshd:session): session closed for user core Sep 12 17:41:44.730806 systemd[1]: sshd@8-10.0.0.128:22-10.0.0.1:45270.service: Deactivated successfully. Sep 12 17:41:44.733014 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:41:44.733236 systemd[1]: session-9.scope: Consumed 5.003s CPU time, 159.3M memory peak, 0B memory swap peak. Sep 12 17:41:44.733858 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:41:44.735130 systemd-logind[1444]: Removed session 9. 
Sep 12 17:41:44.775560 kubelet[2556]: E0912 17:41:44.775505 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:44.775560 kubelet[2556]: E0912 17:41:44.775527 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:46.047153 kubelet[2556]: E0912 17:41:46.047078 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:47.189119 kubelet[2556]: I0912 17:41:47.187267 2556 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:41:47.189900 containerd[1460]: time="2025-09-12T17:41:47.189300564Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:41:47.190268 kubelet[2556]: I0912 17:41:47.189889 2556 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:41:48.011009 systemd[1]: Created slice kubepods-besteffort-pod2730883f_b8e2_4fb2_a659_e0f7d33767aa.slice - libcontainer container kubepods-besteffort-pod2730883f_b8e2_4fb2_a659_e0f7d33767aa.slice. 
Sep 12 17:41:48.014157 kubelet[2556]: W0912 17:41:48.013651 2556 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 12 17:41:48.014157 kubelet[2556]: W0912 17:41:48.014074 2556 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 12 17:41:48.014157 kubelet[2556]: E0912 17:41:48.014105 2556 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 12 17:41:48.014157 kubelet[2556]: W0912 17:41:48.014148 2556 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 12 17:41:48.014157 kubelet[2556]: E0912 17:41:48.014163 2556 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" 
logger="UnhandledError" Sep 12 17:41:48.015582 kubelet[2556]: E0912 17:41:48.014476 2556 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 12 17:41:48.025822 systemd[1]: Created slice kubepods-burstable-podcaf90bcc_2d7f_43a6_a5c4_24dc67b3ce37.slice - libcontainer container kubepods-burstable-podcaf90bcc_2d7f_43a6_a5c4_24dc67b3ce37.slice. Sep 12 17:41:48.058650 kubelet[2556]: I0912 17:41:48.058585 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-cilium-config-path\") pod \"cilium-fdpks\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " pod="kube-system/cilium-fdpks" Sep 12 17:41:48.058851 kubelet[2556]: I0912 17:41:48.058683 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-host-proc-sys-kernel\") pod \"cilium-fdpks\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " pod="kube-system/cilium-fdpks" Sep 12 17:41:48.058851 kubelet[2556]: I0912 17:41:48.058722 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2730883f-b8e2-4fb2-a659-e0f7d33767aa-xtables-lock\") pod \"kube-proxy-dr9pq\" (UID: \"2730883f-b8e2-4fb2-a659-e0f7d33767aa\") " pod="kube-system/kube-proxy-dr9pq" Sep 12 17:41:48.058851 kubelet[2556]: I0912 17:41:48.058740 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2730883f-b8e2-4fb2-a659-e0f7d33767aa-lib-modules\") pod \"kube-proxy-dr9pq\" (UID: \"2730883f-b8e2-4fb2-a659-e0f7d33767aa\") " pod="kube-system/kube-proxy-dr9pq" Sep 12 17:41:48.058851 kubelet[2556]: I0912 17:41:48.058758 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-clustermesh-secrets\") pod \"cilium-fdpks\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " pod="kube-system/cilium-fdpks" Sep 12 17:41:48.058851 kubelet[2556]: I0912 17:41:48.058776 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhc2m\" (UniqueName: \"kubernetes.io/projected/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-kube-api-access-vhc2m\") pod \"cilium-fdpks\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " pod="kube-system/cilium-fdpks" Sep 12 17:41:48.059015 kubelet[2556]: I0912 17:41:48.058794 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-host-proc-sys-net\") pod \"cilium-fdpks\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " pod="kube-system/cilium-fdpks" Sep 12 17:41:48.059015 kubelet[2556]: I0912 17:41:48.058812 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-hubble-tls\") pod \"cilium-fdpks\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " pod="kube-system/cilium-fdpks" Sep 12 17:41:48.059015 kubelet[2556]: I0912 17:41:48.058829 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-cilium-run\") pod \"cilium-fdpks\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " pod="kube-system/cilium-fdpks" Sep 12 17:41:48.059015 kubelet[2556]: I0912 17:41:48.058849 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-bpf-maps\") pod \"cilium-fdpks\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " pod="kube-system/cilium-fdpks" Sep 12 17:41:48.059015 kubelet[2556]: I0912 17:41:48.058866 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-hostproc\") pod \"cilium-fdpks\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " pod="kube-system/cilium-fdpks" Sep 12 17:41:48.059015 kubelet[2556]: I0912 17:41:48.058883 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-cilium-cgroup\") pod \"cilium-fdpks\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " pod="kube-system/cilium-fdpks" Sep 12 17:41:48.059188 kubelet[2556]: I0912 17:41:48.058899 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-cni-path\") pod \"cilium-fdpks\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " pod="kube-system/cilium-fdpks" Sep 12 17:41:48.059188 kubelet[2556]: I0912 17:41:48.058915 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-xtables-lock\") pod \"cilium-fdpks\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " 
pod="kube-system/cilium-fdpks" Sep 12 17:41:48.059188 kubelet[2556]: I0912 17:41:48.058940 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2730883f-b8e2-4fb2-a659-e0f7d33767aa-kube-proxy\") pod \"kube-proxy-dr9pq\" (UID: \"2730883f-b8e2-4fb2-a659-e0f7d33767aa\") " pod="kube-system/kube-proxy-dr9pq" Sep 12 17:41:48.059188 kubelet[2556]: I0912 17:41:48.058959 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-etc-cni-netd\") pod \"cilium-fdpks\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " pod="kube-system/cilium-fdpks" Sep 12 17:41:48.059188 kubelet[2556]: I0912 17:41:48.058978 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-lib-modules\") pod \"cilium-fdpks\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " pod="kube-system/cilium-fdpks" Sep 12 17:41:48.059188 kubelet[2556]: I0912 17:41:48.059007 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26pbg\" (UniqueName: \"kubernetes.io/projected/2730883f-b8e2-4fb2-a659-e0f7d33767aa-kube-api-access-26pbg\") pod \"kube-proxy-dr9pq\" (UID: \"2730883f-b8e2-4fb2-a659-e0f7d33767aa\") " pod="kube-system/kube-proxy-dr9pq" Sep 12 17:41:48.321721 kubelet[2556]: E0912 17:41:48.321556 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:48.322893 containerd[1460]: time="2025-09-12T17:41:48.322863472Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-dr9pq,Uid:2730883f-b8e2-4fb2-a659-e0f7d33767aa,Namespace:kube-system,Attempt:0,}" Sep 12 17:41:48.327876 systemd[1]: Created slice kubepods-besteffort-podea05bf95_248f_46c7_a2db_08c6ad9ea3d0.slice - libcontainer container kubepods-besteffort-podea05bf95_248f_46c7_a2db_08c6ad9ea3d0.slice. Sep 12 17:41:48.360346 kubelet[2556]: I0912 17:41:48.360284 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnkgz\" (UniqueName: \"kubernetes.io/projected/ea05bf95-248f-46c7-a2db-08c6ad9ea3d0-kube-api-access-vnkgz\") pod \"cilium-operator-6c4d7847fc-8fvsh\" (UID: \"ea05bf95-248f-46c7-a2db-08c6ad9ea3d0\") " pod="kube-system/cilium-operator-6c4d7847fc-8fvsh" Sep 12 17:41:48.360346 kubelet[2556]: I0912 17:41:48.360327 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea05bf95-248f-46c7-a2db-08c6ad9ea3d0-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-8fvsh\" (UID: \"ea05bf95-248f-46c7-a2db-08c6ad9ea3d0\") " pod="kube-system/cilium-operator-6c4d7847fc-8fvsh" Sep 12 17:41:48.430282 containerd[1460]: time="2025-09-12T17:41:48.429317249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:41:48.430563 containerd[1460]: time="2025-09-12T17:41:48.430341377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:41:48.430563 containerd[1460]: time="2025-09-12T17:41:48.430390239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:48.430711 containerd[1460]: time="2025-09-12T17:41:48.430604703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:48.462947 systemd[1]: Started cri-containerd-c3cfd8b7d7d4388538507ae8c9855591fc99152796d1a2c73131d2fd2b386aae.scope - libcontainer container c3cfd8b7d7d4388538507ae8c9855591fc99152796d1a2c73131d2fd2b386aae. Sep 12 17:41:48.493659 containerd[1460]: time="2025-09-12T17:41:48.493605515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dr9pq,Uid:2730883f-b8e2-4fb2-a659-e0f7d33767aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3cfd8b7d7d4388538507ae8c9855591fc99152796d1a2c73131d2fd2b386aae\"" Sep 12 17:41:48.494496 kubelet[2556]: E0912 17:41:48.494471 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:48.496766 containerd[1460]: time="2025-09-12T17:41:48.496684690Z" level=info msg="CreateContainer within sandbox \"c3cfd8b7d7d4388538507ae8c9855591fc99152796d1a2c73131d2fd2b386aae\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:41:48.528063 containerd[1460]: time="2025-09-12T17:41:48.528004337Z" level=info msg="CreateContainer within sandbox \"c3cfd8b7d7d4388538507ae8c9855591fc99152796d1a2c73131d2fd2b386aae\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2dace0174209754f1f55c302550843e215fb5a6efd63f839fb0b52808eb4129f\"" Sep 12 17:41:48.528769 containerd[1460]: time="2025-09-12T17:41:48.528745331Z" level=info msg="StartContainer for \"2dace0174209754f1f55c302550843e215fb5a6efd63f839fb0b52808eb4129f\"" Sep 12 17:41:48.565850 systemd[1]: Started cri-containerd-2dace0174209754f1f55c302550843e215fb5a6efd63f839fb0b52808eb4129f.scope - libcontainer container 2dace0174209754f1f55c302550843e215fb5a6efd63f839fb0b52808eb4129f. 
Sep 12 17:41:48.639738 containerd[1460]: time="2025-09-12T17:41:48.638907470Z" level=info msg="StartContainer for \"2dace0174209754f1f55c302550843e215fb5a6efd63f839fb0b52808eb4129f\" returns successfully" Sep 12 17:41:48.785979 kubelet[2556]: E0912 17:41:48.785939 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:49.004945 kubelet[2556]: E0912 17:41:49.004899 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:49.020194 kubelet[2556]: I0912 17:41:49.020129 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dr9pq" podStartSLOduration=2.020107368 podStartE2EDuration="2.020107368s" podCreationTimestamp="2025-09-12 17:41:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:41:48.806323672 +0000 UTC m=+6.184303969" watchObservedRunningTime="2025-09-12 17:41:49.020107368 +0000 UTC m=+6.398087665" Sep 12 17:41:49.160861 kubelet[2556]: E0912 17:41:49.160801 2556 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Sep 12 17:41:49.160861 kubelet[2556]: E0912 17:41:49.160845 2556 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-fdpks: failed to sync secret cache: timed out waiting for the condition Sep 12 17:41:49.161083 kubelet[2556]: E0912 17:41:49.160932 2556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-hubble-tls podName:caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37 nodeName:}" failed. 
No retries permitted until 2025-09-12 17:41:49.660904304 +0000 UTC m=+7.038884601 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-hubble-tls") pod "cilium-fdpks" (UID: "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37") : failed to sync secret cache: timed out waiting for the condition Sep 12 17:41:49.230983 kubelet[2556]: E0912 17:41:49.230914 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:49.231618 containerd[1460]: time="2025-09-12T17:41:49.231559685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8fvsh,Uid:ea05bf95-248f-46c7-a2db-08c6ad9ea3d0,Namespace:kube-system,Attempt:0,}" Sep 12 17:41:49.262618 containerd[1460]: time="2025-09-12T17:41:49.262419482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:41:49.262618 containerd[1460]: time="2025-09-12T17:41:49.262490636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:41:49.262618 containerd[1460]: time="2025-09-12T17:41:49.262504302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:49.262822 containerd[1460]: time="2025-09-12T17:41:49.262623176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:49.293869 systemd[1]: Started cri-containerd-82e6945f9c3f3da5b46f763aaa775041b9d611f47f0f89a33bff57fd767464c4.scope - libcontainer container 82e6945f9c3f3da5b46f763aaa775041b9d611f47f0f89a33bff57fd767464c4. 
Sep 12 17:41:49.338986 containerd[1460]: time="2025-09-12T17:41:49.338918131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8fvsh,Uid:ea05bf95-248f-46c7-a2db-08c6ad9ea3d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"82e6945f9c3f3da5b46f763aaa775041b9d611f47f0f89a33bff57fd767464c4\"" Sep 12 17:41:49.339997 kubelet[2556]: E0912 17:41:49.339671 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:49.341550 containerd[1460]: time="2025-09-12T17:41:49.341501022Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 17:41:49.788393 kubelet[2556]: E0912 17:41:49.788348 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:49.831066 kubelet[2556]: E0912 17:41:49.831019 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:49.833729 containerd[1460]: time="2025-09-12T17:41:49.832078886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fdpks,Uid:caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37,Namespace:kube-system,Attempt:0,}" Sep 12 17:41:50.083206 containerd[1460]: time="2025-09-12T17:41:50.082573047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:41:50.083206 containerd[1460]: time="2025-09-12T17:41:50.082648409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:41:50.083206 containerd[1460]: time="2025-09-12T17:41:50.082662836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:50.083206 containerd[1460]: time="2025-09-12T17:41:50.082815904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:50.106930 systemd[1]: Started cri-containerd-5add004cbeaf20f68b1da04b683ef30ba4d554e19fe4855879ba7b56af2e3bb9.scope - libcontainer container 5add004cbeaf20f68b1da04b683ef30ba4d554e19fe4855879ba7b56af2e3bb9. Sep 12 17:41:50.136412 containerd[1460]: time="2025-09-12T17:41:50.136339142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fdpks,Uid:caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37,Namespace:kube-system,Attempt:0,} returns sandbox id \"5add004cbeaf20f68b1da04b683ef30ba4d554e19fe4855879ba7b56af2e3bb9\"" Sep 12 17:41:50.138577 kubelet[2556]: E0912 17:41:50.138536 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:50.791763 kubelet[2556]: E0912 17:41:50.791727 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:51.225309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3002907948.mount: Deactivated successfully. 
Sep 12 17:41:51.426643 kubelet[2556]: E0912 17:41:51.426599 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:51.610953 containerd[1460]: time="2025-09-12T17:41:51.610833418Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:51.611602 containerd[1460]: time="2025-09-12T17:41:51.611561337Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 12 17:41:51.612629 containerd[1460]: time="2025-09-12T17:41:51.612599170Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:51.614068 containerd[1460]: time="2025-09-12T17:41:51.614036704Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.272498942s" Sep 12 17:41:51.614112 containerd[1460]: time="2025-09-12T17:41:51.614069776Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 12 17:41:51.615717 containerd[1460]: time="2025-09-12T17:41:51.615547907Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 17:41:51.619161 containerd[1460]: time="2025-09-12T17:41:51.618934417Z" level=info msg="CreateContainer within sandbox \"82e6945f9c3f3da5b46f763aaa775041b9d611f47f0f89a33bff57fd767464c4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 17:41:51.632441 containerd[1460]: time="2025-09-12T17:41:51.632394810Z" level=info msg="CreateContainer within sandbox \"82e6945f9c3f3da5b46f763aaa775041b9d611f47f0f89a33bff57fd767464c4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"aed8bc7993f5988ddd02ba1908c7cc8cd4296ea7dac67d9c04acfc4813781365\"" Sep 12 17:41:51.632975 containerd[1460]: time="2025-09-12T17:41:51.632955474Z" level=info msg="StartContainer for \"aed8bc7993f5988ddd02ba1908c7cc8cd4296ea7dac67d9c04acfc4813781365\"" Sep 12 17:41:51.663890 systemd[1]: Started cri-containerd-aed8bc7993f5988ddd02ba1908c7cc8cd4296ea7dac67d9c04acfc4813781365.scope - libcontainer container aed8bc7993f5988ddd02ba1908c7cc8cd4296ea7dac67d9c04acfc4813781365. 
Sep 12 17:41:51.693753 containerd[1460]: time="2025-09-12T17:41:51.693689377Z" level=info msg="StartContainer for \"aed8bc7993f5988ddd02ba1908c7cc8cd4296ea7dac67d9c04acfc4813781365\" returns successfully" Sep 12 17:41:51.798721 kubelet[2556]: E0912 17:41:51.796062 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:51.798721 kubelet[2556]: E0912 17:41:51.796091 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:51.832754 kubelet[2556]: I0912 17:41:51.832651 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-8fvsh" podStartSLOduration=1.558608337 podStartE2EDuration="3.832631898s" podCreationTimestamp="2025-09-12 17:41:48 +0000 UTC" firstStartedPulling="2025-09-12 17:41:49.340857411 +0000 UTC m=+6.718837708" lastFinishedPulling="2025-09-12 17:41:51.614880972 +0000 UTC m=+8.992861269" observedRunningTime="2025-09-12 17:41:51.811865943 +0000 UTC m=+9.189846250" watchObservedRunningTime="2025-09-12 17:41:51.832631898 +0000 UTC m=+9.210612195" Sep 12 17:41:52.797869 kubelet[2556]: E0912 17:41:52.797804 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:52.799089 kubelet[2556]: E0912 17:41:52.798366 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:56.052130 kubelet[2556]: E0912 17:41:56.052082 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Sep 12 17:41:57.734838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1477276360.mount: Deactivated successfully. Sep 12 17:42:02.076642 containerd[1460]: time="2025-09-12T17:42:02.076539241Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:02.078464 containerd[1460]: time="2025-09-12T17:42:02.078349602Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 12 17:42:02.080988 containerd[1460]: time="2025-09-12T17:42:02.080874815Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:02.082387 containerd[1460]: time="2025-09-12T17:42:02.082340317Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.466761693s" Sep 12 17:42:02.082466 containerd[1460]: time="2025-09-12T17:42:02.082388758Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 12 17:42:02.095049 containerd[1460]: time="2025-09-12T17:42:02.094936810Z" level=info msg="CreateContainer within sandbox \"5add004cbeaf20f68b1da04b683ef30ba4d554e19fe4855879ba7b56af2e3bb9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:42:02.113348 containerd[1460]: 
time="2025-09-12T17:42:02.113274104Z" level=info msg="CreateContainer within sandbox \"5add004cbeaf20f68b1da04b683ef30ba4d554e19fe4855879ba7b56af2e3bb9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"af3e5f569dadc52f665aabe6cc0b3038af55d084027a81fb9102caf8e9b605aa\"" Sep 12 17:42:02.113997 containerd[1460]: time="2025-09-12T17:42:02.113938092Z" level=info msg="StartContainer for \"af3e5f569dadc52f665aabe6cc0b3038af55d084027a81fb9102caf8e9b605aa\"" Sep 12 17:42:02.152981 systemd[1]: Started cri-containerd-af3e5f569dadc52f665aabe6cc0b3038af55d084027a81fb9102caf8e9b605aa.scope - libcontainer container af3e5f569dadc52f665aabe6cc0b3038af55d084027a81fb9102caf8e9b605aa. Sep 12 17:42:02.218254 containerd[1460]: time="2025-09-12T17:42:02.218185672Z" level=info msg="StartContainer for \"af3e5f569dadc52f665aabe6cc0b3038af55d084027a81fb9102caf8e9b605aa\" returns successfully" Sep 12 17:42:02.230047 systemd[1]: cri-containerd-af3e5f569dadc52f665aabe6cc0b3038af55d084027a81fb9102caf8e9b605aa.scope: Deactivated successfully. 
Sep 12 17:42:02.740454 containerd[1460]: time="2025-09-12T17:42:02.737997757Z" level=info msg="shim disconnected" id=af3e5f569dadc52f665aabe6cc0b3038af55d084027a81fb9102caf8e9b605aa namespace=k8s.io Sep 12 17:42:02.740454 containerd[1460]: time="2025-09-12T17:42:02.740430216Z" level=warning msg="cleaning up after shim disconnected" id=af3e5f569dadc52f665aabe6cc0b3038af55d084027a81fb9102caf8e9b605aa namespace=k8s.io Sep 12 17:42:02.740454 containerd[1460]: time="2025-09-12T17:42:02.740442859Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:42:02.920564 kubelet[2556]: E0912 17:42:02.920508 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:42:02.922870 containerd[1460]: time="2025-09-12T17:42:02.922822054Z" level=info msg="CreateContainer within sandbox \"5add004cbeaf20f68b1da04b683ef30ba4d554e19fe4855879ba7b56af2e3bb9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:42:02.940466 containerd[1460]: time="2025-09-12T17:42:02.940407627Z" level=info msg="CreateContainer within sandbox \"5add004cbeaf20f68b1da04b683ef30ba4d554e19fe4855879ba7b56af2e3bb9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e95a610ed38366aa8a89e1098bef6e607ff51cffd8ac1c2b90644e90c55ae1dd\"" Sep 12 17:42:02.940945 containerd[1460]: time="2025-09-12T17:42:02.940915461Z" level=info msg="StartContainer for \"e95a610ed38366aa8a89e1098bef6e607ff51cffd8ac1c2b90644e90c55ae1dd\"" Sep 12 17:42:02.972862 systemd[1]: Started cri-containerd-e95a610ed38366aa8a89e1098bef6e607ff51cffd8ac1c2b90644e90c55ae1dd.scope - libcontainer container e95a610ed38366aa8a89e1098bef6e607ff51cffd8ac1c2b90644e90c55ae1dd. 
Sep 12 17:42:03.003252 containerd[1460]: time="2025-09-12T17:42:03.003100591Z" level=info msg="StartContainer for \"e95a610ed38366aa8a89e1098bef6e607ff51cffd8ac1c2b90644e90c55ae1dd\" returns successfully" Sep 12 17:42:03.017093 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:42:03.017377 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:42:03.017470 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:42:03.025146 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:42:03.025433 systemd[1]: cri-containerd-e95a610ed38366aa8a89e1098bef6e607ff51cffd8ac1c2b90644e90c55ae1dd.scope: Deactivated successfully. Sep 12 17:42:03.053812 containerd[1460]: time="2025-09-12T17:42:03.053724024Z" level=info msg="shim disconnected" id=e95a610ed38366aa8a89e1098bef6e607ff51cffd8ac1c2b90644e90c55ae1dd namespace=k8s.io Sep 12 17:42:03.053812 containerd[1460]: time="2025-09-12T17:42:03.053794516Z" level=warning msg="cleaning up after shim disconnected" id=e95a610ed38366aa8a89e1098bef6e607ff51cffd8ac1c2b90644e90c55ae1dd namespace=k8s.io Sep 12 17:42:03.053812 containerd[1460]: time="2025-09-12T17:42:03.053805997Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:42:03.055343 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:42:03.106996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af3e5f569dadc52f665aabe6cc0b3038af55d084027a81fb9102caf8e9b605aa-rootfs.mount: Deactivated successfully. 
Sep 12 17:42:03.923396 kubelet[2556]: E0912 17:42:03.923356 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:42:03.924998 containerd[1460]: time="2025-09-12T17:42:03.924947668Z" level=info msg="CreateContainer within sandbox \"5add004cbeaf20f68b1da04b683ef30ba4d554e19fe4855879ba7b56af2e3bb9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:42:03.995392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4068898918.mount: Deactivated successfully. Sep 12 17:42:04.004693 containerd[1460]: time="2025-09-12T17:42:04.004629981Z" level=info msg="CreateContainer within sandbox \"5add004cbeaf20f68b1da04b683ef30ba4d554e19fe4855879ba7b56af2e3bb9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f33f4fc6205b3f2958c3dd0fc421375ec628cb250229afeabdd191507c9e78d3\"" Sep 12 17:42:04.005245 containerd[1460]: time="2025-09-12T17:42:04.005195914Z" level=info msg="StartContainer for \"f33f4fc6205b3f2958c3dd0fc421375ec628cb250229afeabdd191507c9e78d3\"" Sep 12 17:42:04.037927 systemd[1]: Started cri-containerd-f33f4fc6205b3f2958c3dd0fc421375ec628cb250229afeabdd191507c9e78d3.scope - libcontainer container f33f4fc6205b3f2958c3dd0fc421375ec628cb250229afeabdd191507c9e78d3. Sep 12 17:42:04.075232 systemd[1]: cri-containerd-f33f4fc6205b3f2958c3dd0fc421375ec628cb250229afeabdd191507c9e78d3.scope: Deactivated successfully. Sep 12 17:42:04.076477 containerd[1460]: time="2025-09-12T17:42:04.076440528Z" level=info msg="StartContainer for \"f33f4fc6205b3f2958c3dd0fc421375ec628cb250229afeabdd191507c9e78d3\" returns successfully" Sep 12 17:42:04.106369 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f33f4fc6205b3f2958c3dd0fc421375ec628cb250229afeabdd191507c9e78d3-rootfs.mount: Deactivated successfully. 
Sep 12 17:42:04.107887 containerd[1460]: time="2025-09-12T17:42:04.107814691Z" level=info msg="shim disconnected" id=f33f4fc6205b3f2958c3dd0fc421375ec628cb250229afeabdd191507c9e78d3 namespace=k8s.io Sep 12 17:42:04.107992 containerd[1460]: time="2025-09-12T17:42:04.107890223Z" level=warning msg="cleaning up after shim disconnected" id=f33f4fc6205b3f2958c3dd0fc421375ec628cb250229afeabdd191507c9e78d3 namespace=k8s.io Sep 12 17:42:04.107992 containerd[1460]: time="2025-09-12T17:42:04.107907385Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:42:04.927546 kubelet[2556]: E0912 17:42:04.927481 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:42:04.929544 containerd[1460]: time="2025-09-12T17:42:04.929390555Z" level=info msg="CreateContainer within sandbox \"5add004cbeaf20f68b1da04b683ef30ba4d554e19fe4855879ba7b56af2e3bb9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:42:04.948172 containerd[1460]: time="2025-09-12T17:42:04.948104913Z" level=info msg="CreateContainer within sandbox \"5add004cbeaf20f68b1da04b683ef30ba4d554e19fe4855879ba7b56af2e3bb9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7b7b8a47152e718aea4d658c12c835af4502d87e6c61ad9c47d5b9d0ae24d051\"" Sep 12 17:42:04.949742 containerd[1460]: time="2025-09-12T17:42:04.948765323Z" level=info msg="StartContainer for \"7b7b8a47152e718aea4d658c12c835af4502d87e6c61ad9c47d5b9d0ae24d051\"" Sep 12 17:42:04.980843 systemd[1]: Started cri-containerd-7b7b8a47152e718aea4d658c12c835af4502d87e6c61ad9c47d5b9d0ae24d051.scope - libcontainer container 7b7b8a47152e718aea4d658c12c835af4502d87e6c61ad9c47d5b9d0ae24d051. Sep 12 17:42:05.017569 systemd[1]: cri-containerd-7b7b8a47152e718aea4d658c12c835af4502d87e6c61ad9c47d5b9d0ae24d051.scope: Deactivated successfully. 
Sep 12 17:42:05.026464 containerd[1460]: time="2025-09-12T17:42:05.026409852Z" level=info msg="StartContainer for \"7b7b8a47152e718aea4d658c12c835af4502d87e6c61ad9c47d5b9d0ae24d051\" returns successfully" Sep 12 17:42:05.096040 containerd[1460]: time="2025-09-12T17:42:05.095206679Z" level=info msg="shim disconnected" id=7b7b8a47152e718aea4d658c12c835af4502d87e6c61ad9c47d5b9d0ae24d051 namespace=k8s.io Sep 12 17:42:05.096040 containerd[1460]: time="2025-09-12T17:42:05.095267854Z" level=warning msg="cleaning up after shim disconnected" id=7b7b8a47152e718aea4d658c12c835af4502d87e6c61ad9c47d5b9d0ae24d051 namespace=k8s.io Sep 12 17:42:05.096040 containerd[1460]: time="2025-09-12T17:42:05.095277672Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:42:05.123449 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b7b8a47152e718aea4d658c12c835af4502d87e6c61ad9c47d5b9d0ae24d051-rootfs.mount: Deactivated successfully. Sep 12 17:42:05.949487 kubelet[2556]: E0912 17:42:05.949424 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:42:05.958096 containerd[1460]: time="2025-09-12T17:42:05.956953641Z" level=info msg="CreateContainer within sandbox \"5add004cbeaf20f68b1da04b683ef30ba4d554e19fe4855879ba7b56af2e3bb9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:42:06.019177 containerd[1460]: time="2025-09-12T17:42:06.018686129Z" level=info msg="CreateContainer within sandbox \"5add004cbeaf20f68b1da04b683ef30ba4d554e19fe4855879ba7b56af2e3bb9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e0ad4911b3e01e2f1d0cc5ffee67a9f0b51558f5acd4e5bcbc8c8c13c5a70a4d\"" Sep 12 17:42:06.020319 containerd[1460]: time="2025-09-12T17:42:06.020257249Z" level=info msg="StartContainer for \"e0ad4911b3e01e2f1d0cc5ffee67a9f0b51558f5acd4e5bcbc8c8c13c5a70a4d\"" Sep 12 17:42:06.079283 
systemd[1]: Started cri-containerd-e0ad4911b3e01e2f1d0cc5ffee67a9f0b51558f5acd4e5bcbc8c8c13c5a70a4d.scope - libcontainer container e0ad4911b3e01e2f1d0cc5ffee67a9f0b51558f5acd4e5bcbc8c8c13c5a70a4d. Sep 12 17:42:06.121677 containerd[1460]: time="2025-09-12T17:42:06.121616251Z" level=info msg="StartContainer for \"e0ad4911b3e01e2f1d0cc5ffee67a9f0b51558f5acd4e5bcbc8c8c13c5a70a4d\" returns successfully" Sep 12 17:42:06.273415 kubelet[2556]: I0912 17:42:06.273252 2556 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 17:42:06.323807 systemd[1]: Created slice kubepods-burstable-pod659ba262_52ad_460a_80d2_23a2dfaece4d.slice - libcontainer container kubepods-burstable-pod659ba262_52ad_460a_80d2_23a2dfaece4d.slice. Sep 12 17:42:06.334962 systemd[1]: Created slice kubepods-burstable-pod5442db6f_5efb_49e0_918c_fc5d4b2754e9.slice - libcontainer container kubepods-burstable-pod5442db6f_5efb_49e0_918c_fc5d4b2754e9.slice. Sep 12 17:42:06.479599 kubelet[2556]: I0912 17:42:06.479517 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5djhm\" (UniqueName: \"kubernetes.io/projected/659ba262-52ad-460a-80d2-23a2dfaece4d-kube-api-access-5djhm\") pod \"coredns-668d6bf9bc-kkdwk\" (UID: \"659ba262-52ad-460a-80d2-23a2dfaece4d\") " pod="kube-system/coredns-668d6bf9bc-kkdwk" Sep 12 17:42:06.479599 kubelet[2556]: I0912 17:42:06.479581 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xptp\" (UniqueName: \"kubernetes.io/projected/5442db6f-5efb-49e0-918c-fc5d4b2754e9-kube-api-access-2xptp\") pod \"coredns-668d6bf9bc-pr24j\" (UID: \"5442db6f-5efb-49e0-918c-fc5d4b2754e9\") " pod="kube-system/coredns-668d6bf9bc-pr24j" Sep 12 17:42:06.479599 kubelet[2556]: I0912 17:42:06.479603 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/5442db6f-5efb-49e0-918c-fc5d4b2754e9-config-volume\") pod \"coredns-668d6bf9bc-pr24j\" (UID: \"5442db6f-5efb-49e0-918c-fc5d4b2754e9\") " pod="kube-system/coredns-668d6bf9bc-pr24j" Sep 12 17:42:06.479599 kubelet[2556]: I0912 17:42:06.479622 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/659ba262-52ad-460a-80d2-23a2dfaece4d-config-volume\") pod \"coredns-668d6bf9bc-kkdwk\" (UID: \"659ba262-52ad-460a-80d2-23a2dfaece4d\") " pod="kube-system/coredns-668d6bf9bc-kkdwk" Sep 12 17:42:06.631119 kubelet[2556]: E0912 17:42:06.630953 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:42:06.632139 containerd[1460]: time="2025-09-12T17:42:06.632082266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kkdwk,Uid:659ba262-52ad-460a-80d2-23a2dfaece4d,Namespace:kube-system,Attempt:0,}" Sep 12 17:42:06.639578 kubelet[2556]: E0912 17:42:06.639535 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:42:06.641300 containerd[1460]: time="2025-09-12T17:42:06.641228250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pr24j,Uid:5442db6f-5efb-49e0-918c-fc5d4b2754e9,Namespace:kube-system,Attempt:0,}" Sep 12 17:42:06.948817 kubelet[2556]: E0912 17:42:06.948716 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:42:06.964208 kubelet[2556]: I0912 17:42:06.964123 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fdpks" podStartSLOduration=8.017863961 
podStartE2EDuration="19.96410033s" podCreationTimestamp="2025-09-12 17:41:47 +0000 UTC" firstStartedPulling="2025-09-12 17:41:50.139161252 +0000 UTC m=+7.517141549" lastFinishedPulling="2025-09-12 17:42:02.085397621 +0000 UTC m=+19.463377918" observedRunningTime="2025-09-12 17:42:06.963816568 +0000 UTC m=+24.341796865" watchObservedRunningTime="2025-09-12 17:42:06.96410033 +0000 UTC m=+24.342080627" Sep 12 17:42:07.951195 kubelet[2556]: E0912 17:42:07.951144 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:42:08.452198 systemd-networkd[1392]: cilium_host: Link UP Sep 12 17:42:08.452420 systemd-networkd[1392]: cilium_net: Link UP Sep 12 17:42:08.452671 systemd-networkd[1392]: cilium_net: Gained carrier Sep 12 17:42:08.452926 systemd-networkd[1392]: cilium_host: Gained carrier Sep 12 17:42:08.480951 systemd-networkd[1392]: cilium_host: Gained IPv6LL Sep 12 17:42:08.601292 systemd-networkd[1392]: cilium_vxlan: Link UP Sep 12 17:42:08.601607 systemd-networkd[1392]: cilium_vxlan: Gained carrier Sep 12 17:42:08.893738 kernel: NET: Registered PF_ALG protocol family Sep 12 17:42:08.935866 systemd-networkd[1392]: cilium_net: Gained IPv6LL Sep 12 17:42:08.953475 kubelet[2556]: E0912 17:42:08.953430 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:42:09.721290 systemd-networkd[1392]: lxc_health: Link UP Sep 12 17:42:09.731999 systemd-networkd[1392]: lxc_health: Gained carrier Sep 12 17:42:09.857022 systemd-networkd[1392]: cilium_vxlan: Gained IPv6LL Sep 12 17:42:09.955678 kubelet[2556]: E0912 17:42:09.955630 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:42:10.283839 
systemd-networkd[1392]: lxcea442f12f40e: Link UP Sep 12 17:42:10.309759 kernel: eth0: renamed from tmp99d74 Sep 12 17:42:10.318652 systemd-networkd[1392]: lxc0924238e99db: Link UP Sep 12 17:42:10.319618 systemd-networkd[1392]: lxcea442f12f40e: Gained carrier Sep 12 17:42:10.332731 kernel: eth0: renamed from tmp71d73 Sep 12 17:42:10.343841 systemd-networkd[1392]: lxc0924238e99db: Gained carrier Sep 12 17:42:10.424956 systemd[1]: Started sshd@9-10.0.0.128:22-10.0.0.1:53166.service - OpenSSH per-connection server daemon (10.0.0.1:53166). Sep 12 17:42:10.488201 sshd[3760]: Accepted publickey for core from 10.0.0.1 port 53166 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:42:10.491380 sshd[3760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:10.499318 systemd-logind[1444]: New session 10 of user core. Sep 12 17:42:10.506344 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:42:10.880002 systemd-networkd[1392]: lxc_health: Gained IPv6LL Sep 12 17:42:11.135688 sshd[3760]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:11.140909 systemd[1]: sshd@9-10.0.0.128:22-10.0.0.1:53166.service: Deactivated successfully. Sep 12 17:42:11.143594 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:42:11.144441 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:42:11.145412 systemd-logind[1444]: Removed session 10. 
Sep 12 17:42:12.100803 systemd-networkd[1392]: lxcea442f12f40e: Gained IPv6LL Sep 12 17:42:12.163088 systemd-networkd[1392]: lxc0924238e99db: Gained IPv6LL Sep 12 17:42:14.706334 kubelet[2556]: I0912 17:42:14.706174 2556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:42:14.707273 kubelet[2556]: E0912 17:42:14.706783 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:42:14.972899 kubelet[2556]: E0912 17:42:14.971094 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:42:15.029574 containerd[1460]: time="2025-09-12T17:42:15.029376267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:15.029574 containerd[1460]: time="2025-09-12T17:42:15.029488798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:15.029574 containerd[1460]: time="2025-09-12T17:42:15.029505720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:15.030114 containerd[1460]: time="2025-09-12T17:42:15.029633942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:15.059936 systemd[1]: Started cri-containerd-99d7409c4e47ab99a87c034443d48fcdbf0bacfe1598f898f394f2b0c6ff0ced.scope - libcontainer container 99d7409c4e47ab99a87c034443d48fcdbf0bacfe1598f898f394f2b0c6ff0ced. 
Sep 12 17:42:15.085533 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:42:15.089727 containerd[1460]: time="2025-09-12T17:42:15.089520161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:15.089727 containerd[1460]: time="2025-09-12T17:42:15.089677095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:15.089938 containerd[1460]: time="2025-09-12T17:42:15.089752948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:15.091735 containerd[1460]: time="2025-09-12T17:42:15.089961188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:15.115998 systemd[1]: Started cri-containerd-71d73731748ea478c2270a285b140463117bf5a614f45df7c9b817b5298bfdc1.scope - libcontainer container 71d73731748ea478c2270a285b140463117bf5a614f45df7c9b817b5298bfdc1. 
Sep 12 17:42:15.125042 containerd[1460]: time="2025-09-12T17:42:15.124948786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kkdwk,Uid:659ba262-52ad-460a-80d2-23a2dfaece4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"99d7409c4e47ab99a87c034443d48fcdbf0bacfe1598f898f394f2b0c6ff0ced\"" Sep 12 17:42:15.127074 kubelet[2556]: E0912 17:42:15.126941 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:42:15.137452 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:42:15.147833 containerd[1460]: time="2025-09-12T17:42:15.147748659Z" level=info msg="CreateContainer within sandbox \"99d7409c4e47ab99a87c034443d48fcdbf0bacfe1598f898f394f2b0c6ff0ced\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:42:15.174766 containerd[1460]: time="2025-09-12T17:42:15.174283311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pr24j,Uid:5442db6f-5efb-49e0-918c-fc5d4b2754e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"71d73731748ea478c2270a285b140463117bf5a614f45df7c9b817b5298bfdc1\"" Sep 12 17:42:15.176192 kubelet[2556]: E0912 17:42:15.175946 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:42:15.179964 containerd[1460]: time="2025-09-12T17:42:15.179860909Z" level=info msg="CreateContainer within sandbox \"71d73731748ea478c2270a285b140463117bf5a614f45df7c9b817b5298bfdc1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:42:15.214665 containerd[1460]: time="2025-09-12T17:42:15.213391433Z" level=info msg="CreateContainer within sandbox \"99d7409c4e47ab99a87c034443d48fcdbf0bacfe1598f898f394f2b0c6ff0ced\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9cc48d7d912420db8d74d397d38661a4629d30e268823c65c2f57d2939679094\"" Sep 12 17:42:15.215869 containerd[1460]: time="2025-09-12T17:42:15.215809000Z" level=info msg="StartContainer for \"9cc48d7d912420db8d74d397d38661a4629d30e268823c65c2f57d2939679094\"" Sep 12 17:42:15.218541 containerd[1460]: time="2025-09-12T17:42:15.218461589Z" level=info msg="CreateContainer within sandbox \"71d73731748ea478c2270a285b140463117bf5a614f45df7c9b817b5298bfdc1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8a83a461c03ab90e185d601d4836e7240e20f163bdca6cc95bf0ffaee94346b4\"" Sep 12 17:42:15.219995 containerd[1460]: time="2025-09-12T17:42:15.219476013Z" level=info msg="StartContainer for \"8a83a461c03ab90e185d601d4836e7240e20f163bdca6cc95bf0ffaee94346b4\"" Sep 12 17:42:15.265211 systemd[1]: Started cri-containerd-8a83a461c03ab90e185d601d4836e7240e20f163bdca6cc95bf0ffaee94346b4.scope - libcontainer container 8a83a461c03ab90e185d601d4836e7240e20f163bdca6cc95bf0ffaee94346b4. Sep 12 17:42:15.269103 systemd[1]: Started cri-containerd-9cc48d7d912420db8d74d397d38661a4629d30e268823c65c2f57d2939679094.scope - libcontainer container 9cc48d7d912420db8d74d397d38661a4629d30e268823c65c2f57d2939679094. 
Sep 12 17:42:15.315563 containerd[1460]: time="2025-09-12T17:42:15.315507362Z" level=info msg="StartContainer for \"9cc48d7d912420db8d74d397d38661a4629d30e268823c65c2f57d2939679094\" returns successfully" Sep 12 17:42:15.315796 containerd[1460]: time="2025-09-12T17:42:15.315509346Z" level=info msg="StartContainer for \"8a83a461c03ab90e185d601d4836e7240e20f163bdca6cc95bf0ffaee94346b4\" returns successfully" Sep 12 17:42:15.975643 kubelet[2556]: E0912 17:42:15.974931 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:42:15.979253 kubelet[2556]: E0912 17:42:15.979134 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:42:15.998671 kubelet[2556]: I0912 17:42:15.998597 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pr24j" podStartSLOduration=27.99856964 podStartE2EDuration="27.99856964s" podCreationTimestamp="2025-09-12 17:41:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:42:15.987405166 +0000 UTC m=+33.365385463" watchObservedRunningTime="2025-09-12 17:42:15.99856964 +0000 UTC m=+33.376549937" Sep 12 17:42:16.012838 kubelet[2556]: I0912 17:42:16.012294 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kkdwk" podStartSLOduration=28.012249384 podStartE2EDuration="28.012249384s" podCreationTimestamp="2025-09-12 17:41:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:42:15.999430285 +0000 UTC m=+33.377410592" watchObservedRunningTime="2025-09-12 17:42:16.012249384 +0000 UTC 
m=+33.390229691" Sep 12 17:42:16.041808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2548825594.mount: Deactivated successfully. Sep 12 17:42:16.151801 systemd[1]: Started sshd@10-10.0.0.128:22-10.0.0.1:53172.service - OpenSSH per-connection server daemon (10.0.0.1:53172). Sep 12 17:42:16.206854 sshd[3960]: Accepted publickey for core from 10.0.0.1 port 53172 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:42:16.209340 sshd[3960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:16.215218 systemd-logind[1444]: New session 11 of user core. Sep 12 17:42:16.228067 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:42:16.457090 sshd[3960]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:16.465858 systemd[1]: sshd@10-10.0.0.128:22-10.0.0.1:53172.service: Deactivated successfully. Sep 12 17:42:16.469956 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:42:16.475963 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:42:16.483644 systemd-logind[1444]: Removed session 11. 
Sep 12 17:42:16.987586 kubelet[2556]: E0912 17:42:16.987136 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:42:16.988402 kubelet[2556]: E0912 17:42:16.988254 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:42:17.989403 kubelet[2556]: E0912 17:42:17.989347 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:42:17.990016 kubelet[2556]: E0912 17:42:17.989593 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:42:21.478262 systemd[1]: Started sshd@11-10.0.0.128:22-10.0.0.1:54128.service - OpenSSH per-connection server daemon (10.0.0.1:54128). Sep 12 17:42:21.551613 sshd[3977]: Accepted publickey for core from 10.0.0.1 port 54128 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:42:21.554201 sshd[3977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:21.564687 systemd-logind[1444]: New session 12 of user core. Sep 12 17:42:21.578090 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:42:21.810138 sshd[3977]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:21.816460 systemd[1]: sshd@11-10.0.0.128:22-10.0.0.1:54128.service: Deactivated successfully. Sep 12 17:42:21.822193 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:42:21.825986 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:42:21.828875 systemd-logind[1444]: Removed session 12. 
Sep 12 17:42:26.871379 systemd[1]: Started sshd@12-10.0.0.128:22-10.0.0.1:54136.service - OpenSSH per-connection server daemon (10.0.0.1:54136). Sep 12 17:42:26.939231 sshd[3993]: Accepted publickey for core from 10.0.0.1 port 54136 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:42:26.944974 sshd[3993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:26.983479 systemd-logind[1444]: New session 13 of user core. Sep 12 17:42:27.004461 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:42:27.313768 sshd[3993]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:27.319615 systemd[1]: sshd@12-10.0.0.128:22-10.0.0.1:54136.service: Deactivated successfully. Sep 12 17:42:27.331528 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:42:27.345991 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:42:27.353267 systemd-logind[1444]: Removed session 13. Sep 12 17:42:32.329486 systemd[1]: Started sshd@13-10.0.0.128:22-10.0.0.1:40006.service - OpenSSH per-connection server daemon (10.0.0.1:40006). Sep 12 17:42:32.379929 sshd[4008]: Accepted publickey for core from 10.0.0.1 port 40006 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:42:32.381776 sshd[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:32.386349 systemd-logind[1444]: New session 14 of user core. Sep 12 17:42:32.398930 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:42:32.517908 sshd[4008]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:32.523383 systemd[1]: sshd@13-10.0.0.128:22-10.0.0.1:40006.service: Deactivated successfully. Sep 12 17:42:32.526298 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:42:32.527266 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit. 
Sep 12 17:42:32.528445 systemd-logind[1444]: Removed session 14. Sep 12 17:42:37.574249 systemd[1]: Started sshd@14-10.0.0.128:22-10.0.0.1:40008.service - OpenSSH per-connection server daemon (10.0.0.1:40008). Sep 12 17:42:37.640580 sshd[4023]: Accepted publickey for core from 10.0.0.1 port 40008 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:42:37.641545 sshd[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:37.664132 systemd-logind[1444]: New session 15 of user core. Sep 12 17:42:37.672747 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:42:37.844381 sshd[4023]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:37.861790 systemd[1]: sshd@14-10.0.0.128:22-10.0.0.1:40008.service: Deactivated successfully. Sep 12 17:42:37.864267 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:42:37.866618 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:42:37.876253 systemd[1]: Started sshd@15-10.0.0.128:22-10.0.0.1:40016.service - OpenSSH per-connection server daemon (10.0.0.1:40016). Sep 12 17:42:37.877759 systemd-logind[1444]: Removed session 15. Sep 12 17:42:37.912175 sshd[4040]: Accepted publickey for core from 10.0.0.1 port 40016 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:42:37.916129 sshd[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:37.922260 systemd-logind[1444]: New session 16 of user core. Sep 12 17:42:37.933065 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:42:38.116105 sshd[4040]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:38.127212 systemd[1]: sshd@15-10.0.0.128:22-10.0.0.1:40016.service: Deactivated successfully. Sep 12 17:42:38.130507 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:42:38.132760 systemd-logind[1444]: Session 16 logged out. 
Waiting for processes to exit. Sep 12 17:42:38.144786 systemd[1]: Started sshd@16-10.0.0.128:22-10.0.0.1:40032.service - OpenSSH per-connection server daemon (10.0.0.1:40032). Sep 12 17:42:38.149608 systemd-logind[1444]: Removed session 16. Sep 12 17:42:38.176689 sshd[4053]: Accepted publickey for core from 10.0.0.1 port 40032 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:42:38.178425 sshd[4053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:38.183225 systemd-logind[1444]: New session 17 of user core. Sep 12 17:42:38.193072 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:42:38.306821 sshd[4053]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:38.311873 systemd[1]: sshd@16-10.0.0.128:22-10.0.0.1:40032.service: Deactivated successfully. Sep 12 17:42:38.314444 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:42:38.315178 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:42:38.316153 systemd-logind[1444]: Removed session 17. Sep 12 17:42:43.322491 systemd[1]: Started sshd@17-10.0.0.128:22-10.0.0.1:59386.service - OpenSSH per-connection server daemon (10.0.0.1:59386). Sep 12 17:42:43.379894 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 59386 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:42:43.381956 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:43.386798 systemd-logind[1444]: New session 18 of user core. Sep 12 17:42:43.393872 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:42:43.516866 sshd[4069]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:43.521568 systemd[1]: sshd@17-10.0.0.128:22-10.0.0.1:59386.service: Deactivated successfully. Sep 12 17:42:43.523886 systemd[1]: session-18.scope: Deactivated successfully. 
Sep 12 17:42:43.524671 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:42:43.525781 systemd-logind[1444]: Removed session 18. Sep 12 17:42:48.531757 systemd[1]: Started sshd@18-10.0.0.128:22-10.0.0.1:59398.service - OpenSSH per-connection server daemon (10.0.0.1:59398). Sep 12 17:42:48.570407 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 59398 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:42:48.572154 sshd[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:48.576739 systemd-logind[1444]: New session 19 of user core. Sep 12 17:42:48.582840 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:42:48.701783 sshd[4086]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:48.706479 systemd[1]: sshd@18-10.0.0.128:22-10.0.0.1:59398.service: Deactivated successfully. Sep 12 17:42:48.709151 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:42:48.709944 systemd-logind[1444]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:42:48.711046 systemd-logind[1444]: Removed session 19. Sep 12 17:42:53.714080 systemd[1]: Started sshd@19-10.0.0.128:22-10.0.0.1:60846.service - OpenSSH per-connection server daemon (10.0.0.1:60846). Sep 12 17:42:53.754634 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 60846 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:42:53.756567 sshd[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:53.761124 systemd-logind[1444]: New session 20 of user core. Sep 12 17:42:53.772884 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:42:53.898660 sshd[4102]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:53.912080 systemd[1]: sshd@19-10.0.0.128:22-10.0.0.1:60846.service: Deactivated successfully. 
Sep 12 17:42:53.914255 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:42:53.916066 systemd-logind[1444]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:42:53.921955 systemd[1]: Started sshd@20-10.0.0.128:22-10.0.0.1:60860.service - OpenSSH per-connection server daemon (10.0.0.1:60860). Sep 12 17:42:53.923005 systemd-logind[1444]: Removed session 20. Sep 12 17:42:53.954004 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 60860 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:42:53.955890 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:53.961206 systemd-logind[1444]: New session 21 of user core. Sep 12 17:42:53.971007 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 17:42:54.218119 sshd[4116]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:54.230209 systemd[1]: sshd@20-10.0.0.128:22-10.0.0.1:60860.service: Deactivated successfully. Sep 12 17:42:54.232477 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:42:54.234381 systemd-logind[1444]: Session 21 logged out. Waiting for processes to exit. Sep 12 17:42:54.246177 systemd[1]: Started sshd@21-10.0.0.128:22-10.0.0.1:60870.service - OpenSSH per-connection server daemon (10.0.0.1:60870). Sep 12 17:42:54.247399 systemd-logind[1444]: Removed session 21. Sep 12 17:42:54.282034 sshd[4128]: Accepted publickey for core from 10.0.0.1 port 60870 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:42:54.284175 sshd[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:54.289150 systemd-logind[1444]: New session 22 of user core. Sep 12 17:42:54.298964 systemd[1]: Started session-22.scope - Session 22 of User core. 
Sep 12 17:42:54.945316 sshd[4128]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:54.962071 systemd[1]: sshd@21-10.0.0.128:22-10.0.0.1:60870.service: Deactivated successfully. Sep 12 17:42:54.964887 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 17:42:54.965909 systemd-logind[1444]: Session 22 logged out. Waiting for processes to exit. Sep 12 17:42:54.984051 systemd[1]: Started sshd@22-10.0.0.128:22-10.0.0.1:60880.service - OpenSSH per-connection server daemon (10.0.0.1:60880). Sep 12 17:42:54.987317 systemd-logind[1444]: Removed session 22. Sep 12 17:42:55.024618 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 60880 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:42:55.026466 sshd[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:55.031186 systemd-logind[1444]: New session 23 of user core. Sep 12 17:42:55.040836 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 17:42:55.745509 sshd[4148]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:55.756514 systemd[1]: sshd@22-10.0.0.128:22-10.0.0.1:60880.service: Deactivated successfully. Sep 12 17:42:55.758935 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 17:42:55.761089 systemd-logind[1444]: Session 23 logged out. Waiting for processes to exit. Sep 12 17:42:55.772014 systemd[1]: Started sshd@23-10.0.0.128:22-10.0.0.1:60882.service - OpenSSH per-connection server daemon (10.0.0.1:60882). Sep 12 17:42:55.773038 systemd-logind[1444]: Removed session 23. Sep 12 17:42:55.806269 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 60882 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:42:55.808375 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:55.814026 systemd-logind[1444]: New session 24 of user core. 
Sep 12 17:42:55.818872 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 17:42:55.940669 sshd[4161]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:55.945344 systemd[1]: sshd@23-10.0.0.128:22-10.0.0.1:60882.service: Deactivated successfully. Sep 12 17:42:55.947952 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 17:42:55.948853 systemd-logind[1444]: Session 24 logged out. Waiting for processes to exit. Sep 12 17:42:55.949861 systemd-logind[1444]: Removed session 24. Sep 12 17:42:56.738358 kubelet[2556]: E0912 17:42:56.738287 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:43:00.953247 systemd[1]: Started sshd@24-10.0.0.128:22-10.0.0.1:52364.service - OpenSSH per-connection server daemon (10.0.0.1:52364). Sep 12 17:43:00.993121 sshd[4175]: Accepted publickey for core from 10.0.0.1 port 52364 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:43:00.995098 sshd[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:00.999769 systemd-logind[1444]: New session 25 of user core. Sep 12 17:43:01.007841 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 17:43:01.151783 sshd[4175]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:01.156937 systemd[1]: sshd@24-10.0.0.128:22-10.0.0.1:52364.service: Deactivated successfully. Sep 12 17:43:01.159330 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 17:43:01.160124 systemd-logind[1444]: Session 25 logged out. Waiting for processes to exit. Sep 12 17:43:01.161184 systemd-logind[1444]: Removed session 25. 
Sep 12 17:43:01.737879 kubelet[2556]: E0912 17:43:01.737809 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:43:06.165160 systemd[1]: Started sshd@25-10.0.0.128:22-10.0.0.1:52370.service - OpenSSH per-connection server daemon (10.0.0.1:52370). Sep 12 17:43:06.202875 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 52370 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:43:06.204754 sshd[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:06.209276 systemd-logind[1444]: New session 26 of user core. Sep 12 17:43:06.220893 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 17:43:06.335644 sshd[4191]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:06.340246 systemd[1]: sshd@25-10.0.0.128:22-10.0.0.1:52370.service: Deactivated successfully. Sep 12 17:43:06.342560 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 17:43:06.343269 systemd-logind[1444]: Session 26 logged out. Waiting for processes to exit. Sep 12 17:43:06.344311 systemd-logind[1444]: Removed session 26. Sep 12 17:43:11.351189 systemd[1]: Started sshd@26-10.0.0.128:22-10.0.0.1:43486.service - OpenSSH per-connection server daemon (10.0.0.1:43486). Sep 12 17:43:11.387843 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 43486 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:43:11.389759 sshd[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:11.393906 systemd-logind[1444]: New session 27 of user core. Sep 12 17:43:11.401818 systemd[1]: Started session-27.scope - Session 27 of User core. 
Sep 12 17:43:11.515484 sshd[4205]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:11.519818 systemd[1]: sshd@26-10.0.0.128:22-10.0.0.1:43486.service: Deactivated successfully. Sep 12 17:43:11.521953 systemd[1]: session-27.scope: Deactivated successfully. Sep 12 17:43:11.523034 systemd-logind[1444]: Session 27 logged out. Waiting for processes to exit. Sep 12 17:43:11.524132 systemd-logind[1444]: Removed session 27. Sep 12 17:43:16.532011 systemd[1]: Started sshd@27-10.0.0.128:22-10.0.0.1:43488.service - OpenSSH per-connection server daemon (10.0.0.1:43488). Sep 12 17:43:16.570341 sshd[4219]: Accepted publickey for core from 10.0.0.1 port 43488 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:43:16.572212 sshd[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:16.577373 systemd-logind[1444]: New session 28 of user core. Sep 12 17:43:16.585926 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 12 17:43:16.697217 sshd[4219]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:16.712009 systemd[1]: sshd@27-10.0.0.128:22-10.0.0.1:43488.service: Deactivated successfully. Sep 12 17:43:16.714659 systemd[1]: session-28.scope: Deactivated successfully. Sep 12 17:43:16.716962 systemd-logind[1444]: Session 28 logged out. Waiting for processes to exit. Sep 12 17:43:16.727357 systemd[1]: Started sshd@28-10.0.0.128:22-10.0.0.1:43492.service - OpenSSH per-connection server daemon (10.0.0.1:43492). Sep 12 17:43:16.728901 systemd-logind[1444]: Removed session 28. Sep 12 17:43:16.757534 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 43492 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:43:16.759350 sshd[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:16.765095 systemd-logind[1444]: New session 29 of user core. 
Sep 12 17:43:16.773913 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 12 17:43:17.742325 kubelet[2556]: E0912 17:43:17.742220 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:43:18.133227 containerd[1460]: time="2025-09-12T17:43:18.133053986Z" level=info msg="StopContainer for \"aed8bc7993f5988ddd02ba1908c7cc8cd4296ea7dac67d9c04acfc4813781365\" with timeout 30 (s)" Sep 12 17:43:18.133735 containerd[1460]: time="2025-09-12T17:43:18.133649875Z" level=info msg="Stop container \"aed8bc7993f5988ddd02ba1908c7cc8cd4296ea7dac67d9c04acfc4813781365\" with signal terminated" Sep 12 17:43:18.170986 systemd[1]: cri-containerd-aed8bc7993f5988ddd02ba1908c7cc8cd4296ea7dac67d9c04acfc4813781365.scope: Deactivated successfully. Sep 12 17:43:18.197609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aed8bc7993f5988ddd02ba1908c7cc8cd4296ea7dac67d9c04acfc4813781365-rootfs.mount: Deactivated successfully. 
Sep 12 17:43:18.206036 containerd[1460]: time="2025-09-12T17:43:18.205933776Z" level=info msg="shim disconnected" id=aed8bc7993f5988ddd02ba1908c7cc8cd4296ea7dac67d9c04acfc4813781365 namespace=k8s.io Sep 12 17:43:18.206036 containerd[1460]: time="2025-09-12T17:43:18.206014209Z" level=warning msg="cleaning up after shim disconnected" id=aed8bc7993f5988ddd02ba1908c7cc8cd4296ea7dac67d9c04acfc4813781365 namespace=k8s.io Sep 12 17:43:18.206036 containerd[1460]: time="2025-09-12T17:43:18.206023296Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:43:18.224496 containerd[1460]: time="2025-09-12T17:43:18.224432850Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:43:18.228044 containerd[1460]: time="2025-09-12T17:43:18.227964238Z" level=info msg="StopContainer for \"aed8bc7993f5988ddd02ba1908c7cc8cd4296ea7dac67d9c04acfc4813781365\" returns successfully" Sep 12 17:43:18.233031 containerd[1460]: time="2025-09-12T17:43:18.232850060Z" level=info msg="StopPodSandbox for \"82e6945f9c3f3da5b46f763aaa775041b9d611f47f0f89a33bff57fd767464c4\"" Sep 12 17:43:18.233031 containerd[1460]: time="2025-09-12T17:43:18.232900396Z" level=info msg="Container to stop \"aed8bc7993f5988ddd02ba1908c7cc8cd4296ea7dac67d9c04acfc4813781365\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:43:18.234683 containerd[1460]: time="2025-09-12T17:43:18.234640291Z" level=info msg="StopContainer for \"e0ad4911b3e01e2f1d0cc5ffee67a9f0b51558f5acd4e5bcbc8c8c13c5a70a4d\" with timeout 2 (s)" Sep 12 17:43:18.235093 containerd[1460]: time="2025-09-12T17:43:18.235059164Z" level=info msg="Stop container \"e0ad4911b3e01e2f1d0cc5ffee67a9f0b51558f5acd4e5bcbc8c8c13c5a70a4d\" with signal terminated" Sep 12 17:43:18.235269 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-82e6945f9c3f3da5b46f763aaa775041b9d611f47f0f89a33bff57fd767464c4-shm.mount: Deactivated successfully. Sep 12 17:43:18.241366 systemd[1]: cri-containerd-82e6945f9c3f3da5b46f763aaa775041b9d611f47f0f89a33bff57fd767464c4.scope: Deactivated successfully. Sep 12 17:43:18.244469 systemd-networkd[1392]: lxc_health: Link DOWN Sep 12 17:43:18.244481 systemd-networkd[1392]: lxc_health: Lost carrier Sep 12 17:43:18.271274 systemd[1]: cri-containerd-e0ad4911b3e01e2f1d0cc5ffee67a9f0b51558f5acd4e5bcbc8c8c13c5a70a4d.scope: Deactivated successfully. Sep 12 17:43:18.272905 systemd[1]: cri-containerd-e0ad4911b3e01e2f1d0cc5ffee67a9f0b51558f5acd4e5bcbc8c8c13c5a70a4d.scope: Consumed 8.619s CPU time. Sep 12 17:43:18.290138 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82e6945f9c3f3da5b46f763aaa775041b9d611f47f0f89a33bff57fd767464c4-rootfs.mount: Deactivated successfully. Sep 12 17:43:18.299862 containerd[1460]: time="2025-09-12T17:43:18.299768833Z" level=info msg="shim disconnected" id=82e6945f9c3f3da5b46f763aaa775041b9d611f47f0f89a33bff57fd767464c4 namespace=k8s.io Sep 12 17:43:18.299862 containerd[1460]: time="2025-09-12T17:43:18.299836100Z" level=warning msg="cleaning up after shim disconnected" id=82e6945f9c3f3da5b46f763aaa775041b9d611f47f0f89a33bff57fd767464c4 namespace=k8s.io Sep 12 17:43:18.299862 containerd[1460]: time="2025-09-12T17:43:18.299854354Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:43:18.311465 containerd[1460]: time="2025-09-12T17:43:18.311385502Z" level=info msg="shim disconnected" id=e0ad4911b3e01e2f1d0cc5ffee67a9f0b51558f5acd4e5bcbc8c8c13c5a70a4d namespace=k8s.io Sep 12 17:43:18.311465 containerd[1460]: time="2025-09-12T17:43:18.311454062Z" level=warning msg="cleaning up after shim disconnected" id=e0ad4911b3e01e2f1d0cc5ffee67a9f0b51558f5acd4e5bcbc8c8c13c5a70a4d namespace=k8s.io Sep 12 17:43:18.311465 containerd[1460]: time="2025-09-12T17:43:18.311469932Z" level=info 
msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:43:18.339570 containerd[1460]: time="2025-09-12T17:43:18.339479397Z" level=info msg="TearDown network for sandbox \"82e6945f9c3f3da5b46f763aaa775041b9d611f47f0f89a33bff57fd767464c4\" successfully" Sep 12 17:43:18.339570 containerd[1460]: time="2025-09-12T17:43:18.339546484Z" level=info msg="StopPodSandbox for \"82e6945f9c3f3da5b46f763aaa775041b9d611f47f0f89a33bff57fd767464c4\" returns successfully" Sep 12 17:43:18.343409 containerd[1460]: time="2025-09-12T17:43:18.343348043Z" level=info msg="StopContainer for \"e0ad4911b3e01e2f1d0cc5ffee67a9f0b51558f5acd4e5bcbc8c8c13c5a70a4d\" returns successfully" Sep 12 17:43:18.344051 containerd[1460]: time="2025-09-12T17:43:18.343998165Z" level=info msg="StopPodSandbox for \"5add004cbeaf20f68b1da04b683ef30ba4d554e19fe4855879ba7b56af2e3bb9\"" Sep 12 17:43:18.344211 containerd[1460]: time="2025-09-12T17:43:18.344064069Z" level=info msg="Container to stop \"7b7b8a47152e718aea4d658c12c835af4502d87e6c61ad9c47d5b9d0ae24d051\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:43:18.344211 containerd[1460]: time="2025-09-12T17:43:18.344082193Z" level=info msg="Container to stop \"af3e5f569dadc52f665aabe6cc0b3038af55d084027a81fb9102caf8e9b605aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:43:18.344211 containerd[1460]: time="2025-09-12T17:43:18.344109956Z" level=info msg="Container to stop \"e95a610ed38366aa8a89e1098bef6e607ff51cffd8ac1c2b90644e90c55ae1dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:43:18.344211 containerd[1460]: time="2025-09-12T17:43:18.344123322Z" level=info msg="Container to stop \"f33f4fc6205b3f2958c3dd0fc421375ec628cb250229afeabdd191507c9e78d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:43:18.344211 containerd[1460]: time="2025-09-12T17:43:18.344135855Z" level=info msg="Container to stop 
\"e0ad4911b3e01e2f1d0cc5ffee67a9f0b51558f5acd4e5bcbc8c8c13c5a70a4d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:43:18.352166 systemd[1]: cri-containerd-5add004cbeaf20f68b1da04b683ef30ba4d554e19fe4855879ba7b56af2e3bb9.scope: Deactivated successfully. Sep 12 17:43:18.382399 containerd[1460]: time="2025-09-12T17:43:18.382064064Z" level=info msg="shim disconnected" id=5add004cbeaf20f68b1da04b683ef30ba4d554e19fe4855879ba7b56af2e3bb9 namespace=k8s.io Sep 12 17:43:18.382399 containerd[1460]: time="2025-09-12T17:43:18.382134978Z" level=warning msg="cleaning up after shim disconnected" id=5add004cbeaf20f68b1da04b683ef30ba4d554e19fe4855879ba7b56af2e3bb9 namespace=k8s.io Sep 12 17:43:18.382399 containerd[1460]: time="2025-09-12T17:43:18.382143775Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:43:18.398164 containerd[1460]: time="2025-09-12T17:43:18.397989383Z" level=info msg="TearDown network for sandbox \"5add004cbeaf20f68b1da04b683ef30ba4d554e19fe4855879ba7b56af2e3bb9\" successfully" Sep 12 17:43:18.398164 containerd[1460]: time="2025-09-12T17:43:18.398028137Z" level=info msg="StopPodSandbox for \"5add004cbeaf20f68b1da04b683ef30ba4d554e19fe4855879ba7b56af2e3bb9\" returns successfully" Sep 12 17:43:18.542872 kubelet[2556]: I0912 17:43:18.542770 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-cilium-run\") pod \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " Sep 12 17:43:18.542872 kubelet[2556]: I0912 17:43:18.542852 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-cilium-cgroup\") pod \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " Sep 12 17:43:18.542872 kubelet[2556]: I0912 
17:43:18.542871 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-xtables-lock\") pod \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " Sep 12 17:43:18.542872 kubelet[2556]: I0912 17:43:18.542888 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-host-proc-sys-kernel\") pod \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " Sep 12 17:43:18.543260 kubelet[2556]: I0912 17:43:18.542912 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-etc-cni-netd\") pod \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " Sep 12 17:43:18.543260 kubelet[2556]: I0912 17:43:18.542894 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37" (UID: "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:43:18.543260 kubelet[2556]: I0912 17:43:18.542943 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea05bf95-248f-46c7-a2db-08c6ad9ea3d0-cilium-config-path\") pod \"ea05bf95-248f-46c7-a2db-08c6ad9ea3d0\" (UID: \"ea05bf95-248f-46c7-a2db-08c6ad9ea3d0\") " Sep 12 17:43:18.543260 kubelet[2556]: I0912 17:43:18.542963 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-cilium-config-path\") pod \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " Sep 12 17:43:18.543260 kubelet[2556]: I0912 17:43:18.542980 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-cni-path\") pod \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " Sep 12 17:43:18.543412 kubelet[2556]: I0912 17:43:18.542990 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37" (UID: "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:43:18.543412 kubelet[2556]: I0912 17:43:18.543011 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-hubble-tls\") pod \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " Sep 12 17:43:18.543412 kubelet[2556]: I0912 17:43:18.543016 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37" (UID: "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:43:18.543412 kubelet[2556]: I0912 17:43:18.543028 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-bpf-maps\") pod \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " Sep 12 17:43:18.543412 kubelet[2556]: I0912 17:43:18.543040 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37" (UID: "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:43:18.543566 kubelet[2556]: I0912 17:43:18.543049 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-hostproc\") pod \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " Sep 12 17:43:18.543566 kubelet[2556]: I0912 17:43:18.543069 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnkgz\" (UniqueName: \"kubernetes.io/projected/ea05bf95-248f-46c7-a2db-08c6ad9ea3d0-kube-api-access-vnkgz\") pod \"ea05bf95-248f-46c7-a2db-08c6ad9ea3d0\" (UID: \"ea05bf95-248f-46c7-a2db-08c6ad9ea3d0\") " Sep 12 17:43:18.543566 kubelet[2556]: I0912 17:43:18.543102 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhc2m\" (UniqueName: \"kubernetes.io/projected/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-kube-api-access-vhc2m\") pod \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " Sep 12 17:43:18.543566 kubelet[2556]: I0912 17:43:18.543128 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-clustermesh-secrets\") pod \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " Sep 12 17:43:18.543566 kubelet[2556]: I0912 17:43:18.543133 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-cni-path" (OuterVolumeSpecName: "cni-path") pod "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37" (UID: "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:43:18.543566 kubelet[2556]: I0912 17:43:18.543150 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-lib-modules\") pod \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " Sep 12 17:43:18.543776 kubelet[2556]: I0912 17:43:18.543157 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37" (UID: "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:43:18.543776 kubelet[2556]: I0912 17:43:18.543173 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-host-proc-sys-net\") pod \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\" (UID: \"caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37\") " Sep 12 17:43:18.543776 kubelet[2556]: I0912 17:43:18.543226 2556 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 12 17:43:18.543776 kubelet[2556]: I0912 17:43:18.543240 2556 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 12 17:43:18.543776 kubelet[2556]: I0912 17:43:18.543252 2556 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 12 
17:43:18.543776 kubelet[2556]: I0912 17:43:18.543263 2556 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 12 17:43:18.548358 kubelet[2556]: I0912 17:43:18.547983 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37" (UID: "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:43:18.548358 kubelet[2556]: I0912 17:43:18.548154 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-kube-api-access-vhc2m" (OuterVolumeSpecName: "kube-api-access-vhc2m") pod "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37" (UID: "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37"). InnerVolumeSpecName "kube-api-access-vhc2m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:43:18.548358 kubelet[2556]: I0912 17:43:18.548199 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37" (UID: "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:43:18.548358 kubelet[2556]: I0912 17:43:18.548224 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37" (UID: "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:43:18.548358 kubelet[2556]: I0912 17:43:18.548244 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-hostproc" (OuterVolumeSpecName: "hostproc") pod "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37" (UID: "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:43:18.548566 kubelet[2556]: I0912 17:43:18.548263 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37" (UID: "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:43:18.548566 kubelet[2556]: I0912 17:43:18.548266 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea05bf95-248f-46c7-a2db-08c6ad9ea3d0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ea05bf95-248f-46c7-a2db-08c6ad9ea3d0" (UID: "ea05bf95-248f-46c7-a2db-08c6ad9ea3d0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:43:18.550872 kubelet[2556]: I0912 17:43:18.550832 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea05bf95-248f-46c7-a2db-08c6ad9ea3d0-kube-api-access-vnkgz" (OuterVolumeSpecName: "kube-api-access-vnkgz") pod "ea05bf95-248f-46c7-a2db-08c6ad9ea3d0" (UID: "ea05bf95-248f-46c7-a2db-08c6ad9ea3d0"). InnerVolumeSpecName "kube-api-access-vnkgz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:43:18.550872 kubelet[2556]: I0912 17:43:18.550840 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37" (UID: "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:43:18.550959 kubelet[2556]: I0912 17:43:18.550909 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37" (UID: "caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:43:18.644277 kubelet[2556]: I0912 17:43:18.644198 2556 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 17:43:18.644277 kubelet[2556]: I0912 17:43:18.644253 2556 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 12 17:43:18.644277 kubelet[2556]: I0912 17:43:18.644280 2556 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 12 17:43:18.644277 kubelet[2556]: I0912 17:43:18.644292 2556 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea05bf95-248f-46c7-a2db-08c6ad9ea3d0-cilium-config-path\") 
on node \"localhost\" DevicePath \"\"" Sep 12 17:43:18.644539 kubelet[2556]: I0912 17:43:18.644303 2556 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vhc2m\" (UniqueName: \"kubernetes.io/projected/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-kube-api-access-vhc2m\") on node \"localhost\" DevicePath \"\"" Sep 12 17:43:18.644539 kubelet[2556]: I0912 17:43:18.644315 2556 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 12 17:43:18.644539 kubelet[2556]: I0912 17:43:18.644325 2556 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 12 17:43:18.644539 kubelet[2556]: I0912 17:43:18.644336 2556 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 12 17:43:18.644539 kubelet[2556]: I0912 17:43:18.644346 2556 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vnkgz\" (UniqueName: \"kubernetes.io/projected/ea05bf95-248f-46c7-a2db-08c6ad9ea3d0-kube-api-access-vnkgz\") on node \"localhost\" DevicePath \"\"" Sep 12 17:43:18.644539 kubelet[2556]: I0912 17:43:18.644356 2556 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 12 17:43:18.644539 kubelet[2556]: I0912 17:43:18.644366 2556 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 12 17:43:18.644539 kubelet[2556]: I0912 17:43:18.644376 2556 
reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 12 17:43:18.738805 kubelet[2556]: E0912 17:43:18.738677 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:43:18.748648 systemd[1]: Removed slice kubepods-besteffort-podea05bf95_248f_46c7_a2db_08c6ad9ea3d0.slice - libcontainer container kubepods-besteffort-podea05bf95_248f_46c7_a2db_08c6ad9ea3d0.slice. Sep 12 17:43:18.749934 systemd[1]: Removed slice kubepods-burstable-podcaf90bcc_2d7f_43a6_a5c4_24dc67b3ce37.slice - libcontainer container kubepods-burstable-podcaf90bcc_2d7f_43a6_a5c4_24dc67b3ce37.slice. Sep 12 17:43:18.750055 systemd[1]: kubepods-burstable-podcaf90bcc_2d7f_43a6_a5c4_24dc67b3ce37.slice: Consumed 8.766s CPU time. Sep 12 17:43:19.192997 kubelet[2556]: I0912 17:43:19.192488 2556 scope.go:117] "RemoveContainer" containerID="aed8bc7993f5988ddd02ba1908c7cc8cd4296ea7dac67d9c04acfc4813781365" Sep 12 17:43:19.197770 containerd[1460]: time="2025-09-12T17:43:19.194836727Z" level=info msg="RemoveContainer for \"aed8bc7993f5988ddd02ba1908c7cc8cd4296ea7dac67d9c04acfc4813781365\"" Sep 12 17:43:19.197637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0ad4911b3e01e2f1d0cc5ffee67a9f0b51558f5acd4e5bcbc8c8c13c5a70a4d-rootfs.mount: Deactivated successfully. Sep 12 17:43:19.199938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5add004cbeaf20f68b1da04b683ef30ba4d554e19fe4855879ba7b56af2e3bb9-rootfs.mount: Deactivated successfully. Sep 12 17:43:19.201190 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5add004cbeaf20f68b1da04b683ef30ba4d554e19fe4855879ba7b56af2e3bb9-shm.mount: Deactivated successfully. 
Sep 12 17:43:19.201417 systemd[1]: var-lib-kubelet-pods-caf90bcc\x2d2d7f\x2d43a6\x2da5c4\x2d24dc67b3ce37-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 12 17:43:19.201521 systemd[1]: var-lib-kubelet-pods-caf90bcc\x2d2d7f\x2d43a6\x2da5c4\x2d24dc67b3ce37-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 17:43:19.201608 systemd[1]: var-lib-kubelet-pods-ea05bf95\x2d248f\x2d46c7\x2da2db\x2d08c6ad9ea3d0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvnkgz.mount: Deactivated successfully. Sep 12 17:43:19.201713 systemd[1]: var-lib-kubelet-pods-caf90bcc\x2d2d7f\x2d43a6\x2da5c4\x2d24dc67b3ce37-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvhc2m.mount: Deactivated successfully. Sep 12 17:43:19.210023 containerd[1460]: time="2025-09-12T17:43:19.209981914Z" level=info msg="RemoveContainer for \"aed8bc7993f5988ddd02ba1908c7cc8cd4296ea7dac67d9c04acfc4813781365\" returns successfully" Sep 12 17:43:19.210312 kubelet[2556]: I0912 17:43:19.210278 2556 scope.go:117] "RemoveContainer" containerID="aed8bc7993f5988ddd02ba1908c7cc8cd4296ea7dac67d9c04acfc4813781365" Sep 12 17:43:19.216290 containerd[1460]: time="2025-09-12T17:43:19.216222711Z" level=error msg="ContainerStatus for \"aed8bc7993f5988ddd02ba1908c7cc8cd4296ea7dac67d9c04acfc4813781365\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aed8bc7993f5988ddd02ba1908c7cc8cd4296ea7dac67d9c04acfc4813781365\": not found" Sep 12 17:43:19.233475 kubelet[2556]: E0912 17:43:19.233421 2556 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aed8bc7993f5988ddd02ba1908c7cc8cd4296ea7dac67d9c04acfc4813781365\": not found" containerID="aed8bc7993f5988ddd02ba1908c7cc8cd4296ea7dac67d9c04acfc4813781365" Sep 12 17:43:19.233645 kubelet[2556]: I0912 17:43:19.233475 2556 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"aed8bc7993f5988ddd02ba1908c7cc8cd4296ea7dac67d9c04acfc4813781365"} err="failed to get container status \"aed8bc7993f5988ddd02ba1908c7cc8cd4296ea7dac67d9c04acfc4813781365\": rpc error: code = NotFound desc = an error occurred when try to find container \"aed8bc7993f5988ddd02ba1908c7cc8cd4296ea7dac67d9c04acfc4813781365\": not found" Sep 12 17:43:19.233645 kubelet[2556]: I0912 17:43:19.233574 2556 scope.go:117] "RemoveContainer" containerID="e0ad4911b3e01e2f1d0cc5ffee67a9f0b51558f5acd4e5bcbc8c8c13c5a70a4d" Sep 12 17:43:19.234955 containerd[1460]: time="2025-09-12T17:43:19.234919804Z" level=info msg="RemoveContainer for \"e0ad4911b3e01e2f1d0cc5ffee67a9f0b51558f5acd4e5bcbc8c8c13c5a70a4d\"" Sep 12 17:43:19.238636 containerd[1460]: time="2025-09-12T17:43:19.238599983Z" level=info msg="RemoveContainer for \"e0ad4911b3e01e2f1d0cc5ffee67a9f0b51558f5acd4e5bcbc8c8c13c5a70a4d\" returns successfully" Sep 12 17:43:19.238822 kubelet[2556]: I0912 17:43:19.238780 2556 scope.go:117] "RemoveContainer" containerID="7b7b8a47152e718aea4d658c12c835af4502d87e6c61ad9c47d5b9d0ae24d051" Sep 12 17:43:19.239744 containerd[1460]: time="2025-09-12T17:43:19.239709603Z" level=info msg="RemoveContainer for \"7b7b8a47152e718aea4d658c12c835af4502d87e6c61ad9c47d5b9d0ae24d051\"" Sep 12 17:43:19.243039 containerd[1460]: time="2025-09-12T17:43:19.243002328Z" level=info msg="RemoveContainer for \"7b7b8a47152e718aea4d658c12c835af4502d87e6c61ad9c47d5b9d0ae24d051\" returns successfully" Sep 12 17:43:19.243217 kubelet[2556]: I0912 17:43:19.243187 2556 scope.go:117] "RemoveContainer" containerID="f33f4fc6205b3f2958c3dd0fc421375ec628cb250229afeabdd191507c9e78d3" Sep 12 17:43:19.244208 containerd[1460]: time="2025-09-12T17:43:19.244159400Z" level=info msg="RemoveContainer for \"f33f4fc6205b3f2958c3dd0fc421375ec628cb250229afeabdd191507c9e78d3\"" Sep 12 17:43:19.251565 containerd[1460]: time="2025-09-12T17:43:19.251524886Z" level=info 
msg="RemoveContainer for \"f33f4fc6205b3f2958c3dd0fc421375ec628cb250229afeabdd191507c9e78d3\" returns successfully" Sep 12 17:43:19.251728 kubelet[2556]: I0912 17:43:19.251684 2556 scope.go:117] "RemoveContainer" containerID="e95a610ed38366aa8a89e1098bef6e607ff51cffd8ac1c2b90644e90c55ae1dd" Sep 12 17:43:19.252629 containerd[1460]: time="2025-09-12T17:43:19.252598328Z" level=info msg="RemoveContainer for \"e95a610ed38366aa8a89e1098bef6e607ff51cffd8ac1c2b90644e90c55ae1dd\"" Sep 12 17:43:19.255786 containerd[1460]: time="2025-09-12T17:43:19.255751579Z" level=info msg="RemoveContainer for \"e95a610ed38366aa8a89e1098bef6e607ff51cffd8ac1c2b90644e90c55ae1dd\" returns successfully" Sep 12 17:43:19.255937 kubelet[2556]: I0912 17:43:19.255917 2556 scope.go:117] "RemoveContainer" containerID="af3e5f569dadc52f665aabe6cc0b3038af55d084027a81fb9102caf8e9b605aa" Sep 12 17:43:19.257063 containerd[1460]: time="2025-09-12T17:43:19.256838847Z" level=info msg="RemoveContainer for \"af3e5f569dadc52f665aabe6cc0b3038af55d084027a81fb9102caf8e9b605aa\"" Sep 12 17:43:19.259894 containerd[1460]: time="2025-09-12T17:43:19.259853946Z" level=info msg="RemoveContainer for \"af3e5f569dadc52f665aabe6cc0b3038af55d084027a81fb9102caf8e9b605aa\" returns successfully" Sep 12 17:43:19.260051 kubelet[2556]: I0912 17:43:19.260004 2556 scope.go:117] "RemoveContainer" containerID="e0ad4911b3e01e2f1d0cc5ffee67a9f0b51558f5acd4e5bcbc8c8c13c5a70a4d" Sep 12 17:43:19.260239 containerd[1460]: time="2025-09-12T17:43:19.260192938Z" level=error msg="ContainerStatus for \"e0ad4911b3e01e2f1d0cc5ffee67a9f0b51558f5acd4e5bcbc8c8c13c5a70a4d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e0ad4911b3e01e2f1d0cc5ffee67a9f0b51558f5acd4e5bcbc8c8c13c5a70a4d\": not found" Sep 12 17:43:19.260364 kubelet[2556]: E0912 17:43:19.260331 2556 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"e0ad4911b3e01e2f1d0cc5ffee67a9f0b51558f5acd4e5bcbc8c8c13c5a70a4d\": not found" containerID="e0ad4911b3e01e2f1d0cc5ffee67a9f0b51558f5acd4e5bcbc8c8c13c5a70a4d" Sep 12 17:43:19.260410 kubelet[2556]: I0912 17:43:19.260362 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e0ad4911b3e01e2f1d0cc5ffee67a9f0b51558f5acd4e5bcbc8c8c13c5a70a4d"} err="failed to get container status \"e0ad4911b3e01e2f1d0cc5ffee67a9f0b51558f5acd4e5bcbc8c8c13c5a70a4d\": rpc error: code = NotFound desc = an error occurred when try to find container \"e0ad4911b3e01e2f1d0cc5ffee67a9f0b51558f5acd4e5bcbc8c8c13c5a70a4d\": not found" Sep 12 17:43:19.260410 kubelet[2556]: I0912 17:43:19.260385 2556 scope.go:117] "RemoveContainer" containerID="7b7b8a47152e718aea4d658c12c835af4502d87e6c61ad9c47d5b9d0ae24d051" Sep 12 17:43:19.260598 containerd[1460]: time="2025-09-12T17:43:19.260557429Z" level=error msg="ContainerStatus for \"7b7b8a47152e718aea4d658c12c835af4502d87e6c61ad9c47d5b9d0ae24d051\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7b7b8a47152e718aea4d658c12c835af4502d87e6c61ad9c47d5b9d0ae24d051\": not found" Sep 12 17:43:19.260788 kubelet[2556]: E0912 17:43:19.260738 2556 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7b7b8a47152e718aea4d658c12c835af4502d87e6c61ad9c47d5b9d0ae24d051\": not found" containerID="7b7b8a47152e718aea4d658c12c835af4502d87e6c61ad9c47d5b9d0ae24d051" Sep 12 17:43:19.260854 kubelet[2556]: I0912 17:43:19.260800 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7b7b8a47152e718aea4d658c12c835af4502d87e6c61ad9c47d5b9d0ae24d051"} err="failed to get container status \"7b7b8a47152e718aea4d658c12c835af4502d87e6c61ad9c47d5b9d0ae24d051\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"7b7b8a47152e718aea4d658c12c835af4502d87e6c61ad9c47d5b9d0ae24d051\": not found" Sep 12 17:43:19.260854 kubelet[2556]: I0912 17:43:19.260840 2556 scope.go:117] "RemoveContainer" containerID="f33f4fc6205b3f2958c3dd0fc421375ec628cb250229afeabdd191507c9e78d3" Sep 12 17:43:19.261103 containerd[1460]: time="2025-09-12T17:43:19.261057054Z" level=error msg="ContainerStatus for \"f33f4fc6205b3f2958c3dd0fc421375ec628cb250229afeabdd191507c9e78d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f33f4fc6205b3f2958c3dd0fc421375ec628cb250229afeabdd191507c9e78d3\": not found" Sep 12 17:43:19.261226 kubelet[2556]: E0912 17:43:19.261204 2556 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f33f4fc6205b3f2958c3dd0fc421375ec628cb250229afeabdd191507c9e78d3\": not found" containerID="f33f4fc6205b3f2958c3dd0fc421375ec628cb250229afeabdd191507c9e78d3" Sep 12 17:43:19.261268 kubelet[2556]: I0912 17:43:19.261231 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f33f4fc6205b3f2958c3dd0fc421375ec628cb250229afeabdd191507c9e78d3"} err="failed to get container status \"f33f4fc6205b3f2958c3dd0fc421375ec628cb250229afeabdd191507c9e78d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"f33f4fc6205b3f2958c3dd0fc421375ec628cb250229afeabdd191507c9e78d3\": not found" Sep 12 17:43:19.261268 kubelet[2556]: I0912 17:43:19.261257 2556 scope.go:117] "RemoveContainer" containerID="e95a610ed38366aa8a89e1098bef6e607ff51cffd8ac1c2b90644e90c55ae1dd" Sep 12 17:43:19.261463 containerd[1460]: time="2025-09-12T17:43:19.261426514Z" level=error msg="ContainerStatus for \"e95a610ed38366aa8a89e1098bef6e607ff51cffd8ac1c2b90644e90c55ae1dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"e95a610ed38366aa8a89e1098bef6e607ff51cffd8ac1c2b90644e90c55ae1dd\": not found" Sep 12 17:43:19.261559 kubelet[2556]: E0912 17:43:19.261538 2556 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e95a610ed38366aa8a89e1098bef6e607ff51cffd8ac1c2b90644e90c55ae1dd\": not found" containerID="e95a610ed38366aa8a89e1098bef6e607ff51cffd8ac1c2b90644e90c55ae1dd" Sep 12 17:43:19.261615 kubelet[2556]: I0912 17:43:19.261561 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e95a610ed38366aa8a89e1098bef6e607ff51cffd8ac1c2b90644e90c55ae1dd"} err="failed to get container status \"e95a610ed38366aa8a89e1098bef6e607ff51cffd8ac1c2b90644e90c55ae1dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"e95a610ed38366aa8a89e1098bef6e607ff51cffd8ac1c2b90644e90c55ae1dd\": not found" Sep 12 17:43:19.261615 kubelet[2556]: I0912 17:43:19.261579 2556 scope.go:117] "RemoveContainer" containerID="af3e5f569dadc52f665aabe6cc0b3038af55d084027a81fb9102caf8e9b605aa" Sep 12 17:43:19.261813 containerd[1460]: time="2025-09-12T17:43:19.261778791Z" level=error msg="ContainerStatus for \"af3e5f569dadc52f665aabe6cc0b3038af55d084027a81fb9102caf8e9b605aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"af3e5f569dadc52f665aabe6cc0b3038af55d084027a81fb9102caf8e9b605aa\": not found" Sep 12 17:43:19.261929 kubelet[2556]: E0912 17:43:19.261908 2556 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"af3e5f569dadc52f665aabe6cc0b3038af55d084027a81fb9102caf8e9b605aa\": not found" containerID="af3e5f569dadc52f665aabe6cc0b3038af55d084027a81fb9102caf8e9b605aa" Sep 12 17:43:19.261977 kubelet[2556]: I0912 17:43:19.261933 2556 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"af3e5f569dadc52f665aabe6cc0b3038af55d084027a81fb9102caf8e9b605aa"} err="failed to get container status \"af3e5f569dadc52f665aabe6cc0b3038af55d084027a81fb9102caf8e9b605aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"af3e5f569dadc52f665aabe6cc0b3038af55d084027a81fb9102caf8e9b605aa\": not found" Sep 12 17:43:20.094648 sshd[4235]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:20.107330 systemd[1]: sshd@28-10.0.0.128:22-10.0.0.1:43492.service: Deactivated successfully. Sep 12 17:43:20.110573 systemd[1]: session-29.scope: Deactivated successfully. Sep 12 17:43:20.112136 systemd-logind[1444]: Session 29 logged out. Waiting for processes to exit. Sep 12 17:43:20.120013 systemd[1]: Started sshd@29-10.0.0.128:22-10.0.0.1:52270.service - OpenSSH per-connection server daemon (10.0.0.1:52270). Sep 12 17:43:20.121278 systemd-logind[1444]: Removed session 29. Sep 12 17:43:20.153212 sshd[4398]: Accepted publickey for core from 10.0.0.1 port 52270 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:43:20.155114 sshd[4398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:20.160263 systemd-logind[1444]: New session 30 of user core. Sep 12 17:43:20.170983 systemd[1]: Started session-30.scope - Session 30 of User core. 
Sep 12 17:43:20.737876 kubelet[2556]: E0912 17:43:20.737823 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:43:20.740284 kubelet[2556]: I0912 17:43:20.740254 2556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37" path="/var/lib/kubelet/pods/caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37/volumes" Sep 12 17:43:20.741163 kubelet[2556]: I0912 17:43:20.741132 2556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea05bf95-248f-46c7-a2db-08c6ad9ea3d0" path="/var/lib/kubelet/pods/ea05bf95-248f-46c7-a2db-08c6ad9ea3d0/volumes" Sep 12 17:43:21.257569 sshd[4398]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:21.270540 systemd[1]: sshd@29-10.0.0.128:22-10.0.0.1:52270.service: Deactivated successfully. Sep 12 17:43:21.272307 systemd[1]: session-30.scope: Deactivated successfully. Sep 12 17:43:21.273953 systemd-logind[1444]: Session 30 logged out. Waiting for processes to exit. Sep 12 17:43:21.275610 systemd[1]: Started sshd@30-10.0.0.128:22-10.0.0.1:52286.service - OpenSSH per-connection server daemon (10.0.0.1:52286). Sep 12 17:43:21.276591 systemd-logind[1444]: Removed session 30. Sep 12 17:43:21.319217 sshd[4412]: Accepted publickey for core from 10.0.0.1 port 52286 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:43:21.320888 sshd[4412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:21.325475 systemd-logind[1444]: New session 31 of user core. Sep 12 17:43:21.331873 systemd[1]: Started session-31.scope - Session 31 of User core. 
Sep 12 17:43:21.387409 sshd[4412]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:21.395903 kubelet[2556]: I0912 17:43:21.395857 2556 memory_manager.go:355] "RemoveStaleState removing state" podUID="caf90bcc-2d7f-43a6-a5c4-24dc67b3ce37" containerName="cilium-agent" Sep 12 17:43:21.395903 kubelet[2556]: I0912 17:43:21.395897 2556 memory_manager.go:355] "RemoveStaleState removing state" podUID="ea05bf95-248f-46c7-a2db-08c6ad9ea3d0" containerName="cilium-operator" Sep 12 17:43:21.399667 systemd[1]: sshd@30-10.0.0.128:22-10.0.0.1:52286.service: Deactivated successfully. Sep 12 17:43:21.403126 systemd[1]: session-31.scope: Deactivated successfully. Sep 12 17:43:21.405244 systemd-logind[1444]: Session 31 logged out. Waiting for processes to exit. Sep 12 17:43:21.416067 systemd[1]: Started sshd@31-10.0.0.128:22-10.0.0.1:52298.service - OpenSSH per-connection server daemon (10.0.0.1:52298). Sep 12 17:43:21.418096 systemd-logind[1444]: Removed session 31. Sep 12 17:43:21.422095 systemd[1]: Created slice kubepods-burstable-podb311fa20_8936_428e_890c_1af30efd64dd.slice - libcontainer container kubepods-burstable-podb311fa20_8936_428e_890c_1af30efd64dd.slice. 
Sep 12 17:43:21.451351 sshd[4420]: Accepted publickey for core from 10.0.0.1 port 52298 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4
Sep 12 17:43:21.453302 sshd[4420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:43:21.456773 kubelet[2556]: I0912 17:43:21.456736 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b311fa20-8936-428e-890c-1af30efd64dd-host-proc-sys-kernel\") pod \"cilium-kq2k2\" (UID: \"b311fa20-8936-428e-890c-1af30efd64dd\") " pod="kube-system/cilium-kq2k2"
Sep 12 17:43:21.456898 kubelet[2556]: I0912 17:43:21.456783 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7pdj\" (UniqueName: \"kubernetes.io/projected/b311fa20-8936-428e-890c-1af30efd64dd-kube-api-access-f7pdj\") pod \"cilium-kq2k2\" (UID: \"b311fa20-8936-428e-890c-1af30efd64dd\") " pod="kube-system/cilium-kq2k2"
Sep 12 17:43:21.456898 kubelet[2556]: I0912 17:43:21.456820 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b311fa20-8936-428e-890c-1af30efd64dd-cilium-run\") pod \"cilium-kq2k2\" (UID: \"b311fa20-8936-428e-890c-1af30efd64dd\") " pod="kube-system/cilium-kq2k2"
Sep 12 17:43:21.456898 kubelet[2556]: I0912 17:43:21.456847 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b311fa20-8936-428e-890c-1af30efd64dd-bpf-maps\") pod \"cilium-kq2k2\" (UID: \"b311fa20-8936-428e-890c-1af30efd64dd\") " pod="kube-system/cilium-kq2k2"
Sep 12 17:43:21.457029 kubelet[2556]: I0912 17:43:21.456924 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b311fa20-8936-428e-890c-1af30efd64dd-cilium-cgroup\") pod \"cilium-kq2k2\" (UID: \"b311fa20-8936-428e-890c-1af30efd64dd\") " pod="kube-system/cilium-kq2k2"
Sep 12 17:43:21.457029 kubelet[2556]: I0912 17:43:21.456964 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b311fa20-8936-428e-890c-1af30efd64dd-xtables-lock\") pod \"cilium-kq2k2\" (UID: \"b311fa20-8936-428e-890c-1af30efd64dd\") " pod="kube-system/cilium-kq2k2"
Sep 12 17:43:21.457029 kubelet[2556]: I0912 17:43:21.456989 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b311fa20-8936-428e-890c-1af30efd64dd-cilium-config-path\") pod \"cilium-kq2k2\" (UID: \"b311fa20-8936-428e-890c-1af30efd64dd\") " pod="kube-system/cilium-kq2k2"
Sep 12 17:43:21.457029 kubelet[2556]: I0912 17:43:21.457011 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b311fa20-8936-428e-890c-1af30efd64dd-lib-modules\") pod \"cilium-kq2k2\" (UID: \"b311fa20-8936-428e-890c-1af30efd64dd\") " pod="kube-system/cilium-kq2k2"
Sep 12 17:43:21.457029 kubelet[2556]: I0912 17:43:21.457028 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b311fa20-8936-428e-890c-1af30efd64dd-hubble-tls\") pod \"cilium-kq2k2\" (UID: \"b311fa20-8936-428e-890c-1af30efd64dd\") " pod="kube-system/cilium-kq2k2"
Sep 12 17:43:21.457197 kubelet[2556]: I0912 17:43:21.457046 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b311fa20-8936-428e-890c-1af30efd64dd-hostproc\") pod \"cilium-kq2k2\" (UID: \"b311fa20-8936-428e-890c-1af30efd64dd\") " pod="kube-system/cilium-kq2k2"
Sep 12 17:43:21.457197 kubelet[2556]: I0912 17:43:21.457083 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b311fa20-8936-428e-890c-1af30efd64dd-clustermesh-secrets\") pod \"cilium-kq2k2\" (UID: \"b311fa20-8936-428e-890c-1af30efd64dd\") " pod="kube-system/cilium-kq2k2"
Sep 12 17:43:21.457197 kubelet[2556]: I0912 17:43:21.457105 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b311fa20-8936-428e-890c-1af30efd64dd-host-proc-sys-net\") pod \"cilium-kq2k2\" (UID: \"b311fa20-8936-428e-890c-1af30efd64dd\") " pod="kube-system/cilium-kq2k2"
Sep 12 17:43:21.457197 kubelet[2556]: I0912 17:43:21.457122 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b311fa20-8936-428e-890c-1af30efd64dd-cni-path\") pod \"cilium-kq2k2\" (UID: \"b311fa20-8936-428e-890c-1af30efd64dd\") " pod="kube-system/cilium-kq2k2"
Sep 12 17:43:21.457197 kubelet[2556]: I0912 17:43:21.457138 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b311fa20-8936-428e-890c-1af30efd64dd-etc-cni-netd\") pod \"cilium-kq2k2\" (UID: \"b311fa20-8936-428e-890c-1af30efd64dd\") " pod="kube-system/cilium-kq2k2"
Sep 12 17:43:21.457197 kubelet[2556]: I0912 17:43:21.457155 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b311fa20-8936-428e-890c-1af30efd64dd-cilium-ipsec-secrets\") pod \"cilium-kq2k2\" (UID: \"b311fa20-8936-428e-890c-1af30efd64dd\") " pod="kube-system/cilium-kq2k2"
Sep 12 17:43:21.457848 systemd-logind[1444]: New session 32 of user core.
Sep 12 17:43:21.467854 systemd[1]: Started session-32.scope - Session 32 of User core.
Sep 12 17:43:21.732190 kubelet[2556]: E0912 17:43:21.732133 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:43:21.733133 containerd[1460]: time="2025-09-12T17:43:21.733006273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kq2k2,Uid:b311fa20-8936-428e-890c-1af30efd64dd,Namespace:kube-system,Attempt:0,}"
Sep 12 17:43:21.775940 containerd[1460]: time="2025-09-12T17:43:21.775813973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:43:21.775940 containerd[1460]: time="2025-09-12T17:43:21.775898783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:43:21.775940 containerd[1460]: time="2025-09-12T17:43:21.775913121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:43:21.776150 containerd[1460]: time="2025-09-12T17:43:21.776035963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:43:21.796874 systemd[1]: Started cri-containerd-b0a9e3995a1bf233cdc9e51faed4c5e8a5b29ae9fedb4f7fd97255f971c94dec.scope - libcontainer container b0a9e3995a1bf233cdc9e51faed4c5e8a5b29ae9fedb4f7fd97255f971c94dec.
Sep 12 17:43:21.823676 containerd[1460]: time="2025-09-12T17:43:21.823615705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kq2k2,Uid:b311fa20-8936-428e-890c-1af30efd64dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0a9e3995a1bf233cdc9e51faed4c5e8a5b29ae9fedb4f7fd97255f971c94dec\""
Sep 12 17:43:21.824555 kubelet[2556]: E0912 17:43:21.824525 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:43:21.826683 containerd[1460]: time="2025-09-12T17:43:21.826528558Z" level=info msg="CreateContainer within sandbox \"b0a9e3995a1bf233cdc9e51faed4c5e8a5b29ae9fedb4f7fd97255f971c94dec\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 12 17:43:21.844527 containerd[1460]: time="2025-09-12T17:43:21.844464269Z" level=info msg="CreateContainer within sandbox \"b0a9e3995a1bf233cdc9e51faed4c5e8a5b29ae9fedb4f7fd97255f971c94dec\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"13a18c780dd72baf7e7a4ece32b327f09ac7f14fa57d37df63d0b44670fc5e2a\""
Sep 12 17:43:21.845110 containerd[1460]: time="2025-09-12T17:43:21.845076888Z" level=info msg="StartContainer for \"13a18c780dd72baf7e7a4ece32b327f09ac7f14fa57d37df63d0b44670fc5e2a\""
Sep 12 17:43:21.874889 systemd[1]: Started cri-containerd-13a18c780dd72baf7e7a4ece32b327f09ac7f14fa57d37df63d0b44670fc5e2a.scope - libcontainer container 13a18c780dd72baf7e7a4ece32b327f09ac7f14fa57d37df63d0b44670fc5e2a.
Sep 12 17:43:21.906916 containerd[1460]: time="2025-09-12T17:43:21.906855908Z" level=info msg="StartContainer for \"13a18c780dd72baf7e7a4ece32b327f09ac7f14fa57d37df63d0b44670fc5e2a\" returns successfully"
Sep 12 17:43:21.919014 systemd[1]: cri-containerd-13a18c780dd72baf7e7a4ece32b327f09ac7f14fa57d37df63d0b44670fc5e2a.scope: Deactivated successfully.
Sep 12 17:43:21.958840 containerd[1460]: time="2025-09-12T17:43:21.958736570Z" level=info msg="shim disconnected" id=13a18c780dd72baf7e7a4ece32b327f09ac7f14fa57d37df63d0b44670fc5e2a namespace=k8s.io
Sep 12 17:43:21.958840 containerd[1460]: time="2025-09-12T17:43:21.958807955Z" level=warning msg="cleaning up after shim disconnected" id=13a18c780dd72baf7e7a4ece32b327f09ac7f14fa57d37df63d0b44670fc5e2a namespace=k8s.io
Sep 12 17:43:21.958840 containerd[1460]: time="2025-09-12T17:43:21.958820129Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:43:22.208362 kubelet[2556]: E0912 17:43:22.208324 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:43:22.210876 containerd[1460]: time="2025-09-12T17:43:22.210811041Z" level=info msg="CreateContainer within sandbox \"b0a9e3995a1bf233cdc9e51faed4c5e8a5b29ae9fedb4f7fd97255f971c94dec\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 12 17:43:22.226739 containerd[1460]: time="2025-09-12T17:43:22.224288468Z" level=info msg="CreateContainer within sandbox \"b0a9e3995a1bf233cdc9e51faed4c5e8a5b29ae9fedb4f7fd97255f971c94dec\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"99e7a34e76d0798f86953911a24cbc8c6b505533f631f9c02bffdd4922514663\""
Sep 12 17:43:22.226739 containerd[1460]: time="2025-09-12T17:43:22.225258995Z" level=info msg="StartContainer for \"99e7a34e76d0798f86953911a24cbc8c6b505533f631f9c02bffdd4922514663\""
Sep 12 17:43:22.294248 systemd[1]: Started cri-containerd-99e7a34e76d0798f86953911a24cbc8c6b505533f631f9c02bffdd4922514663.scope - libcontainer container 99e7a34e76d0798f86953911a24cbc8c6b505533f631f9c02bffdd4922514663.
Sep 12 17:43:22.329131 containerd[1460]: time="2025-09-12T17:43:22.329078591Z" level=info msg="StartContainer for \"99e7a34e76d0798f86953911a24cbc8c6b505533f631f9c02bffdd4922514663\" returns successfully"
Sep 12 17:43:22.339225 systemd[1]: cri-containerd-99e7a34e76d0798f86953911a24cbc8c6b505533f631f9c02bffdd4922514663.scope: Deactivated successfully.
Sep 12 17:43:22.383594 containerd[1460]: time="2025-09-12T17:43:22.383497613Z" level=info msg="shim disconnected" id=99e7a34e76d0798f86953911a24cbc8c6b505533f631f9c02bffdd4922514663 namespace=k8s.io
Sep 12 17:43:22.383594 containerd[1460]: time="2025-09-12T17:43:22.383566443Z" level=warning msg="cleaning up after shim disconnected" id=99e7a34e76d0798f86953911a24cbc8c6b505533f631f9c02bffdd4922514663 namespace=k8s.io
Sep 12 17:43:22.383594 containerd[1460]: time="2025-09-12T17:43:22.383575240Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:43:22.838755 kubelet[2556]: E0912 17:43:22.838707 2556 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 12 17:43:23.212238 kubelet[2556]: E0912 17:43:23.212197 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:43:23.215773 containerd[1460]: time="2025-09-12T17:43:23.215720862Z" level=info msg="CreateContainer within sandbox \"b0a9e3995a1bf233cdc9e51faed4c5e8a5b29ae9fedb4f7fd97255f971c94dec\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 12 17:43:23.247760 containerd[1460]: time="2025-09-12T17:43:23.247712820Z" level=info msg="CreateContainer within sandbox \"b0a9e3995a1bf233cdc9e51faed4c5e8a5b29ae9fedb4f7fd97255f971c94dec\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"54303ee2983791eeba0e8515057d677fd0a3f2885b514af5ed40e54d6b116f70\""
Sep 12 17:43:23.248440 containerd[1460]: time="2025-09-12T17:43:23.248268571Z" level=info msg="StartContainer for \"54303ee2983791eeba0e8515057d677fd0a3f2885b514af5ed40e54d6b116f70\""
Sep 12 17:43:23.286025 systemd[1]: Started cri-containerd-54303ee2983791eeba0e8515057d677fd0a3f2885b514af5ed40e54d6b116f70.scope - libcontainer container 54303ee2983791eeba0e8515057d677fd0a3f2885b514af5ed40e54d6b116f70.
Sep 12 17:43:23.323166 containerd[1460]: time="2025-09-12T17:43:23.323108480Z" level=info msg="StartContainer for \"54303ee2983791eeba0e8515057d677fd0a3f2885b514af5ed40e54d6b116f70\" returns successfully"
Sep 12 17:43:23.324358 systemd[1]: cri-containerd-54303ee2983791eeba0e8515057d677fd0a3f2885b514af5ed40e54d6b116f70.scope: Deactivated successfully.
Sep 12 17:43:23.354988 containerd[1460]: time="2025-09-12T17:43:23.354880481Z" level=info msg="shim disconnected" id=54303ee2983791eeba0e8515057d677fd0a3f2885b514af5ed40e54d6b116f70 namespace=k8s.io
Sep 12 17:43:23.354988 containerd[1460]: time="2025-09-12T17:43:23.354956505Z" level=warning msg="cleaning up after shim disconnected" id=54303ee2983791eeba0e8515057d677fd0a3f2885b514af5ed40e54d6b116f70 namespace=k8s.io
Sep 12 17:43:23.354988 containerd[1460]: time="2025-09-12T17:43:23.354966103Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:43:23.565648 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54303ee2983791eeba0e8515057d677fd0a3f2885b514af5ed40e54d6b116f70-rootfs.mount: Deactivated successfully.
Sep 12 17:43:24.216449 kubelet[2556]: E0912 17:43:24.216395 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:43:24.218153 containerd[1460]: time="2025-09-12T17:43:24.218115566Z" level=info msg="CreateContainer within sandbox \"b0a9e3995a1bf233cdc9e51faed4c5e8a5b29ae9fedb4f7fd97255f971c94dec\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 12 17:43:24.234766 containerd[1460]: time="2025-09-12T17:43:24.234688011Z" level=info msg="CreateContainer within sandbox \"b0a9e3995a1bf233cdc9e51faed4c5e8a5b29ae9fedb4f7fd97255f971c94dec\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"12b1c464c93f61f0c3035abad2d2970ba4db7488fa5608aaa5588ff89a2001b1\""
Sep 12 17:43:24.235320 containerd[1460]: time="2025-09-12T17:43:24.235274502Z" level=info msg="StartContainer for \"12b1c464c93f61f0c3035abad2d2970ba4db7488fa5608aaa5588ff89a2001b1\""
Sep 12 17:43:24.273864 systemd[1]: Started cri-containerd-12b1c464c93f61f0c3035abad2d2970ba4db7488fa5608aaa5588ff89a2001b1.scope - libcontainer container 12b1c464c93f61f0c3035abad2d2970ba4db7488fa5608aaa5588ff89a2001b1.
Sep 12 17:43:24.305453 systemd[1]: cri-containerd-12b1c464c93f61f0c3035abad2d2970ba4db7488fa5608aaa5588ff89a2001b1.scope: Deactivated successfully.
Sep 12 17:43:24.307355 containerd[1460]: time="2025-09-12T17:43:24.307303220Z" level=info msg="StartContainer for \"12b1c464c93f61f0c3035abad2d2970ba4db7488fa5608aaa5588ff89a2001b1\" returns successfully"
Sep 12 17:43:24.335530 containerd[1460]: time="2025-09-12T17:43:24.335450535Z" level=info msg="shim disconnected" id=12b1c464c93f61f0c3035abad2d2970ba4db7488fa5608aaa5588ff89a2001b1 namespace=k8s.io
Sep 12 17:43:24.335530 containerd[1460]: time="2025-09-12T17:43:24.335519375Z" level=warning msg="cleaning up after shim disconnected" id=12b1c464c93f61f0c3035abad2d2970ba4db7488fa5608aaa5588ff89a2001b1 namespace=k8s.io
Sep 12 17:43:24.335530 containerd[1460]: time="2025-09-12T17:43:24.335534343Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:43:24.565614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12b1c464c93f61f0c3035abad2d2970ba4db7488fa5608aaa5588ff89a2001b1-rootfs.mount: Deactivated successfully.
Sep 12 17:43:25.221297 kubelet[2556]: E0912 17:43:25.221235 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:43:25.223408 containerd[1460]: time="2025-09-12T17:43:25.223364434Z" level=info msg="CreateContainer within sandbox \"b0a9e3995a1bf233cdc9e51faed4c5e8a5b29ae9fedb4f7fd97255f971c94dec\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 12 17:43:25.243092 containerd[1460]: time="2025-09-12T17:43:25.243005118Z" level=info msg="CreateContainer within sandbox \"b0a9e3995a1bf233cdc9e51faed4c5e8a5b29ae9fedb4f7fd97255f971c94dec\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5a77c8c5f8838dac741e3fd8cf9c05f199429a6e7337c8d05b0b1f05b730b0de\""
Sep 12 17:43:25.243661 containerd[1460]: time="2025-09-12T17:43:25.243576699Z" level=info msg="StartContainer for \"5a77c8c5f8838dac741e3fd8cf9c05f199429a6e7337c8d05b0b1f05b730b0de\""
Sep 12 17:43:25.278902 systemd[1]: Started cri-containerd-5a77c8c5f8838dac741e3fd8cf9c05f199429a6e7337c8d05b0b1f05b730b0de.scope - libcontainer container 5a77c8c5f8838dac741e3fd8cf9c05f199429a6e7337c8d05b0b1f05b730b0de.
Sep 12 17:43:25.323728 containerd[1460]: time="2025-09-12T17:43:25.320988162Z" level=info msg="StartContainer for \"5a77c8c5f8838dac741e3fd8cf9c05f199429a6e7337c8d05b0b1f05b730b0de\" returns successfully"
Sep 12 17:43:25.818747 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 12 17:43:25.822426 kubelet[2556]: I0912 17:43:25.822369 2556 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T17:43:25Z","lastTransitionTime":"2025-09-12T17:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 12 17:43:26.226061 kubelet[2556]: E0912 17:43:26.225992 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:43:27.733803 kubelet[2556]: E0912 17:43:27.733760 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:43:29.249881 systemd-networkd[1392]: lxc_health: Link UP
Sep 12 17:43:29.262519 systemd-networkd[1392]: lxc_health: Gained carrier
Sep 12 17:43:29.736751 kubelet[2556]: E0912 17:43:29.735992 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:43:29.759840 kubelet[2556]: I0912 17:43:29.759737 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kq2k2" podStartSLOduration=8.759687380999999 podStartE2EDuration="8.759687381s" podCreationTimestamp="2025-09-12 17:43:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:43:26.244172521 +0000 UTC m=+103.622152818" watchObservedRunningTime="2025-09-12 17:43:29.759687381 +0000 UTC m=+107.137667678"
Sep 12 17:43:29.991280 systemd[1]: run-containerd-runc-k8s.io-5a77c8c5f8838dac741e3fd8cf9c05f199429a6e7337c8d05b0b1f05b730b0de-runc.6VRlbY.mount: Deactivated successfully.
Sep 12 17:43:30.242662 kubelet[2556]: E0912 17:43:30.242480 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:43:30.880244 systemd-networkd[1392]: lxc_health: Gained IPv6LL
Sep 12 17:43:31.245478 kubelet[2556]: E0912 17:43:31.245433 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:43:34.302296 sshd[4420]: pam_unix(sshd:session): session closed for user core
Sep 12 17:43:34.307075 systemd[1]: sshd@31-10.0.0.128:22-10.0.0.1:52298.service: Deactivated successfully.
Sep 12 17:43:34.309066 systemd[1]: session-32.scope: Deactivated successfully.
Sep 12 17:43:34.309768 systemd-logind[1444]: Session 32 logged out. Waiting for processes to exit.
Sep 12 17:43:34.310806 systemd-logind[1444]: Removed session 32.