Aug 13 07:03:21.950987 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025
Aug 13 07:03:21.951016 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:03:21.951028 kernel: BIOS-provided physical RAM map:
Aug 13 07:03:21.951035 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 13 07:03:21.951041 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Aug 13 07:03:21.951047 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Aug 13 07:03:21.951055 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Aug 13 07:03:21.951062 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Aug 13 07:03:21.951068 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Aug 13 07:03:21.951075 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Aug 13 07:03:21.951086 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Aug 13 07:03:21.951093 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Aug 13 07:03:21.951102 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Aug 13 07:03:21.951109 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Aug 13 07:03:21.951121 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Aug 13 07:03:21.951128 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Aug 13 07:03:21.951137 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Aug 13 07:03:21.951144 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Aug 13 07:03:21.951151 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Aug 13 07:03:21.951158 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 07:03:21.951165 kernel: NX (Execute Disable) protection: active
Aug 13 07:03:21.951172 kernel: APIC: Static calls initialized
Aug 13 07:03:21.951179 kernel: efi: EFI v2.7 by EDK II
Aug 13 07:03:21.951186 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Aug 13 07:03:21.951193 kernel: SMBIOS 2.8 present.
Aug 13 07:03:21.951199 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Aug 13 07:03:21.951206 kernel: Hypervisor detected: KVM
Aug 13 07:03:21.951215 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 07:03:21.951222 kernel: kvm-clock: using sched offset of 5326524516 cycles
Aug 13 07:03:21.951230 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 07:03:21.951237 kernel: tsc: Detected 2794.750 MHz processor
Aug 13 07:03:21.951244 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 07:03:21.951252 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 07:03:21.951259 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Aug 13 07:03:21.951266 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Aug 13 07:03:21.951273 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 07:03:21.951283 kernel: Using GB pages for direct mapping
Aug 13 07:03:21.951290 kernel: Secure boot disabled
Aug 13 07:03:21.951297 kernel: ACPI: Early table checksum verification disabled
Aug 13 07:03:21.951304 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Aug 13 07:03:21.951334 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Aug 13 07:03:21.951347 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:03:21.951355 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:03:21.951686 kernel: ACPI: FACS 0x000000009CBDD000 000040
Aug 13 07:03:21.951703 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:03:21.951728 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:03:21.951748 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:03:21.951764 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:03:21.951772 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Aug 13 07:03:21.951780 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Aug 13 07:03:21.951791 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Aug 13 07:03:21.951799 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Aug 13 07:03:21.951806 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Aug 13 07:03:21.951814 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Aug 13 07:03:21.951821 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Aug 13 07:03:21.951828 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Aug 13 07:03:21.951836 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Aug 13 07:03:21.951856 kernel: No NUMA configuration found
Aug 13 07:03:21.951865 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Aug 13 07:03:21.951873 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Aug 13 07:03:21.951883 kernel: Zone ranges:
Aug 13 07:03:21.951891 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 07:03:21.951899 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Aug 13 07:03:21.951906 kernel: Normal empty
Aug 13 07:03:21.951913 kernel: Movable zone start for each node
Aug 13 07:03:21.951921 kernel: Early memory node ranges
Aug 13 07:03:21.951928 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Aug 13 07:03:21.951935 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Aug 13 07:03:21.951943 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Aug 13 07:03:21.951953 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Aug 13 07:03:21.951960 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Aug 13 07:03:21.951976 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Aug 13 07:03:21.951987 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Aug 13 07:03:21.951994 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:03:21.952002 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Aug 13 07:03:21.952009 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Aug 13 07:03:21.952019 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:03:21.952026 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Aug 13 07:03:21.952037 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Aug 13 07:03:21.952045 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Aug 13 07:03:21.952052 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 07:03:21.952059 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 07:03:21.952067 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 07:03:21.952074 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 07:03:21.952082 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 07:03:21.952089 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 07:03:21.952100 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 07:03:21.952122 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 07:03:21.952130 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 07:03:21.952146 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 07:03:21.952162 kernel: TSC deadline timer available
Aug 13 07:03:21.952178 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Aug 13 07:03:21.952187 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 07:03:21.952194 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 07:03:21.952202 kernel: kvm-guest: setup PV sched yield
Aug 13 07:03:21.952209 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Aug 13 07:03:21.952216 kernel: Booting paravirtualized kernel on KVM
Aug 13 07:03:21.952227 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 07:03:21.952239 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Aug 13 07:03:21.952248 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Aug 13 07:03:21.952255 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Aug 13 07:03:21.952263 kernel: pcpu-alloc: [0] 0 1 2 3
Aug 13 07:03:21.952270 kernel: kvm-guest: PV spinlocks enabled
Aug 13 07:03:21.952277 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 07:03:21.952292 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:03:21.952306 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 07:03:21.952313 kernel: random: crng init done
Aug 13 07:03:21.952321 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 07:03:21.952328 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 07:03:21.952341 kernel: Fallback order for Node 0: 0
Aug 13 07:03:21.952354 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Aug 13 07:03:21.952361 kernel: Policy zone: DMA32
Aug 13 07:03:21.952375 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 07:03:21.952388 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 171124K reserved, 0K cma-reserved)
Aug 13 07:03:21.952400 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 13 07:03:21.952407 kernel: ftrace: allocating 37968 entries in 149 pages
Aug 13 07:03:21.952415 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 07:03:21.952422 kernel: Dynamic Preempt: voluntary
Aug 13 07:03:21.952438 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 07:03:21.952466 kernel: rcu: RCU event tracing is enabled.
Aug 13 07:03:21.955812 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 13 07:03:21.955834 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 07:03:21.955865 kernel: Rude variant of Tasks RCU enabled.
Aug 13 07:03:21.955874 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 07:03:21.955882 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 07:03:21.955890 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 13 07:03:21.955903 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Aug 13 07:03:21.955925 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 07:03:21.955944 kernel: Console: colour dummy device 80x25
Aug 13 07:03:21.955957 kernel: printk: console [ttyS0] enabled
Aug 13 07:03:21.958987 kernel: ACPI: Core revision 20230628
Aug 13 07:03:21.959371 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 07:03:21.959379 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 07:03:21.959387 kernel: x2apic enabled
Aug 13 07:03:21.959395 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 07:03:21.959403 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 07:03:21.959411 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 07:03:21.959418 kernel: kvm-guest: setup PV IPIs
Aug 13 07:03:21.959426 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 07:03:21.959434 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Aug 13 07:03:21.959444 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Aug 13 07:03:21.959452 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 07:03:21.959460 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 07:03:21.959468 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 07:03:21.959475 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 07:03:21.959483 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 07:03:21.959491 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 07:03:21.959499 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Aug 13 07:03:21.959509 kernel: RETBleed: Mitigation: untrained return thunk
Aug 13 07:03:21.959517 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 07:03:21.959525 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 07:03:21.959542 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 07:03:21.959557 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 07:03:21.959565 kernel: x86/bugs: return thunk changed
Aug 13 07:03:21.959579 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 07:03:21.959595 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 07:03:21.959603 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 07:03:21.959614 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 07:03:21.959622 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 07:03:21.959630 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Aug 13 07:03:21.959639 kernel: Freeing SMP alternatives memory: 32K
Aug 13 07:03:21.959647 kernel: pid_max: default: 32768 minimum: 301
Aug 13 07:03:21.959655 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 07:03:21.959663 kernel: landlock: Up and running.
Aug 13 07:03:21.959670 kernel: SELinux: Initializing.
Aug 13 07:03:21.959678 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 07:03:21.959692 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 07:03:21.959700 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Aug 13 07:03:21.959713 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 07:03:21.959729 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 07:03:21.959747 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 07:03:21.959761 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 07:03:21.959769 kernel: ... version: 0
Aug 13 07:03:21.959777 kernel: ... bit width: 48
Aug 13 07:03:21.959800 kernel: ... generic registers: 6
Aug 13 07:03:21.959819 kernel: ... value mask: 0000ffffffffffff
Aug 13 07:03:21.959856 kernel: ... max period: 00007fffffffffff
Aug 13 07:03:21.959872 kernel: ... fixed-purpose events: 0
Aug 13 07:03:21.959886 kernel: ... event mask: 000000000000003f
Aug 13 07:03:21.959899 kernel: signal: max sigframe size: 1776
Aug 13 07:03:21.959907 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 07:03:21.959919 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 07:03:21.959929 kernel: smp: Bringing up secondary CPUs ...
Aug 13 07:03:21.959936 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 07:03:21.959948 kernel: .... node #0, CPUs: #1 #2 #3
Aug 13 07:03:21.959956 kernel: smp: Brought up 1 node, 4 CPUs
Aug 13 07:03:21.959964 kernel: smpboot: Max logical packages: 1
Aug 13 07:03:21.959979 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Aug 13 07:03:21.959998 kernel: devtmpfs: initialized
Aug 13 07:03:21.960012 kernel: x86/mm: Memory block size: 128MB
Aug 13 07:03:21.960019 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Aug 13 07:03:21.961324 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Aug 13 07:03:21.961334 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Aug 13 07:03:21.961346 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Aug 13 07:03:21.961360 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Aug 13 07:03:21.961368 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 07:03:21.961376 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 13 07:03:21.961397 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 07:03:21.961407 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 07:03:21.961426 kernel: audit: initializing netlink subsys (disabled)
Aug 13 07:03:21.961436 kernel: audit: type=2000 audit(1755068600.571:1): state=initialized audit_enabled=0 res=1
Aug 13 07:03:21.961444 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 07:03:21.961469 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 07:03:21.962408 kernel: cpuidle: using governor menu
Aug 13 07:03:21.962416 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 07:03:21.962436 kernel: dca service started, version 1.12.1
Aug 13 07:03:21.962446 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Aug 13 07:03:21.962453 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 07:03:21.962462 kernel: PCI: Using configuration type 1 for base access
Aug 13 07:03:21.962470 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 07:03:21.962478 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 07:03:21.962490 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 07:03:21.962497 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 07:03:21.962517 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 07:03:21.962525 kernel: ACPI: Added _OSI(Module Device)
Aug 13 07:03:21.962533 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 07:03:21.962541 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 07:03:21.962554 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 07:03:21.962567 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 13 07:03:21.962575 kernel: ACPI: Interpreter enabled
Aug 13 07:03:21.962593 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 07:03:21.962601 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 07:03:21.962610 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 07:03:21.962618 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 07:03:21.962626 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 07:03:21.962634 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 07:03:21.964264 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 07:03:21.968645 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 07:03:21.968857 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 07:03:21.968870 kernel: PCI host bridge to bus 0000:00
Aug 13 07:03:21.969097 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 07:03:21.969235 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 07:03:21.969366 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 07:03:21.969512 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Aug 13 07:03:21.969679 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 07:03:21.969818 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Aug 13 07:03:21.969985 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 07:03:21.970169 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Aug 13 07:03:21.970487 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Aug 13 07:03:21.970632 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Aug 13 07:03:21.970773 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Aug 13 07:03:21.971004 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Aug 13 07:03:21.971523 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Aug 13 07:03:21.971667 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 07:03:21.973984 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Aug 13 07:03:21.974161 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Aug 13 07:03:21.974295 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Aug 13 07:03:21.974530 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Aug 13 07:03:21.974795 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Aug 13 07:03:21.974981 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Aug 13 07:03:21.975332 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Aug 13 07:03:21.975539 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Aug 13 07:03:21.975774 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 13 07:03:21.975940 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Aug 13 07:03:21.978624 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Aug 13 07:03:21.978777 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Aug 13 07:03:21.978933 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Aug 13 07:03:21.979091 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Aug 13 07:03:21.979235 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 07:03:21.979387 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Aug 13 07:03:21.979559 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Aug 13 07:03:21.979706 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Aug 13 07:03:21.981787 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Aug 13 07:03:21.981937 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Aug 13 07:03:21.981949 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 07:03:21.981957 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 07:03:21.981965 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 07:03:21.981973 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 07:03:21.981981 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 07:03:21.981995 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 07:03:21.982003 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 07:03:21.982011 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 07:03:21.982018 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 07:03:21.982026 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 07:03:21.982034 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 07:03:21.982042 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 07:03:21.982050 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 07:03:21.982058 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 07:03:21.982070 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 07:03:21.982078 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 07:03:21.982086 kernel: iommu: Default domain type: Translated
Aug 13 07:03:21.982097 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 07:03:21.982105 kernel: efivars: Registered efivars operations
Aug 13 07:03:21.982113 kernel: PCI: Using ACPI for IRQ routing
Aug 13 07:03:21.982121 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 07:03:21.982129 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Aug 13 07:03:21.982137 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Aug 13 07:03:21.982148 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Aug 13 07:03:21.982156 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Aug 13 07:03:21.982301 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 07:03:21.982452 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 07:03:21.982595 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 07:03:21.982608 kernel: vgaarb: loaded
Aug 13 07:03:21.982616 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 07:03:21.982624 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 07:03:21.982632 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 07:03:21.982651 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 07:03:21.982659 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 07:03:21.982667 kernel: pnp: PnP ACPI init
Aug 13 07:03:21.982871 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 07:03:21.982885 kernel: pnp: PnP ACPI: found 6 devices
Aug 13 07:03:21.982894 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 07:03:21.982902 kernel: NET: Registered PF_INET protocol family
Aug 13 07:03:21.982910 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 07:03:21.982923 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 07:03:21.982931 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 07:03:21.982940 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 07:03:21.982948 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 07:03:21.982956 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 07:03:21.982964 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 07:03:21.982972 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 07:03:21.982980 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 07:03:21.982994 kernel: NET: Registered PF_XDP protocol family
Aug 13 07:03:21.983137 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Aug 13 07:03:21.983282 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Aug 13 07:03:21.983401 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 07:03:21.983545 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 07:03:21.983673 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 07:03:21.983787 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Aug 13 07:03:21.983920 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 07:03:21.984099 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Aug 13 07:03:21.984111 kernel: PCI: CLS 0 bytes, default 64
Aug 13 07:03:21.984120 kernel: Initialise system trusted keyrings
Aug 13 07:03:21.984128 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 07:03:21.984136 kernel: Key type asymmetric registered
Aug 13 07:03:21.984144 kernel: Asymmetric key parser 'x509' registered
Aug 13 07:03:21.984152 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 07:03:21.984161 kernel: io scheduler mq-deadline registered
Aug 13 07:03:21.984169 kernel: io scheduler kyber registered
Aug 13 07:03:21.984181 kernel: io scheduler bfq registered
Aug 13 07:03:21.984190 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 07:03:21.984198 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 07:03:21.984207 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 07:03:21.984215 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Aug 13 07:03:21.984232 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 07:03:21.984240 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 07:03:21.984249 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 07:03:21.984261 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 07:03:21.984274 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 07:03:21.984455 kernel: rtc_cmos 00:04: RTC can wake from S4
Aug 13 07:03:21.984469 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 07:03:21.984597 kernel: rtc_cmos 00:04: registered as rtc0
Aug 13 07:03:21.984720 kernel: rtc_cmos 00:04: setting system clock to 2025-08-13T07:03:21 UTC (1755068601)
Aug 13 07:03:21.984853 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 07:03:21.984864 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 07:03:21.984872 kernel: efifb: probing for efifb
Aug 13 07:03:21.984885 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Aug 13 07:03:21.984893 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Aug 13 07:03:21.984902 kernel: efifb: scrolling: redraw
Aug 13 07:03:21.984910 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Aug 13 07:03:21.984918 kernel: Console: switching to colour frame buffer device 100x37
Aug 13 07:03:21.984926 kernel: fb0: EFI VGA frame buffer device
Aug 13 07:03:21.984962 kernel: pstore: Using crash dump compression: deflate
Aug 13 07:03:21.984974 kernel: pstore: Registered efi_pstore as persistent store backend
Aug 13 07:03:21.984982 kernel: NET: Registered PF_INET6 protocol family
Aug 13 07:03:21.984996 kernel: Segment Routing with IPv6
Aug 13 07:03:21.985018 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 07:03:21.985035 kernel: NET: Registered PF_PACKET protocol family
Aug 13 07:03:21.985044 kernel: Key type dns_resolver registered
Aug 13 07:03:21.985053 kernel: IPI shorthand broadcast: enabled
Aug 13 07:03:21.985061 kernel: sched_clock: Marking stable (994002148, 111042883)->(1181273576, -76228545)
Aug 13 07:03:21.985070 kernel: registered taskstats version 1
Aug 13 07:03:21.985078 kernel: Loading compiled-in X.509 certificates
Aug 13 07:03:21.985086 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041'
Aug 13 07:03:21.985098 kernel: Key type .fscrypt registered
Aug 13 07:03:21.985109 kernel: Key type fscrypt-provisioning registered
Aug 13 07:03:21.985118 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 07:03:21.985126 kernel: ima: Allocated hash algorithm: sha1
Aug 13 07:03:21.985135 kernel: ima: No architecture policies found
Aug 13 07:03:21.985143 kernel: clk: Disabling unused clocks
Aug 13 07:03:21.985160 kernel: Freeing unused kernel image (initmem) memory: 42876K
Aug 13 07:03:21.985169 kernel: Write protecting the kernel read-only data: 36864k
Aug 13 07:03:21.985186 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Aug 13 07:03:21.985204 kernel: Run /init as init process
Aug 13 07:03:21.985212 kernel: with arguments:
Aug 13 07:03:21.985220 kernel: /init
Aug 13 07:03:21.985233 kernel: with environment:
Aug 13 07:03:21.985242 kernel: HOME=/
Aug 13 07:03:21.985255 kernel: TERM=linux
Aug 13 07:03:21.985269 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 07:03:21.985280 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:03:21.985297 systemd[1]: Detected virtualization kvm.
Aug 13 07:03:21.985306 systemd[1]: Detected architecture x86-64.
Aug 13 07:03:21.985320 systemd[1]: Running in initrd.
Aug 13 07:03:21.985329 systemd[1]: No hostname configured, using default hostname.
Aug 13 07:03:21.985338 systemd[1]: Hostname set to .
Aug 13 07:03:21.985362 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 07:03:21.985371 systemd[1]: Queued start job for default target initrd.target.
Aug 13 07:03:21.985388 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:03:21.985408 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:03:21.985424 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 07:03:21.985433 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:03:21.985454 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 07:03:21.985475 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 07:03:21.985499 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 07:03:21.985508 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 07:03:21.985517 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:03:21.985531 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:03:21.985541 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:03:21.985557 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:03:21.985578 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:03:21.985596 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:03:21.985608 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:03:21.985621 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:03:21.985633 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 07:03:21.985642 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 07:03:21.985651 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:03:21.985659 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:03:21.985668 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:03:21.985683 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:03:21.985692 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 07:03:21.985700 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:03:21.985709 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 07:03:21.985718 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 07:03:21.985727 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:03:21.985744 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:03:21.985754 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:03:21.985773 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 07:03:21.985794 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:03:21.985822 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 07:03:21.985896 systemd-journald[193]: Collecting audit messages is disabled.
Aug 13 07:03:21.985937 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:03:21.985956 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:03:21.985966 systemd-journald[193]: Journal started
Aug 13 07:03:21.985989 systemd-journald[193]: Runtime Journal (/run/log/journal/a106a4515fc3456f828c89cab682a0c5) is 6.0M, max 48.3M, 42.2M free.
Aug 13 07:03:21.969194 systemd-modules-load[194]: Inserted module 'overlay'
Aug 13 07:03:21.995927 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:03:22.001975 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:03:22.008600 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:03:22.058622 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 07:03:22.062191 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:03:22.071292 kernel: Bridge firewalling registered
Aug 13 07:03:22.071441 systemd-modules-load[194]: Inserted module 'br_netfilter'
Aug 13 07:03:22.071493 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:03:22.076002 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:03:22.090533 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:03:22.111013 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:03:22.123687 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:03:22.124372 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:03:22.132121 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:03:22.140705 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:03:22.153986 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 07:03:22.165050 systemd-resolved[222]: Positive Trust Anchors:
Aug 13 07:03:22.165075 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:03:22.165107 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:03:22.168235 systemd-resolved[222]: Defaulting to hostname 'linux'.
Aug 13 07:03:22.169669 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:03:22.177080 dracut-cmdline[229]: dracut-dracut-053
Aug 13 07:03:22.177061 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:03:22.180526 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:03:22.269886 kernel: SCSI subsystem initialized
Aug 13 07:03:22.278952 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 07:03:22.289872 kernel: iscsi: registered transport (tcp)
Aug 13 07:03:22.311870 kernel: iscsi: registered transport (qla4xxx)
Aug 13 07:03:22.311915 kernel: QLogic iSCSI HBA Driver
Aug 13 07:03:22.372125 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:03:22.393995 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 07:03:22.423753 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 07:03:22.423806 kernel: device-mapper: uevent: version 1.0.3
Aug 13 07:03:22.423823 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 07:03:22.466866 kernel: raid6: avx2x4 gen() 27427 MB/s
Aug 13 07:03:22.483865 kernel: raid6: avx2x2 gen() 23592 MB/s
Aug 13 07:03:22.501104 kernel: raid6: avx2x1 gen() 22491 MB/s
Aug 13 07:03:22.501127 kernel: raid6: using algorithm avx2x4 gen() 27427 MB/s
Aug 13 07:03:22.519086 kernel: raid6: .... xor() 7095 MB/s, rmw enabled
Aug 13 07:03:22.519106 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 07:03:22.540873 kernel: xor: automatically using best checksumming function avx
Aug 13 07:03:22.708882 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 07:03:22.723673 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:03:22.737316 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:03:22.751180 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Aug 13 07:03:22.757092 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:03:22.762166 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 07:03:22.780553 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation
Aug 13 07:03:22.817826 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:03:22.823037 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:03:22.896358 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:03:22.906056 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 07:03:22.925885 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:03:22.930293 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:03:22.932828 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:03:22.935493 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:03:22.950272 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 07:03:22.949088 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 07:03:22.958881 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Aug 13 07:03:22.962684 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:03:22.972286 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 13 07:03:22.984905 kernel: libata version 3.00 loaded.
Aug 13 07:03:22.985339 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:03:22.986618 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:03:22.987581 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:03:22.994135 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 07:03:22.994169 kernel: GPT:9289727 != 19775487
Aug 13 07:03:22.994186 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 07:03:22.993858 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:03:23.001254 kernel: GPT:9289727 != 19775487
Aug 13 07:03:23.001273 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 07:03:23.001283 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:03:23.001294 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 07:03:23.001315 kernel: AES CTR mode by8 optimization enabled
Aug 13 07:03:22.994115 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:03:23.001002 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:03:23.009058 kernel: ahci 0000:00:1f.2: version 3.0
Aug 13 07:03:23.012209 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 13 07:03:23.011636 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:03:23.018196 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Aug 13 07:03:23.018437 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 13 07:03:23.019059 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:03:23.019245 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:03:23.037865 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (469)
Aug 13 07:03:23.040227 kernel: scsi host0: ahci
Aug 13 07:03:23.043446 kernel: scsi host1: ahci
Aug 13 07:03:23.043645 kernel: scsi host2: ahci
Aug 13 07:03:23.045869 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (467)
Aug 13 07:03:23.049773 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 13 07:03:23.051350 kernel: scsi host3: ahci
Aug 13 07:03:23.052885 kernel: scsi host4: ahci
Aug 13 07:03:23.055585 kernel: scsi host5: ahci
Aug 13 07:03:23.055868 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Aug 13 07:03:23.055882 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Aug 13 07:03:23.057087 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Aug 13 07:03:23.057109 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Aug 13 07:03:23.057863 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Aug 13 07:03:23.058281 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 13 07:03:23.061088 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Aug 13 07:03:23.070587 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 07:03:23.076617 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 13 07:03:23.077824 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 13 07:03:23.088998 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 07:03:23.091017 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:03:23.097645 disk-uuid[564]: Primary Header is updated.
Aug 13 07:03:23.097645 disk-uuid[564]: Secondary Entries is updated.
Aug 13 07:03:23.097645 disk-uuid[564]: Secondary Header is updated.
Aug 13 07:03:23.101870 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:03:23.105871 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:03:23.114110 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:03:23.126129 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:03:23.151950 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:03:23.371000 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 07:03:23.371050 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 13 07:03:23.371069 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 13 07:03:23.371867 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 07:03:23.372877 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 07:03:23.373867 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Aug 13 07:03:23.374997 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Aug 13 07:03:23.375009 kernel: ata3.00: applying bridge limits
Aug 13 07:03:23.375864 kernel: ata3.00: configured for UDMA/100
Aug 13 07:03:23.376868 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Aug 13 07:03:23.418344 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Aug 13 07:03:23.418575 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 13 07:03:23.432872 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Aug 13 07:03:24.106866 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:03:24.107136 disk-uuid[565]: The operation has completed successfully.
Aug 13 07:03:24.138855 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 07:03:24.138989 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 07:03:24.161978 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 07:03:24.165470 sh[595]: Success
Aug 13 07:03:24.177905 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Aug 13 07:03:24.218115 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 07:03:24.238749 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 07:03:24.243427 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 07:03:24.255082 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad
Aug 13 07:03:24.255114 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:03:24.255126 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 07:03:24.256060 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 07:03:24.257338 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 07:03:24.261266 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 07:03:24.262810 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 07:03:24.272977 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 07:03:24.275500 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 07:03:24.286508 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:03:24.286567 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:03:24.286579 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:03:24.289873 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:03:24.299835 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 07:03:24.301411 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:03:24.312994 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 07:03:24.319049 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 07:03:24.384048 ignition[688]: Ignition 2.19.0
Aug 13 07:03:24.384923 ignition[688]: Stage: fetch-offline
Aug 13 07:03:24.384999 ignition[688]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:03:24.385012 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:03:24.385154 ignition[688]: parsed url from cmdline: ""
Aug 13 07:03:24.385159 ignition[688]: no config URL provided
Aug 13 07:03:24.385165 ignition[688]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 07:03:24.385175 ignition[688]: no config at "/usr/lib/ignition/user.ign"
Aug 13 07:03:24.385208 ignition[688]: op(1): [started] loading QEMU firmware config module
Aug 13 07:03:24.385217 ignition[688]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 13 07:03:24.393662 ignition[688]: op(1): [finished] loading QEMU firmware config module
Aug 13 07:03:24.408193 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:03:24.423198 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:03:24.435913 ignition[688]: parsing config with SHA512: 215e0d02857aa1d5ddadb4bb38417036aba60263fbb97015aba07e86b0de9a2f5a87b60715c3e83ebdef1f3e29f41a641b8cdf8d631c906599c458b0d41de35f
Aug 13 07:03:24.440368 unknown[688]: fetched base config from "system"
Aug 13 07:03:24.440944 ignition[688]: fetch-offline: fetch-offline passed
Aug 13 07:03:24.440383 unknown[688]: fetched user config from "qemu"
Aug 13 07:03:24.441034 ignition[688]: Ignition finished successfully
Aug 13 07:03:24.443814 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:03:24.451783 systemd-networkd[784]: lo: Link UP
Aug 13 07:03:24.451794 systemd-networkd[784]: lo: Gained carrier
Aug 13 07:03:24.454775 systemd-networkd[784]: Enumeration completed
Aug 13 07:03:24.454885 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:03:24.455530 systemd[1]: Reached target network.target - Network.
Aug 13 07:03:24.455782 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 13 07:03:24.461027 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:03:24.461036 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:03:24.462063 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 07:03:24.466762 systemd-networkd[784]: eth0: Link UP
Aug 13 07:03:24.466771 systemd-networkd[784]: eth0: Gained carrier
Aug 13 07:03:24.466779 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:03:24.477929 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.23/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 13 07:03:24.480620 ignition[787]: Ignition 2.19.0
Aug 13 07:03:24.480633 ignition[787]: Stage: kargs
Aug 13 07:03:24.480872 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:03:24.480885 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:03:24.481752 ignition[787]: kargs: kargs passed
Aug 13 07:03:24.481802 ignition[787]: Ignition finished successfully
Aug 13 07:03:24.484806 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 07:03:24.495050 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 07:03:24.512000 ignition[796]: Ignition 2.19.0
Aug 13 07:03:24.512012 ignition[796]: Stage: disks
Aug 13 07:03:24.512192 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:03:24.512206 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:03:24.515956 ignition[796]: disks: disks passed
Aug 13 07:03:24.516007 ignition[796]: Ignition finished successfully
Aug 13 07:03:24.519344 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 07:03:24.519811 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 07:03:24.521421 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 07:03:24.523587 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:03:24.524067 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:03:24.524375 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:03:24.532078 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 07:03:24.549149 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 07:03:24.555927 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 07:03:24.562983 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 07:03:24.654865 kernel: EXT4-fs (vda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none.
Aug 13 07:03:24.655905 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 07:03:24.658179 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:03:24.671951 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:03:24.673861 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 07:03:24.674528 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 07:03:24.674565 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 07:03:24.682306 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815)
Aug 13 07:03:24.682337 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:03:24.674587 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:03:24.685936 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:03:24.685950 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:03:24.687864 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:03:24.694874 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:03:24.699823 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 07:03:24.701867 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 07:03:24.766129 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 07:03:24.772691 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Aug 13 07:03:24.777160 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 07:03:24.781376 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 07:03:24.872185 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 07:03:24.877063 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 07:03:24.879755 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 07:03:24.888856 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:03:24.906088 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 07:03:24.913256 ignition[929]: INFO : Ignition 2.19.0
Aug 13 07:03:24.913256 ignition[929]: INFO : Stage: mount
Aug 13 07:03:24.914972 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:03:24.914972 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:03:24.917661 ignition[929]: INFO : mount: mount passed
Aug 13 07:03:24.918386 ignition[929]: INFO : Ignition finished successfully
Aug 13 07:03:24.921286 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 07:03:24.931932 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 07:03:25.254729 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 07:03:25.267989 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:03:25.274862 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941)
Aug 13 07:03:25.274888 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:03:25.276667 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:03:25.276681 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:03:25.280109 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:03:25.281403 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:03:25.308692 ignition[958]: INFO : Ignition 2.19.0
Aug 13 07:03:25.308692 ignition[958]: INFO : Stage: files
Aug 13 07:03:25.310839 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:03:25.310839 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:03:25.310839 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 07:03:25.314538 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 07:03:25.314538 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 07:03:25.314538 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 07:03:25.314538 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 07:03:25.320019 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 07:03:25.320019 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 13 07:03:25.320019 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 13 07:03:25.320019 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 07:03:25.320019 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 13 07:03:25.314972 unknown[958]: wrote ssh authorized keys file for user: core
Aug 13 07:03:25.510163 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 13 07:03:25.693178 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 07:03:25.693178 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 07:03:25.697145 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Aug 13 07:03:25.764037 systemd-networkd[784]: eth0: Gained IPv6LL
Aug 13 07:03:26.005901 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Aug 13 07:03:26.317689 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 07:03:26.319570 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 07:03:26.321239 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 07:03:26.323168 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:03:26.324956 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:03:26.326575 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:03:26.328277 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:03:26.330220 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:03:26.330220 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:03:26.333705 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:03:26.335563 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:03:26.337228 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:03:26.339719 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:03:26.342055 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:03:26.344206 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Aug 13 07:03:26.821897 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Aug 13 07:03:28.296282 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:03:28.296282 ignition[958]: INFO : files: op(d): [started] processing unit "containerd.service"
Aug 13 07:03:28.300551 ignition[958]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Aug 13 07:03:28.300551 ignition[958]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Aug 13 07:03:28.300551 ignition[958]: INFO : files: op(d): [finished] processing unit "containerd.service"
Aug 13 07:03:28.300551 ignition[958]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Aug 13 07:03:28.300551 ignition[958]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:03:28.300551 ignition[958]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:03:28.300551 ignition[958]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Aug 13 07:03:28.300551 ignition[958]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Aug 13 07:03:28.300551 ignition[958]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 07:03:28.300551 ignition[958]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 07:03:28.300551 ignition[958]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Aug 13 07:03:28.300551 ignition[958]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Aug 13 07:03:28.415678 ignition[958]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 07:03:28.422196 ignition[958]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 07:03:28.423982 ignition[958]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 13 07:03:28.423982 ignition[958]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 07:03:28.423982 ignition[958]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 07:03:28.423982 ignition[958]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:03:28.423982 ignition[958]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:03:28.423982 ignition[958]: INFO : files: files passed
Aug 13 07:03:28.423982 ignition[958]: INFO : Ignition finished successfully
Aug 13 07:03:28.425921 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 07:03:28.435103 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 07:03:28.437073 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 07:03:28.439012 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 07:03:28.439141 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 07:03:28.448008 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory
Aug 13 07:03:28.451236 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:03:28.451236 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:03:28.455663 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:03:28.453761 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:03:28.456199 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 07:03:28.468995 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 07:03:28.494520 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 07:03:28.494645 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 07:03:28.495422 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 07:03:28.498400 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 07:03:28.498780 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 07:03:28.499583 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 07:03:28.528805 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:03:28.536101 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 07:03:28.546990 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:03:28.549280 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:03:28.551625 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 07:03:28.553446 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 07:03:28.554535 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:03:28.557218 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 07:03:28.559382 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 07:03:28.561164 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 07:03:28.563180 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:03:28.565343 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 07:03:28.567406 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 07:03:28.569304 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:03:28.571595 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 07:03:28.573541 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 07:03:28.575408 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 07:03:28.576894 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 07:03:28.577827 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:03:28.579959 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:03:28.581964 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:03:28.584162 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 07:03:28.585094 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:03:28.587474 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 07:03:28.588426 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:03:28.590500 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 07:03:28.591522 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:03:28.593751 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 07:03:28.595363 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 07:03:28.596396 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:03:28.598946 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 07:03:28.600624 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 07:03:28.602375 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 07:03:28.603219 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:03:28.605055 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 07:03:28.605915 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:03:28.607795 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 07:03:28.608908 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:03:28.611254 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 07:03:28.612191 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 07:03:28.623005 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 07:03:28.624769 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 07:03:28.625739 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:03:28.628808 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 07:03:28.630487 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 07:03:28.631529 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:03:28.634141 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 07:03:28.635207 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:03:28.637445 ignition[1013]: INFO : Ignition 2.19.0
Aug 13 07:03:28.637445 ignition[1013]: INFO : Stage: umount
Aug 13 07:03:28.639221 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:03:28.639221 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:03:28.639221 ignition[1013]: INFO : umount: umount passed
Aug 13 07:03:28.639221 ignition[1013]: INFO : Ignition finished successfully
Aug 13 07:03:28.641230 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 07:03:28.641354 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 07:03:28.642975 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 07:03:28.643088 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 07:03:28.645688 systemd[1]: Stopped target network.target - Network.
Aug 13 07:03:28.647419 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 07:03:28.647477 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 07:03:28.649674 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 07:03:28.649723 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 07:03:28.651570 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 07:03:28.651619 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 07:03:28.653488 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 07:03:28.653538 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 07:03:28.655550 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 07:03:28.657458 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 07:03:28.658884 systemd-networkd[784]: eth0: DHCPv6 lease lost
Aug 13 07:03:28.660457 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 07:03:28.661017 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 07:03:28.661156 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 07:03:28.662586 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 07:03:28.662707 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 07:03:28.665181 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 07:03:28.665239 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:03:28.667058 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 07:03:28.667113 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 07:03:28.675945 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 07:03:28.676860 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 07:03:28.676919 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:03:28.679043 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:03:28.681248 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 07:03:28.681374 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 07:03:28.696237 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 07:03:28.696308 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:03:28.698273 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 07:03:28.698325 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:03:28.700335 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 07:03:28.700395 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:03:28.702903 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 07:03:28.703082 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:03:28.705156 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 07:03:28.705272 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 07:03:28.707943 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 07:03:28.708007 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:03:28.709132 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 07:03:28.709175 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:03:28.710938 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 07:03:28.710992 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:03:28.713382 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 07:03:28.713443 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:03:28.715237 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:03:28.715291 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:03:28.727984 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 07:03:28.729066 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 07:03:28.729126 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:03:28.731392 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:03:28.731447 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:03:28.737025 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 07:03:28.737155 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 07:03:28.739406 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 07:03:28.742095 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 07:03:28.761693 systemd[1]: Switching root.
Aug 13 07:03:28.799246 systemd-journald[193]: Journal stopped
Aug 13 07:03:30.186330 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Aug 13 07:03:30.186459 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 07:03:30.186476 kernel: SELinux: policy capability open_perms=1
Aug 13 07:03:30.186502 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 07:03:30.186514 kernel: SELinux: policy capability always_check_network=0
Aug 13 07:03:30.186528 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 07:03:30.186539 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 07:03:30.186556 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 07:03:30.186570 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 07:03:30.186588 kernel: audit: type=1403 audit(1755068609.199:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 07:03:30.186601 systemd[1]: Successfully loaded SELinux policy in 41.546ms.
Aug 13 07:03:30.186632 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.668ms.
Aug 13 07:03:30.186647 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:03:30.186662 systemd[1]: Detected virtualization kvm.
Aug 13 07:03:30.186675 systemd[1]: Detected architecture x86-64.
Aug 13 07:03:30.186690 systemd[1]: Detected first boot.
Aug 13 07:03:30.186704 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 07:03:30.186720 zram_generator::config[1079]: No configuration found.
Aug 13 07:03:30.186736 systemd[1]: Populated /etc with preset unit settings.
Aug 13 07:03:30.186748 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 07:03:30.186760 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 13 07:03:30.186776 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 07:03:30.186793 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 07:03:30.186808 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 07:03:30.186823 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 07:03:30.186836 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 07:03:30.186861 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 07:03:30.186875 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 07:03:30.186887 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 07:03:30.186902 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:03:30.186917 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:03:30.186931 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 07:03:30.186949 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 07:03:30.186962 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 07:03:30.186975 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:03:30.186987 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 07:03:30.186999 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:03:30.187014 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 07:03:30.187030 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:03:30.187047 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:03:30.187059 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:03:30.187075 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:03:30.187087 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 07:03:30.187099 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 07:03:30.187112 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 07:03:30.187124 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 07:03:30.187137 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:03:30.187149 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:03:30.187161 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:03:30.187174 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 07:03:30.187189 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 07:03:30.187204 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 07:03:30.187216 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 07:03:30.187233 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:03:30.187250 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 07:03:30.187262 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 07:03:30.187274 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 07:03:30.187287 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 07:03:30.187304 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:03:30.187322 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:03:30.187336 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 07:03:30.187358 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:03:30.187370 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:03:30.187383 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:03:30.187395 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 07:03:30.187407 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:03:30.187420 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 07:03:30.187435 kernel: fuse: init (API version 7.39)
Aug 13 07:03:30.187447 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Aug 13 07:03:30.187462 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Aug 13 07:03:30.187474 kernel: loop: module loaded
Aug 13 07:03:30.187487 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:03:30.187499 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:03:30.187511 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 07:03:30.187523 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 07:03:30.187556 systemd-journald[1176]: Collecting audit messages is disabled.
Aug 13 07:03:30.187589 systemd-journald[1176]: Journal started
Aug 13 07:03:30.187611 systemd-journald[1176]: Runtime Journal (/run/log/journal/a106a4515fc3456f828c89cab682a0c5) is 6.0M, max 48.3M, 42.2M free.
Aug 13 07:03:30.193582 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:03:30.193678 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:03:30.195920 kernel: ACPI: bus type drm_connector registered
Aug 13 07:03:30.198793 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:03:30.200401 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 07:03:30.202002 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 07:03:30.203454 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 07:03:30.204695 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 07:03:30.205987 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 07:03:30.207325 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 07:03:30.208770 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 07:03:30.210429 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:03:30.212178 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 07:03:30.212432 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 07:03:30.214273 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:03:30.214509 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:03:30.216096 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:03:30.216320 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:03:30.217801 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:03:30.218039 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:03:30.219663 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 07:03:30.219891 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 07:03:30.221274 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:03:30.221534 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:03:30.223161 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:03:30.224886 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 07:03:30.226489 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 07:03:30.243096 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 07:03:30.253018 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 07:03:30.255700 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 07:03:30.256819 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 07:03:30.261168 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 07:03:30.265981 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 07:03:30.268003 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:03:30.276184 systemd-journald[1176]: Time spent on flushing to /var/log/journal/a106a4515fc3456f828c89cab682a0c5 is 29.429ms for 981 entries.
Aug 13 07:03:30.276184 systemd-journald[1176]: System Journal (/var/log/journal/a106a4515fc3456f828c89cab682a0c5) is 8.0M, max 195.6M, 187.6M free.
Aug 13 07:03:30.320640 systemd-journald[1176]: Received client request to flush runtime journal.
Aug 13 07:03:30.270167 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 07:03:30.272929 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:03:30.276975 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:03:30.281814 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:03:30.284805 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 07:03:30.286118 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 07:03:30.296128 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:03:30.312038 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 13 07:03:30.314463 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 07:03:30.316393 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:03:30.317776 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 07:03:30.322625 systemd-tmpfiles[1212]: ACLs are not supported, ignoring.
Aug 13 07:03:30.322645 systemd-tmpfiles[1212]: ACLs are not supported, ignoring.
Aug 13 07:03:30.325367 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 07:03:30.329735 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:03:30.337186 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 07:03:30.338618 udevadm[1219]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Aug 13 07:03:30.365434 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 07:03:30.375090 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:03:30.392308 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
Aug 13 07:03:30.392330 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
Aug 13 07:03:30.398515 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:03:31.134775 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 07:03:31.149076 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:03:31.185019 systemd-udevd[1240]: Using default interface naming scheme 'v255'.
Aug 13 07:03:31.210687 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:03:31.225173 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:03:31.239320 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 07:03:31.265901 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1250)
Aug 13 07:03:31.290127 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 07:03:31.295628 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Aug 13 07:03:31.380245 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 07:03:31.405617 systemd-networkd[1248]: lo: Link UP
Aug 13 07:03:31.406075 systemd-networkd[1248]: lo: Gained carrier
Aug 13 07:03:31.410406 systemd-networkd[1248]: Enumeration completed
Aug 13 07:03:31.411004 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:03:31.412408 systemd-networkd[1248]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:03:31.412470 systemd-networkd[1248]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 07:03:31.413312 systemd-networkd[1248]: eth0: Link UP Aug 13 07:03:31.413412 systemd-networkd[1248]: eth0: Gained carrier Aug 13 07:03:31.413478 systemd-networkd[1248]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:03:31.439869 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 13 07:03:31.515164 kernel: ACPI: button: Power Button [PWRF] Aug 13 07:03:31.523320 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Aug 13 07:03:31.528036 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 07:03:31.528208 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Aug 13 07:03:31.528404 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 07:03:31.534169 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 07:03:31.538021 systemd-networkd[1248]: eth0: DHCPv4 address 10.0.0.23/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 07:03:31.540522 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:03:31.545858 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Aug 13 07:03:31.551129 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:03:31.551473 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:03:31.560062 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Aug 13 07:03:31.561872 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 07:03:31.641534 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:03:31.672290 kernel: kvm_amd: TSC scaling supported
Aug 13 07:03:31.672336 kernel: kvm_amd: Nested Virtualization enabled
Aug 13 07:03:31.672376 kernel: kvm_amd: Nested Paging enabled
Aug 13 07:03:31.672389 kernel: kvm_amd: LBR virtualization supported
Aug 13 07:03:31.673415 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Aug 13 07:03:31.673434 kernel: kvm_amd: Virtual GIF supported
Aug 13 07:03:31.692863 kernel: EDAC MC: Ver: 3.0.0
Aug 13 07:03:31.724460 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 13 07:03:31.741971 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 13 07:03:31.752826 lvm[1290]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:03:31.793111 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 13 07:03:31.794562 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:03:31.806136 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 13 07:03:31.811868 lvm[1293]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:03:31.848299 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 13 07:03:31.849718 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 07:03:31.850964 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 07:03:31.850991 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:03:31.852017 systemd[1]: Reached target machines.target - Containers.
Aug 13 07:03:31.854119 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 13 07:03:31.868985 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 07:03:31.872014 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 07:03:31.873257 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:03:31.874549 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 07:03:31.878122 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 13 07:03:31.881632 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 07:03:31.883941 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 07:03:31.894067 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 07:03:31.899886 kernel: loop0: detected capacity change from 0 to 140768
Aug 13 07:03:31.909803 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 07:03:31.910816 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 13 07:03:31.922112 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 07:03:31.949872 kernel: loop1: detected capacity change from 0 to 142488
Aug 13 07:03:31.978590 kernel: loop2: detected capacity change from 0 to 221472
Aug 13 07:03:32.014864 kernel: loop3: detected capacity change from 0 to 140768
Aug 13 07:03:32.031865 kernel: loop4: detected capacity change from 0 to 142488
Aug 13 07:03:32.040865 kernel: loop5: detected capacity change from 0 to 221472
Aug 13 07:03:32.046989 (sd-merge)[1313]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Aug 13 07:03:32.047711 (sd-merge)[1313]: Merged extensions into '/usr'.
Aug 13 07:03:32.158490 systemd[1]: Reloading requested from client PID 1301 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 07:03:32.158520 systemd[1]: Reloading...
Aug 13 07:03:32.238887 zram_generator::config[1338]: No configuration found.
Aug 13 07:03:32.328678 ldconfig[1297]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 07:03:32.388632 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:03:32.457897 systemd[1]: Reloading finished in 298 ms.
Aug 13 07:03:32.482151 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 07:03:32.483970 systemd-networkd[1248]: eth0: Gained IPv6LL
Aug 13 07:03:32.483973 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 07:03:32.489315 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 13 07:03:32.512139 systemd[1]: Starting ensure-sysext.service...
Aug 13 07:03:32.514674 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:03:32.520015 systemd[1]: Reloading requested from client PID 1387 ('systemctl') (unit ensure-sysext.service)...
Aug 13 07:03:32.520037 systemd[1]: Reloading...
Aug 13 07:03:32.644019 zram_generator::config[1410]: No configuration found.
Aug 13 07:03:32.663403 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 07:03:32.663806 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 07:03:32.664876 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 07:03:32.665196 systemd-tmpfiles[1388]: ACLs are not supported, ignoring.
Aug 13 07:03:32.665290 systemd-tmpfiles[1388]: ACLs are not supported, ignoring.
Aug 13 07:03:32.669239 systemd-tmpfiles[1388]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:03:32.669253 systemd-tmpfiles[1388]: Skipping /boot
Aug 13 07:03:32.681628 systemd-tmpfiles[1388]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:03:32.681646 systemd-tmpfiles[1388]: Skipping /boot
Aug 13 07:03:32.777781 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:03:32.859376 systemd[1]: Reloading finished in 338 ms.
Aug 13 07:03:32.883697 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:03:32.898059 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 07:03:32.901864 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 07:03:32.905201 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 07:03:32.910687 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:03:32.915179 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 07:03:32.922279 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:03:32.922582 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:03:32.924994 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:03:32.929637 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:03:32.934118 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:03:32.937016 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:03:32.937225 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:03:32.938722 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:03:32.939382 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:03:32.942030 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:03:32.942640 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:03:32.947644 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 07:03:32.950193 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:03:32.950556 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:03:32.960323 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:03:32.960670 augenrules[1491]: No rules
Aug 13 07:03:32.960700 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:03:32.964632 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 07:03:32.966712 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 07:03:32.970116 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 07:03:33.010954 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 07:03:33.012978 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 07:03:33.020218 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:03:33.020495 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:03:33.032092 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:03:33.034672 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:03:33.039053 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:03:33.042266 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:03:33.043499 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:03:33.043753 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 07:03:33.044036 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:03:33.047789 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:03:33.048123 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:03:33.049995 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:03:33.050284 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:03:33.052025 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:03:33.052248 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:03:33.053894 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:03:33.054115 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:03:33.060525 systemd[1]: Finished ensure-sysext.service.
Aug 13 07:03:33.065510 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:03:33.065566 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:03:33.069894 systemd-resolved[1465]: Positive Trust Anchors:
Aug 13 07:03:33.069910 systemd-resolved[1465]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:03:33.069942 systemd-resolved[1465]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:03:33.073004 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 13 07:03:33.074285 systemd-resolved[1465]: Defaulting to hostname 'linux'.
Aug 13 07:03:33.076360 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:03:33.077540 systemd[1]: Reached target network.target - Network.
Aug 13 07:03:33.078455 systemd[1]: Reached target network-online.target - Network is Online.
Aug 13 07:03:33.079550 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:03:33.141511 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 13 07:03:33.142452 systemd-timesyncd[1522]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Aug 13 07:03:33.142505 systemd-timesyncd[1522]: Initial clock synchronization to Wed 2025-08-13 07:03:33.185549 UTC.
Aug 13 07:03:33.143434 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:03:33.144707 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 07:03:33.145997 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 07:03:33.147246 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 07:03:33.148524 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 07:03:33.148551 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:03:33.149460 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 07:03:33.150701 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 07:03:33.151947 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 07:03:33.153182 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:03:33.155286 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 07:03:33.158875 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 07:03:33.161815 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 07:03:33.169061 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 07:03:33.170234 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:03:33.171227 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:03:33.172430 systemd[1]: System is tainted: cgroupsv1
Aug 13 07:03:33.172486 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:03:33.172510 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:03:33.174195 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 07:03:33.176490 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Aug 13 07:03:33.178749 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 07:03:33.183960 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 07:03:33.186862 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 07:03:33.191061 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 07:03:33.194126 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:03:33.201870 jq[1529]: false
Aug 13 07:03:33.198170 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 07:03:33.204007 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 13 07:03:33.208890 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 07:03:33.212161 dbus-daemon[1528]: [system] SELinux support is enabled
Aug 13 07:03:33.222078 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 07:03:33.225501 extend-filesystems[1532]: Found loop3
Aug 13 07:03:33.225501 extend-filesystems[1532]: Found loop4
Aug 13 07:03:33.225501 extend-filesystems[1532]: Found loop5
Aug 13 07:03:33.225501 extend-filesystems[1532]: Found sr0
Aug 13 07:03:33.225501 extend-filesystems[1532]: Found vda
Aug 13 07:03:33.225501 extend-filesystems[1532]: Found vda1
Aug 13 07:03:33.225501 extend-filesystems[1532]: Found vda2
Aug 13 07:03:33.225501 extend-filesystems[1532]: Found vda3
Aug 13 07:03:33.225501 extend-filesystems[1532]: Found usr
Aug 13 07:03:33.225501 extend-filesystems[1532]: Found vda4
Aug 13 07:03:33.225501 extend-filesystems[1532]: Found vda6
Aug 13 07:03:33.225501 extend-filesystems[1532]: Found vda7
Aug 13 07:03:33.225501 extend-filesystems[1532]: Found vda9
Aug 13 07:03:33.225501 extend-filesystems[1532]: Checking size of /dev/vda9
Aug 13 07:03:33.225653 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 07:03:33.269456 extend-filesystems[1532]: Resized partition /dev/vda9
Aug 13 07:03:33.231123 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 07:03:33.272530 extend-filesystems[1569]: resize2fs 1.47.1 (20-May-2024)
Aug 13 07:03:33.232821 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 07:03:33.236970 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 07:03:33.281264 update_engine[1556]: I20250813 07:03:33.265821 1556 main.cc:92] Flatcar Update Engine starting
Aug 13 07:03:33.281264 update_engine[1556]: I20250813 07:03:33.267354 1556 update_check_scheduler.cc:74] Next update check in 5m3s
Aug 13 07:03:33.242235 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 07:03:33.281665 jq[1559]: true
Aug 13 07:03:33.283919 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1259)
Aug 13 07:03:33.247935 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 07:03:33.254288 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 07:03:33.254634 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 07:03:33.258300 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 07:03:33.258694 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 13 07:03:33.273733 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 13 07:03:33.279266 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 07:03:33.279583 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 13 07:03:33.300333 jq[1574]: true
Aug 13 07:03:33.302818 (ntainerd)[1575]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 07:03:33.354365 systemd-logind[1555]: Watching system buttons on /dev/input/event1 (Power Button)
Aug 13 07:03:33.354397 systemd-logind[1555]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 07:03:33.357174 systemd-logind[1555]: New seat seat0.
Aug 13 07:03:33.360395 systemd[1]: coreos-metadata.service: Deactivated successfully.
Aug 13 07:03:33.362621 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Aug 13 07:03:33.362112 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Aug 13 07:03:33.369428 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 07:03:33.382442 dbus-daemon[1528]: [system] Successfully activated service 'org.freedesktop.systemd1'
Aug 13 07:03:33.384882 tar[1573]: linux-amd64/helm
Aug 13 07:03:33.395456 systemd[1]: Started update-engine.service - Update Engine.
Aug 13 07:03:33.561931 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Aug 13 07:03:33.401368 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 13 07:03:33.562160 extend-filesystems[1569]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 13 07:03:33.562160 extend-filesystems[1569]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 13 07:03:33.562160 extend-filesystems[1569]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Aug 13 07:03:33.567008 sshd_keygen[1570]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 07:03:33.401967 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 07:03:33.567270 extend-filesystems[1532]: Resized filesystem in /dev/vda9
Aug 13 07:03:33.402101 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 13 07:03:33.406009 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 07:03:33.406120 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 13 07:03:33.408181 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 07:03:33.503624 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 13 07:03:33.545396 locksmithd[1609]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 07:03:33.558767 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 07:03:33.559259 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 07:03:33.578391 bash[1608]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 07:03:33.582422 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 13 07:03:33.585699 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Aug 13 07:03:33.605779 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 13 07:03:33.676732 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 13 07:03:33.686209 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 07:03:33.686731 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 13 07:03:33.698198 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 13 07:03:33.747368 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 13 07:03:33.760125 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 13 07:03:33.764821 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 13 07:03:33.766686 systemd[1]: Reached target getty.target - Login Prompts.
Aug 13 07:03:34.148767 containerd[1575]: time="2025-08-13T07:03:34.148640347Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Aug 13 07:03:34.233951 containerd[1575]: time="2025-08-13T07:03:34.233799894Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:03:34.238590 containerd[1575]: time="2025-08-13T07:03:34.236728964Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:03:34.238590 containerd[1575]: time="2025-08-13T07:03:34.236782398Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 07:03:34.238590 containerd[1575]: time="2025-08-13T07:03:34.236803484Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 07:03:34.238590 containerd[1575]: time="2025-08-13T07:03:34.237068363Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 13 07:03:34.238590 containerd[1575]: time="2025-08-13T07:03:34.237090252Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 13 07:03:34.238590 containerd[1575]: time="2025-08-13T07:03:34.237170247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:03:34.238590 containerd[1575]: time="2025-08-13T07:03:34.237187918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:03:34.238590 containerd[1575]: time="2025-08-13T07:03:34.237471441Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:03:34.238590 containerd[1575]: time="2025-08-13T07:03:34.237492496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 07:03:34.238590 containerd[1575]: time="2025-08-13T07:03:34.237509836Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:03:34.238590 containerd[1575]: time="2025-08-13T07:03:34.237519660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 07:03:34.238933 containerd[1575]: time="2025-08-13T07:03:34.237667163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:03:34.238933 containerd[1575]: time="2025-08-13T07:03:34.237981960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:03:34.238933 containerd[1575]: time="2025-08-13T07:03:34.238200245Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:03:34.238933 containerd[1575]: time="2025-08-13T07:03:34.238216801Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 07:03:34.238933 containerd[1575]: time="2025-08-13T07:03:34.238342655Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 07:03:34.238933 containerd[1575]: time="2025-08-13T07:03:34.238414664Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 07:03:34.244678 containerd[1575]: time="2025-08-13T07:03:34.244658111Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 07:03:34.244798 containerd[1575]: time="2025-08-13T07:03:34.244766084Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 07:03:34.244982 containerd[1575]: time="2025-08-13T07:03:34.244965764Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 13 07:03:34.245043 containerd[1575]: time="2025-08-13T07:03:34.245030841Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 13 07:03:34.245099 containerd[1575]: time="2025-08-13T07:03:34.245086757Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 07:03:34.245296 containerd[1575]: time="2025-08-13T07:03:34.245279616Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 07:03:34.245701 containerd[1575]: time="2025-08-13T07:03:34.245683830Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 07:03:34.245921 containerd[1575]: time="2025-08-13T07:03:34.245903723Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 13 07:03:34.245981 containerd[1575]: time="2025-08-13T07:03:34.245969713Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 13 07:03:34.246031 containerd[1575]: time="2025-08-13T07:03:34.246019672Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 13 07:03:34.246090 containerd[1575]: time="2025-08-13T07:03:34.246076019Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 07:03:34.246146 containerd[1575]: time="2025-08-13T07:03:34.246134134Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 07:03:34.246219 containerd[1575]: time="2025-08-13T07:03:34.246205268Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 07:03:34.246273 containerd[1575]: time="2025-08-13T07:03:34.246261887Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 07:03:34.246329 containerd[1575]: time="2025-08-13T07:03:34.246317440Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 07:03:34.246379 containerd[1575]: time="2025-08-13T07:03:34.246368041Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 07:03:34.246435 containerd[1575]: time="2025-08-13T07:03:34.246421716Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 07:03:34.246483 containerd[1575]: time="2025-08-13T07:03:34.246472688Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 07:03:34.246558 containerd[1575]: time="2025-08-13T07:03:34.246544797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 07:03:34.246641 containerd[1575]: time="2025-08-13T07:03:34.246601355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 07:03:34.246730 containerd[1575]: time="2025-08-13T07:03:34.246716611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 07:03:34.246785 containerd[1575]: time="2025-08-13T07:03:34.246773570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 07:03:34.246869 containerd[1575]: time="2025-08-13T07:03:34.246854268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 07:03:34.246933 containerd[1575]: time="2025-08-13T07:03:34.246922098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 07:03:34.246984 containerd[1575]: time="2025-08-13T07:03:34.246973140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 07:03:34.247034 containerd[1575]: time="2025-08-13T07:03:34.247023471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 07:03:34.247088 containerd[1575]: time="2025-08-13T07:03:34.247076583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 13 07:03:34.247140 containerd[1575]: time="2025-08-13T07:03:34.247129022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 13 07:03:34.247208 containerd[1575]: time="2025-08-13T07:03:34.247194370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 07:03:34.247259 containerd[1575]: time="2025-08-13T07:03:34.247247984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 13 07:03:34.247312 containerd[1575]: time="2025-08-13T07:03:34.247300785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 13 07:03:34.247371 containerd[1575]: time="2025-08-13T07:03:34.247359744Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 13 07:03:34.247435 containerd[1575]: time="2025-08-13T07:03:34.247423505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 13 07:03:34.247487 containerd[1575]: time="2025-08-13T07:03:34.247475874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 13 07:03:34.247536 containerd[1575]: time="2025-08-13T07:03:34.247523923Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 13 07:03:34.247674 containerd[1575]: time="2025-08-13T07:03:34.247657783Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 13 07:03:34.247739 containerd[1575]: time="2025-08-13T07:03:34.247724880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 13 07:03:34.247786 containerd[1575]: time="2025-08-13T07:03:34.247775882Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 13 07:03:34.247858 containerd[1575]: time="2025-08-13T07:03:34.247830381Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 13 07:03:34.247921 containerd[1575]: time="2025-08-13T07:03:34.247908357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 13 07:03:34.247992 containerd[1575]: time="2025-08-13T07:03:34.247979029Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 13 07:03:34.248045 containerd[1575]: time="2025-08-13T07:03:34.248034170Z" level=info msg="NRI interface is disabled by configuration."
Aug 13 07:03:34.248116 containerd[1575]: time="2025-08-13T07:03:34.248103295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1 Aug 13 07:03:34.249675 containerd[1575]: time="2025-08-13T07:03:34.249603268Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 07:03:34.250081 containerd[1575]: time="2025-08-13T07:03:34.250061246Z" level=info msg="Connect containerd service" Aug 13 07:03:34.250236 containerd[1575]: time="2025-08-13T07:03:34.250218353Z" level=info msg="using legacy CRI server" Aug 13 07:03:34.250303 containerd[1575]: time="2025-08-13T07:03:34.250289136Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 07:03:34.250536 containerd[1575]: time="2025-08-13T07:03:34.250516443Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 07:03:34.251380 containerd[1575]: time="2025-08-13T07:03:34.251352758Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:03:34.252011 containerd[1575]: time="2025-08-13T07:03:34.251991209Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 07:03:34.252154 containerd[1575]: time="2025-08-13T07:03:34.252136703Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Aug 13 07:03:34.252252 containerd[1575]: time="2025-08-13T07:03:34.252051284Z" level=info msg="Start subscribing containerd event" Aug 13 07:03:34.252382 containerd[1575]: time="2025-08-13T07:03:34.252362764Z" level=info msg="Start recovering state" Aug 13 07:03:34.252546 containerd[1575]: time="2025-08-13T07:03:34.252527054Z" level=info msg="Start event monitor" Aug 13 07:03:34.252650 containerd[1575]: time="2025-08-13T07:03:34.252633791Z" level=info msg="Start snapshots syncer" Aug 13 07:03:34.252721 containerd[1575]: time="2025-08-13T07:03:34.252707366Z" level=info msg="Start cni network conf syncer for default" Aug 13 07:03:34.252770 containerd[1575]: time="2025-08-13T07:03:34.252759835Z" level=info msg="Start streaming server" Aug 13 07:03:34.253067 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 07:03:34.254422 containerd[1575]: time="2025-08-13T07:03:34.254398681Z" level=info msg="containerd successfully booted in 0.107024s" Aug 13 07:03:34.496938 tar[1573]: linux-amd64/LICENSE Aug 13 07:03:34.498052 tar[1573]: linux-amd64/README.md Aug 13 07:03:34.518978 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 07:03:35.257430 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:03:35.261016 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 07:03:35.262293 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 07:03:35.264839 systemd[1]: Started sshd@0-10.0.0.23:22-10.0.0.1:46506.service - OpenSSH per-connection server daemon (10.0.0.1:46506). Aug 13 07:03:35.265834 (kubelet)[1660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:03:35.268358 systemd[1]: Startup finished in 8.619s (kernel) + 6.108s (userspace) = 14.727s. 
Aug 13 07:03:35.429331 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 46506 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:03:35.431824 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:03:35.441251 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 13 07:03:35.451059 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 13 07:03:35.453640 systemd-logind[1555]: New session 1 of user core.
Aug 13 07:03:35.472009 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 13 07:03:35.565483 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 13 07:03:35.572256 (systemd)[1676]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 07:03:35.714529 systemd[1676]: Queued start job for default target default.target.
Aug 13 07:03:35.715153 systemd[1676]: Created slice app.slice - User Application Slice.
Aug 13 07:03:35.715174 systemd[1676]: Reached target paths.target - Paths.
Aug 13 07:03:35.715187 systemd[1676]: Reached target timers.target - Timers.
Aug 13 07:03:35.727093 systemd[1676]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 13 07:03:35.737950 systemd[1676]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 13 07:03:35.738026 systemd[1676]: Reached target sockets.target - Sockets.
Aug 13 07:03:35.738041 systemd[1676]: Reached target basic.target - Basic System.
Aug 13 07:03:35.738087 systemd[1676]: Reached target default.target - Main User Target.
Aug 13 07:03:35.738123 systemd[1676]: Startup finished in 156ms.
Aug 13 07:03:35.738461 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 13 07:03:35.740103 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 13 07:03:35.799064 systemd[1]: Started sshd@1-10.0.0.23:22-10.0.0.1:46522.service - OpenSSH per-connection server daemon (10.0.0.1:46522).
Aug 13 07:03:35.852483 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 46522 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:03:35.854880 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:03:35.859987 systemd-logind[1555]: New session 2 of user core.
Aug 13 07:03:35.881338 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 13 07:03:35.940209 sshd[1689]: pam_unix(sshd:session): session closed for user core
Aug 13 07:03:35.949123 systemd[1]: Started sshd@2-10.0.0.23:22-10.0.0.1:46524.service - OpenSSH per-connection server daemon (10.0.0.1:46524).
Aug 13 07:03:35.949624 systemd[1]: sshd@1-10.0.0.23:22-10.0.0.1:46522.service: Deactivated successfully.
Aug 13 07:03:35.956461 systemd[1]: session-2.scope: Deactivated successfully.
Aug 13 07:03:35.958722 systemd-logind[1555]: Session 2 logged out. Waiting for processes to exit.
Aug 13 07:03:35.960367 systemd-logind[1555]: Removed session 2.
Aug 13 07:03:35.986906 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 46524 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:03:35.989548 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:03:35.994357 systemd-logind[1555]: New session 3 of user core.
Aug 13 07:03:36.019781 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 13 07:03:36.077681 sshd[1695]: pam_unix(sshd:session): session closed for user core
Aug 13 07:03:36.086108 systemd[1]: Started sshd@3-10.0.0.23:22-10.0.0.1:46530.service - OpenSSH per-connection server daemon (10.0.0.1:46530).
Aug 13 07:03:36.086862 systemd[1]: sshd@2-10.0.0.23:22-10.0.0.1:46524.service: Deactivated successfully.
Aug 13 07:03:36.093445 systemd-logind[1555]: Session 3 logged out. Waiting for processes to exit.
Aug 13 07:03:36.094340 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 07:03:36.097313 systemd-logind[1555]: Removed session 3.
Aug 13 07:03:36.124756 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 46530 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:03:36.126865 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:03:36.132333 systemd-logind[1555]: New session 4 of user core.
Aug 13 07:03:36.141239 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 13 07:03:36.188864 kubelet[1660]: E0813 07:03:36.188722 1660 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:03:36.194397 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:03:36.194820 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:03:36.309091 sshd[1703]: pam_unix(sshd:session): session closed for user core
Aug 13 07:03:36.316063 systemd[1]: Started sshd@4-10.0.0.23:22-10.0.0.1:46544.service - OpenSSH per-connection server daemon (10.0.0.1:46544).
Aug 13 07:03:36.316542 systemd[1]: sshd@3-10.0.0.23:22-10.0.0.1:46530.service: Deactivated successfully.
Aug 13 07:03:36.319028 systemd-logind[1555]: Session 4 logged out. Waiting for processes to exit.
Aug 13 07:03:36.320289 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 07:03:36.321365 systemd-logind[1555]: Removed session 4.
Aug 13 07:03:36.344455 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 46544 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:03:36.346292 sshd[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:03:36.351043 systemd-logind[1555]: New session 5 of user core.
Aug 13 07:03:36.361201 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 13 07:03:36.423237 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 13 07:03:36.423610 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:03:36.447416 sudo[1722]: pam_unix(sudo:session): session closed for user root
Aug 13 07:03:36.450295 sshd[1714]: pam_unix(sshd:session): session closed for user core
Aug 13 07:03:36.462341 systemd[1]: Started sshd@5-10.0.0.23:22-10.0.0.1:46550.service - OpenSSH per-connection server daemon (10.0.0.1:46550).
Aug 13 07:03:36.463182 systemd[1]: sshd@4-10.0.0.23:22-10.0.0.1:46544.service: Deactivated successfully.
Aug 13 07:03:36.467588 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 07:03:36.468787 systemd-logind[1555]: Session 5 logged out. Waiting for processes to exit.
Aug 13 07:03:36.469965 systemd-logind[1555]: Removed session 5.
Aug 13 07:03:36.491917 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 46550 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:03:36.493559 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:03:36.497731 systemd-logind[1555]: New session 6 of user core.
Aug 13 07:03:36.507104 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 13 07:03:36.562038 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 13 07:03:36.562397 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:03:36.566291 sudo[1732]: pam_unix(sudo:session): session closed for user root
Aug 13 07:03:36.574340 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 13 07:03:36.574769 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:03:36.594132 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug 13 07:03:36.596067 auditctl[1735]: No rules
Aug 13 07:03:36.597543 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 07:03:36.597965 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug 13 07:03:36.600181 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 07:03:36.634479 augenrules[1754]: No rules
Aug 13 07:03:36.636526 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 07:03:36.638073 sudo[1731]: pam_unix(sudo:session): session closed for user root
Aug 13 07:03:36.640314 sshd[1724]: pam_unix(sshd:session): session closed for user core
Aug 13 07:03:36.653091 systemd[1]: Started sshd@6-10.0.0.23:22-10.0.0.1:46564.service - OpenSSH per-connection server daemon (10.0.0.1:46564).
Aug 13 07:03:36.653610 systemd[1]: sshd@5-10.0.0.23:22-10.0.0.1:46550.service: Deactivated successfully.
Aug 13 07:03:36.657178 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 07:03:36.657232 systemd-logind[1555]: Session 6 logged out. Waiting for processes to exit.
Aug 13 07:03:36.659310 systemd-logind[1555]: Removed session 6.
Aug 13 07:03:36.681416 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 46564 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:03:36.683117 sshd[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:03:36.687316 systemd-logind[1555]: New session 7 of user core.
Aug 13 07:03:36.699180 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 13 07:03:36.755157 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 07:03:36.755603 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:03:37.524136 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 13 07:03:37.524401 (dockerd)[1787]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 13 07:03:38.286115 dockerd[1787]: time="2025-08-13T07:03:38.286022380Z" level=info msg="Starting up"
Aug 13 07:03:39.203826 dockerd[1787]: time="2025-08-13T07:03:39.203768845Z" level=info msg="Loading containers: start."
Aug 13 07:03:39.317884 kernel: Initializing XFRM netlink socket
Aug 13 07:03:39.430349 systemd-networkd[1248]: docker0: Link UP
Aug 13 07:03:39.454804 dockerd[1787]: time="2025-08-13T07:03:39.454655338Z" level=info msg="Loading containers: done."
Aug 13 07:03:39.471116 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2310716110-merged.mount: Deactivated successfully.
Aug 13 07:03:39.473398 dockerd[1787]: time="2025-08-13T07:03:39.473345607Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 07:03:39.473509 dockerd[1787]: time="2025-08-13T07:03:39.473486526Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Aug 13 07:03:39.473670 dockerd[1787]: time="2025-08-13T07:03:39.473639179Z" level=info msg="Daemon has completed initialization"
Aug 13 07:03:39.515584 dockerd[1787]: time="2025-08-13T07:03:39.515504347Z" level=info msg="API listen on /run/docker.sock"
Aug 13 07:03:39.515742 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 13 07:03:40.545467 containerd[1575]: time="2025-08-13T07:03:40.545402077Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\""
Aug 13 07:03:41.311761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount233710577.mount: Deactivated successfully.
Aug 13 07:03:43.464945 containerd[1575]: time="2025-08-13T07:03:43.464861775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:03:43.465690 containerd[1575]: time="2025-08-13T07:03:43.465604254Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759"
Aug 13 07:03:43.467086 containerd[1575]: time="2025-08-13T07:03:43.467044632Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:03:43.470194 containerd[1575]: time="2025-08-13T07:03:43.470113532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:03:43.471399 containerd[1575]: time="2025-08-13T07:03:43.471342295Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 2.925880321s"
Aug 13 07:03:43.471399 containerd[1575]: time="2025-08-13T07:03:43.471386324Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\""
Aug 13 07:03:43.472405 containerd[1575]: time="2025-08-13T07:03:43.472364600Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\""
Aug 13 07:03:45.600329 containerd[1575]: time="2025-08-13T07:03:45.600216447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:03:45.601244 containerd[1575]: time="2025-08-13T07:03:45.600962894Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245"
Aug 13 07:03:45.602874 containerd[1575]: time="2025-08-13T07:03:45.602815297Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:03:45.606477 containerd[1575]: time="2025-08-13T07:03:45.606411175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:03:45.608202 containerd[1575]: time="2025-08-13T07:03:45.608127271Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 2.135722648s"
Aug 13 07:03:45.608202 containerd[1575]: time="2025-08-13T07:03:45.608188086Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\""
Aug 13 07:03:45.608824 containerd[1575]: time="2025-08-13T07:03:45.608795980Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\""
Aug 13 07:03:46.445180 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 07:03:46.466073 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:03:46.753703 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:03:46.759207 (kubelet)[2004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 07:03:47.190424 kubelet[2004]: E0813 07:03:47.190252 2004 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:03:47.197643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:03:47.198013 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:03:47.493796 containerd[1575]: time="2025-08-13T07:03:47.493529899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:03:47.494925 containerd[1575]: time="2025-08-13T07:03:47.494780346Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700"
Aug 13 07:03:47.496382 containerd[1575]: time="2025-08-13T07:03:47.496318669Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:03:47.502385 containerd[1575]: time="2025-08-13T07:03:47.502326317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:03:47.503464 containerd[1575]: time="2025-08-13T07:03:47.503416870Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 1.894588176s"
Aug 13 07:03:47.503464 containerd[1575]: time="2025-08-13T07:03:47.503458265Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\""
Aug 13 07:03:47.504109 containerd[1575]: time="2025-08-13T07:03:47.504074447Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\""
Aug 13 07:03:49.364964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount117818440.mount: Deactivated successfully.
Aug 13 07:03:50.792546 containerd[1575]: time="2025-08-13T07:03:50.792467138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:03:50.793292 containerd[1575]: time="2025-08-13T07:03:50.793211046Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612"
Aug 13 07:03:50.796445 containerd[1575]: time="2025-08-13T07:03:50.796384617Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:03:50.798761 containerd[1575]: time="2025-08-13T07:03:50.798729199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:03:50.799742 containerd[1575]: time="2025-08-13T07:03:50.799684615Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 3.295564373s"
Aug 13 07:03:50.799790 containerd[1575]: time="2025-08-13T07:03:50.799744586Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\""
Aug 13 07:03:50.800267 containerd[1575]: time="2025-08-13T07:03:50.800224485Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 07:03:51.416486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3493157713.mount: Deactivated successfully.
Aug 13 07:03:53.756726 containerd[1575]: time="2025-08-13T07:03:53.756650174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:03:53.757457 containerd[1575]: time="2025-08-13T07:03:53.757362773Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Aug 13 07:03:53.758632 containerd[1575]: time="2025-08-13T07:03:53.758591126Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:03:53.761523 containerd[1575]: time="2025-08-13T07:03:53.761486560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:03:53.762665 containerd[1575]: time="2025-08-13T07:03:53.762628332Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.962373864s"
Aug 13 07:03:53.762665 containerd[1575]: time="2025-08-13T07:03:53.762661802Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 07:03:53.763223 containerd[1575]: time="2025-08-13T07:03:53.763198621Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 07:03:54.302799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3327161533.mount: Deactivated successfully.
Aug 13 07:03:54.308711 containerd[1575]: time="2025-08-13T07:03:54.308653235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:03:54.309452 containerd[1575]: time="2025-08-13T07:03:54.309384869Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Aug 13 07:03:54.310662 containerd[1575]: time="2025-08-13T07:03:54.310622545Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:03:54.312925 containerd[1575]: time="2025-08-13T07:03:54.312900572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:03:54.313629 containerd[1575]: time="2025-08-13T07:03:54.313588843Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 550.259954ms"
Aug 13 07:03:54.313698 containerd[1575]: time="2025-08-13T07:03:54.313633660Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 13 07:03:54.314230 containerd[1575]: time="2025-08-13T07:03:54.314193614Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Aug 13 07:03:54.861190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount456231495.mount: Deactivated successfully.
Aug 13 07:03:56.957239 containerd[1575]: time="2025-08-13T07:03:56.957170378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:03:56.958017 containerd[1575]: time="2025-08-13T07:03:56.957948785Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
Aug 13 07:03:56.959267 containerd[1575]: time="2025-08-13T07:03:56.959232056Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:03:56.962647 containerd[1575]: time="2025-08-13T07:03:56.962609054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:03:56.963979 containerd[1575]: time="2025-08-13T07:03:56.963934862Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.649703241s"
Aug 13 07:03:56.963979 containerd[1575]: time="2025-08-13T07:03:56.963972247Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Aug 13 07:03:57.448287 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 13 07:03:57.458123 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:03:57.634690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:03:57.640033 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 07:03:57.680517 kubelet[2169]: E0813 07:03:57.680446 2169 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:03:57.684806 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:03:57.685489 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:03:59.354535 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:03:59.364039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:03:59.388836 systemd[1]: Reloading requested from client PID 2191 ('systemctl') (unit session-7.scope)...
Aug 13 07:03:59.388875 systemd[1]: Reloading...
Aug 13 07:03:59.476930 zram_generator::config[2233]: No configuration found.
Aug 13 07:03:59.707573 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:03:59.794959 systemd[1]: Reloading finished in 405 ms.
Aug 13 07:03:59.838724 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 07:03:59.838869 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 07:03:59.839396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:03:59.842507 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:04:00.015714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:04:00.021918 (kubelet)[2291]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:04:00.064444 kubelet[2291]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:04:00.064444 kubelet[2291]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 07:04:00.064444 kubelet[2291]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 07:04:00.064939 kubelet[2291]: I0813 07:04:00.064510 2291 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:04:00.480595 kubelet[2291]: I0813 07:04:00.480443 2291 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 07:04:00.480595 kubelet[2291]: I0813 07:04:00.480486 2291 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:04:00.480834 kubelet[2291]: I0813 07:04:00.480793 2291 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 07:04:00.500034 kubelet[2291]: E0813 07:04:00.499989 2291 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:04:00.500806 kubelet[2291]: I0813 07:04:00.500773 2291 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:04:00.513649 kubelet[2291]: E0813 07:04:00.513612 2291 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:04:00.513649 kubelet[2291]: I0813 07:04:00.513642 2291 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:04:00.522795 kubelet[2291]: I0813 07:04:00.522740 2291 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:04:00.523666 kubelet[2291]: I0813 07:04:00.523640 2291 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 07:04:00.523893 kubelet[2291]: I0813 07:04:00.523831 2291 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:04:00.524095 kubelet[2291]: I0813 07:04:00.523897 2291 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} Aug 13 07:04:00.524302 kubelet[2291]: I0813 07:04:00.524115 2291 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:04:00.524302 kubelet[2291]: I0813 07:04:00.524125 2291 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 07:04:00.524345 kubelet[2291]: I0813 07:04:00.524305 2291 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:04:00.526833 kubelet[2291]: I0813 07:04:00.526783 2291 kubelet.go:408] "Attempting to sync node with API server" Aug 13 07:04:00.526833 kubelet[2291]: I0813 07:04:00.526823 2291 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:04:00.527041 kubelet[2291]: I0813 07:04:00.526884 2291 kubelet.go:314] "Adding apiserver pod source" Aug 13 07:04:00.527041 kubelet[2291]: I0813 07:04:00.526924 2291 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:04:00.531105 kubelet[2291]: I0813 07:04:00.530787 2291 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:04:00.531895 kubelet[2291]: I0813 07:04:00.531339 2291 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:04:00.535797 kubelet[2291]: W0813 07:04:00.535752 2291 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Aug 13 07:04:00.536767 kubelet[2291]: W0813 07:04:00.536695 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Aug 13 07:04:00.536767 kubelet[2291]: W0813 07:04:00.536734 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Aug 13 07:04:00.536997 kubelet[2291]: E0813 07:04:00.536792 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:04:00.536997 kubelet[2291]: E0813 07:04:00.536826 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:04:00.538471 kubelet[2291]: I0813 07:04:00.537970 2291 server.go:1274] "Started kubelet" Aug 13 07:04:00.538471 kubelet[2291]: I0813 07:04:00.538062 2291 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:04:00.540751 kubelet[2291]: I0813 07:04:00.539747 2291 server.go:449] "Adding debug handlers to kubelet server" Aug 13 07:04:00.540751 kubelet[2291]: I0813 07:04:00.540033 2291 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:04:00.543911 kubelet[2291]: I0813 07:04:00.542897 2291 ratelimit.go:55] "Setting rate 
limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:04:00.543911 kubelet[2291]: I0813 07:04:00.543159 2291 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:04:00.543911 kubelet[2291]: I0813 07:04:00.543581 2291 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:04:00.544163 kubelet[2291]: E0813 07:04:00.542012 2291 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.23:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.23:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b41a13bb0f880 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 07:04:00.537933952 +0000 UTC m=+0.510300964,LastTimestamp:2025-08-13 07:04:00.537933952 +0000 UTC m=+0.510300964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 07:04:00.545596 kubelet[2291]: I0813 07:04:00.545572 2291 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 07:04:00.545741 kubelet[2291]: I0813 07:04:00.545717 2291 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 07:04:00.545819 kubelet[2291]: I0813 07:04:00.545795 2291 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:04:00.546237 kubelet[2291]: W0813 07:04:00.546196 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
10.0.0.23:6443: connect: connection refused Aug 13 07:04:00.546378 kubelet[2291]: E0813 07:04:00.546340 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="200ms" Aug 13 07:04:00.546378 kubelet[2291]: E0813 07:04:00.546348 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:04:00.546378 kubelet[2291]: E0813 07:04:00.546287 2291 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:04:00.546871 kubelet[2291]: I0813 07:04:00.546783 2291 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:04:00.546993 kubelet[2291]: I0813 07:04:00.546909 2291 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:04:00.547817 kubelet[2291]: E0813 07:04:00.547789 2291 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:04:00.551711 kubelet[2291]: I0813 07:04:00.551682 2291 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:04:00.566411 kubelet[2291]: I0813 07:04:00.566352 2291 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:04:00.567958 kubelet[2291]: I0813 07:04:00.567928 2291 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 07:04:00.568951 kubelet[2291]: I0813 07:04:00.568063 2291 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 07:04:00.568951 kubelet[2291]: I0813 07:04:00.568106 2291 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 07:04:00.568951 kubelet[2291]: E0813 07:04:00.568176 2291 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:04:00.579864 kubelet[2291]: W0813 07:04:00.579670 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Aug 13 07:04:00.579864 kubelet[2291]: E0813 07:04:00.579737 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:04:00.583229 kubelet[2291]: I0813 07:04:00.583199 2291 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 07:04:00.583229 kubelet[2291]: I0813 07:04:00.583226 2291 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 07:04:00.583304 kubelet[2291]: I0813 07:04:00.583257 2291 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:04:00.646628 kubelet[2291]: E0813 07:04:00.646574 2291 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:04:00.668927 kubelet[2291]: E0813 07:04:00.668869 2291 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 07:04:00.747397 kubelet[2291]: E0813 07:04:00.747229 2291 kubelet_node_status.go:453] 
"Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:04:00.747791 kubelet[2291]: E0813 07:04:00.747754 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="400ms" Aug 13 07:04:00.848173 kubelet[2291]: E0813 07:04:00.848086 2291 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:04:00.848454 kubelet[2291]: I0813 07:04:00.848424 2291 policy_none.go:49] "None policy: Start" Aug 13 07:04:00.849311 kubelet[2291]: I0813 07:04:00.849274 2291 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 07:04:00.849311 kubelet[2291]: I0813 07:04:00.849312 2291 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:04:00.869588 kubelet[2291]: E0813 07:04:00.869534 2291 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 07:04:00.907745 kubelet[2291]: I0813 07:04:00.906878 2291 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:04:00.907745 kubelet[2291]: I0813 07:04:00.907111 2291 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:04:00.907745 kubelet[2291]: I0813 07:04:00.907130 2291 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:04:00.907745 kubelet[2291]: I0813 07:04:00.907613 2291 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:04:00.908928 kubelet[2291]: E0813 07:04:00.908904 2291 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 13 07:04:01.008891 kubelet[2291]: I0813 07:04:01.008735 2291 
kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:04:01.009509 kubelet[2291]: E0813 07:04:01.009359 2291 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Aug 13 07:04:01.148943 kubelet[2291]: E0813 07:04:01.148896 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="800ms" Aug 13 07:04:01.211369 kubelet[2291]: I0813 07:04:01.211306 2291 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:04:01.211699 kubelet[2291]: E0813 07:04:01.211668 2291 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Aug 13 07:04:01.349923 kubelet[2291]: I0813 07:04:01.349780 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 07:04:01.349923 kubelet[2291]: I0813 07:04:01.349817 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/861ea5b816949d269e3052751c9d8ab8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"861ea5b816949d269e3052751c9d8ab8\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:04:01.349923 kubelet[2291]: I0813 07:04:01.349860 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/861ea5b816949d269e3052751c9d8ab8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"861ea5b816949d269e3052751c9d8ab8\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:04:01.349923 kubelet[2291]: I0813 07:04:01.349877 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:04:01.349923 kubelet[2291]: I0813 07:04:01.349899 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:04:01.350141 kubelet[2291]: I0813 07:04:01.349929 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:04:01.350141 kubelet[2291]: I0813 07:04:01.349951 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:04:01.350141 kubelet[2291]: I0813 07:04:01.349969 2291 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/861ea5b816949d269e3052751c9d8ab8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"861ea5b816949d269e3052751c9d8ab8\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:04:01.350141 kubelet[2291]: I0813 07:04:01.350022 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:04:01.576593 kubelet[2291]: E0813 07:04:01.576551 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:01.577351 containerd[1575]: time="2025-08-13T07:04:01.577301090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:861ea5b816949d269e3052751c9d8ab8,Namespace:kube-system,Attempt:0,}" Aug 13 07:04:01.578559 kubelet[2291]: E0813 07:04:01.578521 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:01.579038 kubelet[2291]: E0813 07:04:01.579010 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:01.579098 containerd[1575]: time="2025-08-13T07:04:01.579069750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}" Aug 13 07:04:01.579607 containerd[1575]: time="2025-08-13T07:04:01.579552172Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}" Aug 13 07:04:01.580939 kubelet[2291]: W0813 07:04:01.580901 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Aug 13 07:04:01.581140 kubelet[2291]: E0813 07:04:01.580939 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:04:01.613364 kubelet[2291]: I0813 07:04:01.613202 2291 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:04:01.613641 kubelet[2291]: E0813 07:04:01.613580 2291 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Aug 13 07:04:01.847506 kubelet[2291]: W0813 07:04:01.847408 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Aug 13 07:04:01.847506 kubelet[2291]: E0813 07:04:01.847512 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:04:01.950338 kubelet[2291]: E0813 07:04:01.950157 2291 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="1.6s" Aug 13 07:04:02.119121 kubelet[2291]: W0813 07:04:02.118973 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Aug 13 07:04:02.119121 kubelet[2291]: E0813 07:04:02.119110 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:04:02.123889 kubelet[2291]: W0813 07:04:02.123815 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Aug 13 07:04:02.123889 kubelet[2291]: E0813 07:04:02.123873 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:04:02.415694 kubelet[2291]: I0813 07:04:02.415522 2291 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:04:02.416211 kubelet[2291]: E0813 07:04:02.415913 2291 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 
10.0.0.23:6443: connect: connection refused" node="localhost" Aug 13 07:04:02.520142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2579227360.mount: Deactivated successfully. Aug 13 07:04:02.528835 containerd[1575]: time="2025-08-13T07:04:02.528768867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:04:02.529892 containerd[1575]: time="2025-08-13T07:04:02.529834134Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:04:02.530762 containerd[1575]: time="2025-08-13T07:04:02.530732516Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:04:02.531894 containerd[1575]: time="2025-08-13T07:04:02.531827603Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:04:02.532884 containerd[1575]: time="2025-08-13T07:04:02.532831789Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:04:02.533741 containerd[1575]: time="2025-08-13T07:04:02.533704010Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 13 07:04:02.534990 containerd[1575]: time="2025-08-13T07:04:02.534939391Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:04:02.541338 containerd[1575]: time="2025-08-13T07:04:02.541277294Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:04:02.542165 containerd[1575]: time="2025-08-13T07:04:02.542123985Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 963.002515ms" Aug 13 07:04:02.544431 containerd[1575]: time="2025-08-13T07:04:02.544381415Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 964.723585ms" Aug 13 07:04:02.545170 containerd[1575]: time="2025-08-13T07:04:02.545067795Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 967.680984ms" Aug 13 07:04:02.645610 kubelet[2291]: E0813 07:04:02.645553 2291 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:04:02.780487 containerd[1575]: time="2025-08-13T07:04:02.780030820Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:04:02.780487 containerd[1575]: time="2025-08-13T07:04:02.780316863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:04:02.780487 containerd[1575]: time="2025-08-13T07:04:02.780328540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:04:02.781090 containerd[1575]: time="2025-08-13T07:04:02.780420472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:04:02.783544 containerd[1575]: time="2025-08-13T07:04:02.783010512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:04:02.783544 containerd[1575]: time="2025-08-13T07:04:02.783184185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:04:02.784446 containerd[1575]: time="2025-08-13T07:04:02.784387052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:04:02.784696 containerd[1575]: time="2025-08-13T07:04:02.784604685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:04:02.785884 containerd[1575]: time="2025-08-13T07:04:02.785452530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:04:02.785884 containerd[1575]: time="2025-08-13T07:04:02.785510354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:04:02.785884 containerd[1575]: time="2025-08-13T07:04:02.785604402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:04:02.785884 containerd[1575]: time="2025-08-13T07:04:02.785695212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:04:02.864119 containerd[1575]: time="2025-08-13T07:04:02.864062347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c600b5a79f404d11e770f2ca42d21006848e26984cfb916569fc73e80ee3c41\"" Aug 13 07:04:02.865318 kubelet[2291]: E0813 07:04:02.865201 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:02.868254 containerd[1575]: time="2025-08-13T07:04:02.868025819Z" level=info msg="CreateContainer within sandbox \"3c600b5a79f404d11e770f2ca42d21006848e26984cfb916569fc73e80ee3c41\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 07:04:02.868624 containerd[1575]: time="2025-08-13T07:04:02.868552439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfa2c84db5e0842eb5343237d5e35d00aff1b85ffa041636ac9f18b11b7eadf9\"" Aug 13 07:04:02.869231 kubelet[2291]: E0813 07:04:02.869189 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:02.871102 containerd[1575]: time="2025-08-13T07:04:02.871079313Z" level=info msg="CreateContainer within sandbox 
\"bfa2c84db5e0842eb5343237d5e35d00aff1b85ffa041636ac9f18b11b7eadf9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 07:04:02.900117 containerd[1575]: time="2025-08-13T07:04:02.900068188Z" level=info msg="CreateContainer within sandbox \"3c600b5a79f404d11e770f2ca42d21006848e26984cfb916569fc73e80ee3c41\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"634e17120f56147873b031ade107306157164ebb289b86337afb68d17c0ee86d\"" Aug 13 07:04:02.901237 containerd[1575]: time="2025-08-13T07:04:02.901201905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:861ea5b816949d269e3052751c9d8ab8,Namespace:kube-system,Attempt:0,} returns sandbox id \"febfc1c71abc52e7bd97e89b8cdb873daae8a36f35a25b00356eb1b6da5c5bfa\"" Aug 13 07:04:02.901460 containerd[1575]: time="2025-08-13T07:04:02.901208560Z" level=info msg="StartContainer for \"634e17120f56147873b031ade107306157164ebb289b86337afb68d17c0ee86d\"" Aug 13 07:04:02.902017 kubelet[2291]: E0813 07:04:02.901991 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:02.903672 containerd[1575]: time="2025-08-13T07:04:02.903644814Z" level=info msg="CreateContainer within sandbox \"febfc1c71abc52e7bd97e89b8cdb873daae8a36f35a25b00356eb1b6da5c5bfa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 07:04:02.904913 containerd[1575]: time="2025-08-13T07:04:02.904879825Z" level=info msg="CreateContainer within sandbox \"bfa2c84db5e0842eb5343237d5e35d00aff1b85ffa041636ac9f18b11b7eadf9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c10fe1849557a7c6d985c8ccb909b3c330705e91fa000511f6124b93bdfd6b6f\"" Aug 13 07:04:02.912364 containerd[1575]: time="2025-08-13T07:04:02.912318327Z" level=info msg="StartContainer for 
\"c10fe1849557a7c6d985c8ccb909b3c330705e91fa000511f6124b93bdfd6b6f\"" Aug 13 07:04:02.920779 containerd[1575]: time="2025-08-13T07:04:02.920722656Z" level=info msg="CreateContainer within sandbox \"febfc1c71abc52e7bd97e89b8cdb873daae8a36f35a25b00356eb1b6da5c5bfa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"13a9389c9e72daec97b6bc993bc12bb21baf6430c6fd055d27569a729f325b4b\"" Aug 13 07:04:02.921455 containerd[1575]: time="2025-08-13T07:04:02.921427228Z" level=info msg="StartContainer for \"13a9389c9e72daec97b6bc993bc12bb21baf6430c6fd055d27569a729f325b4b\"" Aug 13 07:04:02.982071 containerd[1575]: time="2025-08-13T07:04:02.981914765Z" level=info msg="StartContainer for \"c10fe1849557a7c6d985c8ccb909b3c330705e91fa000511f6124b93bdfd6b6f\" returns successfully" Aug 13 07:04:02.999493 containerd[1575]: time="2025-08-13T07:04:02.999454226Z" level=info msg="StartContainer for \"13a9389c9e72daec97b6bc993bc12bb21baf6430c6fd055d27569a729f325b4b\" returns successfully" Aug 13 07:04:03.002997 containerd[1575]: time="2025-08-13T07:04:03.002881520Z" level=info msg="StartContainer for \"634e17120f56147873b031ade107306157164ebb289b86337afb68d17c0ee86d\" returns successfully" Aug 13 07:04:03.591200 kubelet[2291]: E0813 07:04:03.591061 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:03.599483 kubelet[2291]: E0813 07:04:03.598646 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:03.601168 kubelet[2291]: E0813 07:04:03.600816 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:04.017991 kubelet[2291]: I0813 07:04:04.017787 2291 
kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:04:04.277342 kubelet[2291]: E0813 07:04:04.277091 2291 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 13 07:04:04.409228 kubelet[2291]: E0813 07:04:04.409075 2291 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.185b41a13bb0f880 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 07:04:00.537933952 +0000 UTC m=+0.510300964,LastTimestamp:2025-08-13 07:04:00.537933952 +0000 UTC m=+0.510300964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 07:04:04.467567 kubelet[2291]: I0813 07:04:04.467373 2291 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 07:04:04.467567 kubelet[2291]: E0813 07:04:04.467421 2291 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 13 07:04:04.475364 kubelet[2291]: E0813 07:04:04.475229 2291 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:04:04.575819 kubelet[2291]: E0813 07:04:04.575666 2291 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:04:04.603122 kubelet[2291]: E0813 07:04:04.603074 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 
07:04:05.096494 kubelet[2291]: E0813 07:04:05.096455 2291 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Aug 13 07:04:05.096718 kubelet[2291]: E0813 07:04:05.096672 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:05.530001 kubelet[2291]: I0813 07:04:05.529814 2291 apiserver.go:52] "Watching apiserver" Aug 13 07:04:05.546312 kubelet[2291]: I0813 07:04:05.546268 2291 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 07:04:05.609726 kubelet[2291]: E0813 07:04:05.609667 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:06.465914 systemd[1]: Reloading requested from client PID 2568 ('systemctl') (unit session-7.scope)... Aug 13 07:04:06.465936 systemd[1]: Reloading... Aug 13 07:04:06.552886 zram_generator::config[2610]: No configuration found. Aug 13 07:04:06.605693 kubelet[2291]: E0813 07:04:06.605655 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:06.683017 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:04:06.774740 systemd[1]: Reloading finished in 308 ms. Aug 13 07:04:06.820889 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 13 07:04:06.821116 kubelet[2291]: I0813 07:04:06.820899 2291 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:04:06.848420 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:04:06.848930 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:04:06.861116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:04:07.054413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:04:07.059918 (kubelet)[2662]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:04:07.100045 kubelet[2662]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:04:07.101113 kubelet[2662]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 07:04:07.101113 kubelet[2662]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 07:04:07.101113 kubelet[2662]: I0813 07:04:07.100625 2662 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:04:07.108616 kubelet[2662]: I0813 07:04:07.108572 2662 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 07:04:07.108616 kubelet[2662]: I0813 07:04:07.108599 2662 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:04:07.108876 kubelet[2662]: I0813 07:04:07.108803 2662 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 07:04:07.110168 kubelet[2662]: I0813 07:04:07.110105 2662 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 07:04:07.273222 kubelet[2662]: I0813 07:04:07.273151 2662 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:04:07.276093 kubelet[2662]: E0813 07:04:07.276065 2662 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:04:07.276175 kubelet[2662]: I0813 07:04:07.276094 2662 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:04:07.282294 kubelet[2662]: I0813 07:04:07.282259 2662 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:04:07.282886 kubelet[2662]: I0813 07:04:07.282865 2662 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 07:04:07.283054 kubelet[2662]: I0813 07:04:07.283000 2662 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:04:07.283260 kubelet[2662]: I0813 07:04:07.283051 2662 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} Aug 13 07:04:07.283382 kubelet[2662]: I0813 07:04:07.283264 2662 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:04:07.283382 kubelet[2662]: I0813 07:04:07.283274 2662 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 07:04:07.283382 kubelet[2662]: I0813 07:04:07.283304 2662 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:04:07.283454 kubelet[2662]: I0813 07:04:07.283438 2662 kubelet.go:408] "Attempting to sync node with API server" Aug 13 07:04:07.283454 kubelet[2662]: I0813 07:04:07.283454 2662 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:04:07.283494 kubelet[2662]: I0813 07:04:07.283489 2662 kubelet.go:314] "Adding apiserver pod source" Aug 13 07:04:07.283525 kubelet[2662]: I0813 07:04:07.283505 2662 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:04:07.285877 kubelet[2662]: I0813 07:04:07.284531 2662 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:04:07.285877 kubelet[2662]: I0813 07:04:07.284932 2662 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:04:07.285877 kubelet[2662]: I0813 07:04:07.285433 2662 server.go:1274] "Started kubelet" Aug 13 07:04:07.285877 kubelet[2662]: I0813 07:04:07.285700 2662 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:04:07.285877 kubelet[2662]: I0813 07:04:07.285737 2662 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:04:07.286114 kubelet[2662]: I0813 07:04:07.286093 2662 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:04:07.286813 kubelet[2662]: I0813 07:04:07.286787 2662 server.go:449] "Adding debug handlers to kubelet server" Aug 13 
07:04:07.289859 kubelet[2662]: I0813 07:04:07.288701 2662 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:04:07.289859 kubelet[2662]: E0813 07:04:07.289711 2662 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:04:07.294944 kubelet[2662]: I0813 07:04:07.294909 2662 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:04:07.296346 kubelet[2662]: I0813 07:04:07.296313 2662 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 07:04:07.296609 kubelet[2662]: I0813 07:04:07.296592 2662 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 07:04:07.300187 kubelet[2662]: I0813 07:04:07.298692 2662 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:04:07.300187 kubelet[2662]: I0813 07:04:07.298814 2662 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:04:07.300187 kubelet[2662]: I0813 07:04:07.300025 2662 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:04:07.303054 kubelet[2662]: I0813 07:04:07.302254 2662 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:04:07.310572 kubelet[2662]: I0813 07:04:07.310459 2662 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:04:07.313401 kubelet[2662]: I0813 07:04:07.313370 2662 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 07:04:07.313459 kubelet[2662]: I0813 07:04:07.313404 2662 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 07:04:07.313459 kubelet[2662]: I0813 07:04:07.313431 2662 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 07:04:07.313501 kubelet[2662]: E0813 07:04:07.313482 2662 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:04:07.349730 kubelet[2662]: I0813 07:04:07.349689 2662 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 07:04:07.349730 kubelet[2662]: I0813 07:04:07.349713 2662 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 07:04:07.349730 kubelet[2662]: I0813 07:04:07.349744 2662 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:04:07.349984 kubelet[2662]: I0813 07:04:07.349931 2662 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 07:04:07.349984 kubelet[2662]: I0813 07:04:07.349941 2662 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 07:04:07.349984 kubelet[2662]: I0813 07:04:07.349959 2662 policy_none.go:49] "None policy: Start" Aug 13 07:04:07.350583 kubelet[2662]: I0813 07:04:07.350564 2662 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 07:04:07.350583 kubelet[2662]: I0813 07:04:07.350584 2662 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:04:07.350711 kubelet[2662]: I0813 07:04:07.350696 2662 state_mem.go:75] "Updated machine memory state" Aug 13 07:04:07.352398 kubelet[2662]: I0813 07:04:07.352359 2662 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:04:07.352569 kubelet[2662]: I0813 07:04:07.352562 2662 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:04:07.352593 kubelet[2662]: I0813 07:04:07.352574 2662 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:04:07.353323 kubelet[2662]: I0813 07:04:07.353292 2662 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:04:07.422254 kubelet[2662]: E0813 07:04:07.422202 2662 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 07:04:07.458679 kubelet[2662]: I0813 07:04:07.458640 2662 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:04:07.466772 kubelet[2662]: I0813 07:04:07.466726 2662 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Aug 13 07:04:07.466971 kubelet[2662]: I0813 07:04:07.466837 2662 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 07:04:07.468494 sudo[2697]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 07:04:07.469070 sudo[2697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 13 07:04:07.500523 kubelet[2662]: I0813 07:04:07.500477 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/861ea5b816949d269e3052751c9d8ab8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"861ea5b816949d269e3052751c9d8ab8\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:04:07.500523 kubelet[2662]: I0813 07:04:07.500520 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:04:07.500523 kubelet[2662]: I0813 07:04:07.500541 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:04:07.500523 kubelet[2662]: I0813 07:04:07.500557 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/861ea5b816949d269e3052751c9d8ab8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"861ea5b816949d269e3052751c9d8ab8\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:04:07.500801 kubelet[2662]: I0813 07:04:07.500600 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/861ea5b816949d269e3052751c9d8ab8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"861ea5b816949d269e3052751c9d8ab8\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:04:07.500801 kubelet[2662]: I0813 07:04:07.500661 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:04:07.500801 kubelet[2662]: I0813 07:04:07.500698 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:04:07.500801 kubelet[2662]: I0813 07:04:07.500726 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:04:07.500801 kubelet[2662]: I0813 07:04:07.500743 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 07:04:07.725936 kubelet[2662]: E0813 07:04:07.724246 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:07.725936 kubelet[2662]: E0813 07:04:07.724246 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:07.725936 kubelet[2662]: E0813 07:04:07.724282 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:08.075956 sudo[2697]: pam_unix(sudo:session): session closed for user root Aug 13 07:04:08.284583 kubelet[2662]: I0813 07:04:08.284527 2662 apiserver.go:52] "Watching apiserver" Aug 13 07:04:08.296862 kubelet[2662]: I0813 07:04:08.296758 2662 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 07:04:08.326977 kubelet[2662]: E0813 07:04:08.326717 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:08.326977 
kubelet[2662]: E0813 07:04:08.326943 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:08.334165 kubelet[2662]: E0813 07:04:08.333948 2662 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 07:04:08.334431 kubelet[2662]: E0813 07:04:08.334349 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:08.348897 kubelet[2662]: I0813 07:04:08.348812 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.3487894599999999 podStartE2EDuration="1.34878946s" podCreationTimestamp="2025-08-13 07:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:04:08.348337867 +0000 UTC m=+1.283811414" watchObservedRunningTime="2025-08-13 07:04:08.34878946 +0000 UTC m=+1.284263007" Aug 13 07:04:08.364237 kubelet[2662]: I0813 07:04:08.363774 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.363743611 podStartE2EDuration="3.363743611s" podCreationTimestamp="2025-08-13 07:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:04:08.363713075 +0000 UTC m=+1.299186622" watchObservedRunningTime="2025-08-13 07:04:08.363743611 +0000 UTC m=+1.299217158" Aug 13 07:04:08.364237 kubelet[2662]: I0813 07:04:08.363988 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" 
podStartSLOduration=1.3639819389999999 podStartE2EDuration="1.363981939s" podCreationTimestamp="2025-08-13 07:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:04:08.356976194 +0000 UTC m=+1.292449741" watchObservedRunningTime="2025-08-13 07:04:08.363981939 +0000 UTC m=+1.299455487" Aug 13 07:04:09.327946 kubelet[2662]: E0813 07:04:09.327904 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:09.364077 sudo[1768]: pam_unix(sudo:session): session closed for user root Aug 13 07:04:09.366598 sshd[1760]: pam_unix(sshd:session): session closed for user core Aug 13 07:04:09.371151 systemd[1]: sshd@6-10.0.0.23:22-10.0.0.1:46564.service: Deactivated successfully. Aug 13 07:04:09.374094 systemd-logind[1555]: Session 7 logged out. Waiting for processes to exit. Aug 13 07:04:09.374099 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 07:04:09.375888 systemd-logind[1555]: Removed session 7. Aug 13 07:04:12.074273 kubelet[2662]: E0813 07:04:12.074194 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:12.876619 kubelet[2662]: I0813 07:04:12.876569 2662 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 07:04:12.877180 containerd[1575]: time="2025-08-13T07:04:12.877118616Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 13 07:04:12.877635 kubelet[2662]: I0813 07:04:12.877441 2662 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 07:04:13.758123 kubelet[2662]: E0813 07:04:13.758076 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:13.986792 kubelet[2662]: I0813 07:04:13.986730 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-host-proc-sys-kernel\") pod \"cilium-zlw4j\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") " pod="kube-system/cilium-zlw4j" Aug 13 07:04:13.986792 kubelet[2662]: I0813 07:04:13.986783 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a21f3382-b073-4f49-8fe5-7677d85787f4-hubble-tls\") pod \"cilium-zlw4j\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") " pod="kube-system/cilium-zlw4j" Aug 13 07:04:13.986792 kubelet[2662]: I0813 07:04:13.986805 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqp9r\" (UniqueName: \"kubernetes.io/projected/a21f3382-b073-4f49-8fe5-7677d85787f4-kube-api-access-tqp9r\") pod \"cilium-zlw4j\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") " pod="kube-system/cilium-zlw4j" Aug 13 07:04:13.986792 kubelet[2662]: I0813 07:04:13.986824 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3c3d73b7-18e4-4889-ae73-6c3998b43a1e-kube-proxy\") pod \"kube-proxy-z2pkl\" (UID: \"3c3d73b7-18e4-4889-ae73-6c3998b43a1e\") " pod="kube-system/kube-proxy-z2pkl" Aug 13 07:04:13.986792 kubelet[2662]: I0813 07:04:13.986855 2662 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-cilium-run\") pod \"cilium-zlw4j\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") " pod="kube-system/cilium-zlw4j" Aug 13 07:04:13.987182 kubelet[2662]: I0813 07:04:13.986935 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a21f3382-b073-4f49-8fe5-7677d85787f4-clustermesh-secrets\") pod \"cilium-zlw4j\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") " pod="kube-system/cilium-zlw4j" Aug 13 07:04:13.987182 kubelet[2662]: I0813 07:04:13.987005 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-bpf-maps\") pod \"cilium-zlw4j\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") " pod="kube-system/cilium-zlw4j" Aug 13 07:04:13.987182 kubelet[2662]: I0813 07:04:13.987040 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-xtables-lock\") pod \"cilium-zlw4j\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") " pod="kube-system/cilium-zlw4j" Aug 13 07:04:13.987182 kubelet[2662]: I0813 07:04:13.987072 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cllj\" (UniqueName: \"kubernetes.io/projected/3c3d73b7-18e4-4889-ae73-6c3998b43a1e-kube-api-access-5cllj\") pod \"kube-proxy-z2pkl\" (UID: \"3c3d73b7-18e4-4889-ae73-6c3998b43a1e\") " pod="kube-system/kube-proxy-z2pkl" Aug 13 07:04:13.987182 kubelet[2662]: I0813 07:04:13.987114 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/3c3d73b7-18e4-4889-ae73-6c3998b43a1e-xtables-lock\") pod \"kube-proxy-z2pkl\" (UID: \"3c3d73b7-18e4-4889-ae73-6c3998b43a1e\") " pod="kube-system/kube-proxy-z2pkl" Aug 13 07:04:13.987182 kubelet[2662]: I0813 07:04:13.987134 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-hostproc\") pod \"cilium-zlw4j\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") " pod="kube-system/cilium-zlw4j" Aug 13 07:04:13.987313 kubelet[2662]: I0813 07:04:13.987150 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a21f3382-b073-4f49-8fe5-7677d85787f4-cilium-config-path\") pod \"cilium-zlw4j\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") " pod="kube-system/cilium-zlw4j" Aug 13 07:04:13.987313 kubelet[2662]: I0813 07:04:13.987164 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-host-proc-sys-net\") pod \"cilium-zlw4j\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") " pod="kube-system/cilium-zlw4j" Aug 13 07:04:13.987313 kubelet[2662]: I0813 07:04:13.987180 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-cilium-cgroup\") pod \"cilium-zlw4j\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") " pod="kube-system/cilium-zlw4j" Aug 13 07:04:13.987313 kubelet[2662]: I0813 07:04:13.987194 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-lib-modules\") pod \"cilium-zlw4j\" (UID: 
\"a21f3382-b073-4f49-8fe5-7677d85787f4\") " pod="kube-system/cilium-zlw4j" Aug 13 07:04:13.987313 kubelet[2662]: I0813 07:04:13.987209 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c3d73b7-18e4-4889-ae73-6c3998b43a1e-lib-modules\") pod \"kube-proxy-z2pkl\" (UID: \"3c3d73b7-18e4-4889-ae73-6c3998b43a1e\") " pod="kube-system/kube-proxy-z2pkl" Aug 13 07:04:13.987313 kubelet[2662]: I0813 07:04:13.987225 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-etc-cni-netd\") pod \"cilium-zlw4j\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") " pod="kube-system/cilium-zlw4j" Aug 13 07:04:13.987438 kubelet[2662]: I0813 07:04:13.987241 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-cni-path\") pod \"cilium-zlw4j\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") " pod="kube-system/cilium-zlw4j" Aug 13 07:04:14.087978 kubelet[2662]: I0813 07:04:14.087902 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rnfj\" (UniqueName: \"kubernetes.io/projected/7aeb2336-7c3a-4ae6-8767-81d84e19de2d-kube-api-access-5rnfj\") pod \"cilium-operator-5d85765b45-4nmhg\" (UID: \"7aeb2336-7c3a-4ae6-8767-81d84e19de2d\") " pod="kube-system/cilium-operator-5d85765b45-4nmhg" Aug 13 07:04:14.088270 kubelet[2662]: I0813 07:04:14.088055 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7aeb2336-7c3a-4ae6-8767-81d84e19de2d-cilium-config-path\") pod \"cilium-operator-5d85765b45-4nmhg\" (UID: \"7aeb2336-7c3a-4ae6-8767-81d84e19de2d\") " 
pod="kube-system/cilium-operator-5d85765b45-4nmhg" Aug 13 07:04:14.176082 kubelet[2662]: E0813 07:04:14.176022 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:14.176614 containerd[1575]: time="2025-08-13T07:04:14.176557675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z2pkl,Uid:3c3d73b7-18e4-4889-ae73-6c3998b43a1e,Namespace:kube-system,Attempt:0,}" Aug 13 07:04:14.179965 kubelet[2662]: E0813 07:04:14.179904 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:14.180393 containerd[1575]: time="2025-08-13T07:04:14.180339655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zlw4j,Uid:a21f3382-b073-4f49-8fe5-7677d85787f4,Namespace:kube-system,Attempt:0,}" Aug 13 07:04:14.215803 containerd[1575]: time="2025-08-13T07:04:14.215665998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:04:14.215803 containerd[1575]: time="2025-08-13T07:04:14.215738619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:04:14.215803 containerd[1575]: time="2025-08-13T07:04:14.215753209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:04:14.216238 containerd[1575]: time="2025-08-13T07:04:14.216166698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:04:14.220922 containerd[1575]: time="2025-08-13T07:04:14.220202526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:04:14.221079 containerd[1575]: time="2025-08-13T07:04:14.221048073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:04:14.221171 containerd[1575]: time="2025-08-13T07:04:14.221140195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:04:14.221344 containerd[1575]: time="2025-08-13T07:04:14.221309507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:04:14.261038 containerd[1575]: time="2025-08-13T07:04:14.260754520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zlw4j,Uid:a21f3382-b073-4f49-8fe5-7677d85787f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"056bd671f1bc43a9e604d7a5f82ff2ee376e25fdb5d95590f78c13682523b9e1\"" Aug 13 07:04:14.262077 kubelet[2662]: E0813 07:04:14.262056 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:14.263835 containerd[1575]: time="2025-08-13T07:04:14.263633113Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 07:04:14.266150 containerd[1575]: time="2025-08-13T07:04:14.266119511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z2pkl,Uid:3c3d73b7-18e4-4889-ae73-6c3998b43a1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b924d231b07b969336c22103b0efcc25147e5635c9503f55bd48d1a707b704d9\"" Aug 13 07:04:14.267061 kubelet[2662]: E0813 07:04:14.267038 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Aug 13 07:04:14.269059 containerd[1575]: time="2025-08-13T07:04:14.269014367Z" level=info msg="CreateContainer within sandbox \"b924d231b07b969336c22103b0efcc25147e5635c9503f55bd48d1a707b704d9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 07:04:14.288889 containerd[1575]: time="2025-08-13T07:04:14.288811424Z" level=info msg="CreateContainer within sandbox \"b924d231b07b969336c22103b0efcc25147e5635c9503f55bd48d1a707b704d9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5edb6b575cf4c76e8f03d9b5d2d615548f1540aea4fae85167101d61c2e98359\"" Aug 13 07:04:14.289710 containerd[1575]: time="2025-08-13T07:04:14.289348109Z" level=info msg="StartContainer for \"5edb6b575cf4c76e8f03d9b5d2d615548f1540aea4fae85167101d61c2e98359\"" Aug 13 07:04:14.329859 kubelet[2662]: E0813 07:04:14.329805 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:14.331430 containerd[1575]: time="2025-08-13T07:04:14.331375930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-4nmhg,Uid:7aeb2336-7c3a-4ae6-8767-81d84e19de2d,Namespace:kube-system,Attempt:0,}" Aug 13 07:04:14.341048 kubelet[2662]: E0813 07:04:14.340926 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:14.358904 containerd[1575]: time="2025-08-13T07:04:14.358836335Z" level=info msg="StartContainer for \"5edb6b575cf4c76e8f03d9b5d2d615548f1540aea4fae85167101d61c2e98359\" returns successfully" Aug 13 07:04:14.364574 containerd[1575]: time="2025-08-13T07:04:14.364238572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:04:14.364574 containerd[1575]: time="2025-08-13T07:04:14.364316444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:04:14.364574 containerd[1575]: time="2025-08-13T07:04:14.364376059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:04:14.364574 containerd[1575]: time="2025-08-13T07:04:14.364472679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:04:14.419725 containerd[1575]: time="2025-08-13T07:04:14.419634359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-4nmhg,Uid:7aeb2336-7c3a-4ae6-8767-81d84e19de2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5a40b8b699a676391f1daf902e50c854c85f2ff289e77743a0db303ce7e10ee\"" Aug 13 07:04:14.420898 kubelet[2662]: E0813 07:04:14.420666 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:14.794736 kubelet[2662]: E0813 07:04:14.794576 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:15.343925 kubelet[2662]: E0813 07:04:15.343867 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:15.344577 kubelet[2662]: E0813 07:04:15.344551 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:15.354112 
kubelet[2662]: I0813 07:04:15.354013 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z2pkl" podStartSLOduration=2.35398578 podStartE2EDuration="2.35398578s" podCreationTimestamp="2025-08-13 07:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:04:15.353749592 +0000 UTC m=+8.289223139" watchObservedRunningTime="2025-08-13 07:04:15.35398578 +0000 UTC m=+8.289459327" Aug 13 07:04:16.346412 kubelet[2662]: E0813 07:04:16.346376 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:17.960987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1662191495.mount: Deactivated successfully. Aug 13 07:04:18.550005 update_engine[1556]: I20250813 07:04:18.549902 1556 update_attempter.cc:509] Updating boot flags... 
Aug 13 07:04:18.818945 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3069) Aug 13 07:04:18.880896 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3073) Aug 13 07:04:18.929872 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3073) Aug 13 07:04:19.693914 containerd[1575]: time="2025-08-13T07:04:19.693800498Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:04:19.694743 containerd[1575]: time="2025-08-13T07:04:19.694528930Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Aug 13 07:04:19.695940 containerd[1575]: time="2025-08-13T07:04:19.695857617Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:04:19.697964 containerd[1575]: time="2025-08-13T07:04:19.697921661Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.434253485s" Aug 13 07:04:19.697964 containerd[1575]: time="2025-08-13T07:04:19.697957834Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 07:04:19.699673 containerd[1575]: 
time="2025-08-13T07:04:19.699637030Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 07:04:19.701124 containerd[1575]: time="2025-08-13T07:04:19.701073475Z" level=info msg="CreateContainer within sandbox \"056bd671f1bc43a9e604d7a5f82ff2ee376e25fdb5d95590f78c13682523b9e1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 07:04:19.715093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2569551532.mount: Deactivated successfully. Aug 13 07:04:19.715770 containerd[1575]: time="2025-08-13T07:04:19.715680955Z" level=info msg="CreateContainer within sandbox \"056bd671f1bc43a9e604d7a5f82ff2ee376e25fdb5d95590f78c13682523b9e1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4042ad5066888ba176b40046126bdb0a845f7195b13158ab02dfabc7f384b361\"" Aug 13 07:04:19.716496 containerd[1575]: time="2025-08-13T07:04:19.716390259Z" level=info msg="StartContainer for \"4042ad5066888ba176b40046126bdb0a845f7195b13158ab02dfabc7f384b361\"" Aug 13 07:04:19.777723 containerd[1575]: time="2025-08-13T07:04:19.777660260Z" level=info msg="StartContainer for \"4042ad5066888ba176b40046126bdb0a845f7195b13158ab02dfabc7f384b361\" returns successfully" Aug 13 07:04:20.306395 containerd[1575]: time="2025-08-13T07:04:20.306307702Z" level=info msg="shim disconnected" id=4042ad5066888ba176b40046126bdb0a845f7195b13158ab02dfabc7f384b361 namespace=k8s.io Aug 13 07:04:20.306395 containerd[1575]: time="2025-08-13T07:04:20.306389978Z" level=warning msg="cleaning up after shim disconnected" id=4042ad5066888ba176b40046126bdb0a845f7195b13158ab02dfabc7f384b361 namespace=k8s.io Aug 13 07:04:20.306395 containerd[1575]: time="2025-08-13T07:04:20.306401832Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:04:20.355045 kubelet[2662]: E0813 07:04:20.355004 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:20.357086 containerd[1575]: time="2025-08-13T07:04:20.357041780Z" level=info msg="CreateContainer within sandbox \"056bd671f1bc43a9e604d7a5f82ff2ee376e25fdb5d95590f78c13682523b9e1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 07:04:20.374218 containerd[1575]: time="2025-08-13T07:04:20.374151813Z" level=info msg="CreateContainer within sandbox \"056bd671f1bc43a9e604d7a5f82ff2ee376e25fdb5d95590f78c13682523b9e1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d3eecb0475d033e674238c03dbeb29fcefc783247af1bfc247cb69d9547f53fd\"" Aug 13 07:04:20.377866 containerd[1575]: time="2025-08-13T07:04:20.374701590Z" level=info msg="StartContainer for \"d3eecb0475d033e674238c03dbeb29fcefc783247af1bfc247cb69d9547f53fd\"" Aug 13 07:04:20.453819 containerd[1575]: time="2025-08-13T07:04:20.453758281Z" level=info msg="StartContainer for \"d3eecb0475d033e674238c03dbeb29fcefc783247af1bfc247cb69d9547f53fd\" returns successfully" Aug 13 07:04:20.466639 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 07:04:20.467159 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:04:20.467302 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:04:20.474431 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Aug 13 07:04:20.494758 containerd[1575]: time="2025-08-13T07:04:20.494686680Z" level=info msg="shim disconnected" id=d3eecb0475d033e674238c03dbeb29fcefc783247af1bfc247cb69d9547f53fd namespace=k8s.io Aug 13 07:04:20.494979 containerd[1575]: time="2025-08-13T07:04:20.494763935Z" level=warning msg="cleaning up after shim disconnected" id=d3eecb0475d033e674238c03dbeb29fcefc783247af1bfc247cb69d9547f53fd namespace=k8s.io Aug 13 07:04:20.494979 containerd[1575]: time="2025-08-13T07:04:20.494777874Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:04:20.495979 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:04:20.712388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4042ad5066888ba176b40046126bdb0a845f7195b13158ab02dfabc7f384b361-rootfs.mount: Deactivated successfully. Aug 13 07:04:21.358927 kubelet[2662]: E0813 07:04:21.358794 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:21.360707 containerd[1575]: time="2025-08-13T07:04:21.360658025Z" level=info msg="CreateContainer within sandbox \"056bd671f1bc43a9e604d7a5f82ff2ee376e25fdb5d95590f78c13682523b9e1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 07:04:21.381992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1530320058.mount: Deactivated successfully. 
Aug 13 07:04:21.384583 containerd[1575]: time="2025-08-13T07:04:21.384515866Z" level=info msg="CreateContainer within sandbox \"056bd671f1bc43a9e604d7a5f82ff2ee376e25fdb5d95590f78c13682523b9e1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a20b968f8bc93d003a5b4c4501ddc5af80274194e64cdeb4132c5545e78053b7\"" Aug 13 07:04:21.385612 containerd[1575]: time="2025-08-13T07:04:21.385366410Z" level=info msg="StartContainer for \"a20b968f8bc93d003a5b4c4501ddc5af80274194e64cdeb4132c5545e78053b7\"" Aug 13 07:04:21.460515 containerd[1575]: time="2025-08-13T07:04:21.460463933Z" level=info msg="StartContainer for \"a20b968f8bc93d003a5b4c4501ddc5af80274194e64cdeb4132c5545e78053b7\" returns successfully" Aug 13 07:04:21.513277 containerd[1575]: time="2025-08-13T07:04:21.513170545Z" level=info msg="shim disconnected" id=a20b968f8bc93d003a5b4c4501ddc5af80274194e64cdeb4132c5545e78053b7 namespace=k8s.io Aug 13 07:04:21.513277 containerd[1575]: time="2025-08-13T07:04:21.513251878Z" level=warning msg="cleaning up after shim disconnected" id=a20b968f8bc93d003a5b4c4501ddc5af80274194e64cdeb4132c5545e78053b7 namespace=k8s.io Aug 13 07:04:21.513277 containerd[1575]: time="2025-08-13T07:04:21.513264543Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:04:21.712225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a20b968f8bc93d003a5b4c4501ddc5af80274194e64cdeb4132c5545e78053b7-rootfs.mount: Deactivated successfully. 
Aug 13 07:04:21.897863 containerd[1575]: time="2025-08-13T07:04:21.897784196Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:04:21.898662 containerd[1575]: time="2025-08-13T07:04:21.898598899Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Aug 13 07:04:21.899957 containerd[1575]: time="2025-08-13T07:04:21.899931610Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:04:21.901330 containerd[1575]: time="2025-08-13T07:04:21.901297178Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.201630128s" Aug 13 07:04:21.901391 containerd[1575]: time="2025-08-13T07:04:21.901332288Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 07:04:21.903867 containerd[1575]: time="2025-08-13T07:04:21.903814463Z" level=info msg="CreateContainer within sandbox \"f5a40b8b699a676391f1daf902e50c854c85f2ff289e77743a0db303ce7e10ee\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 07:04:21.916598 containerd[1575]: time="2025-08-13T07:04:21.916519328Z" level=info msg="CreateContainer within sandbox 
\"f5a40b8b699a676391f1daf902e50c854c85f2ff289e77743a0db303ce7e10ee\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"87843ae7131f083d19c7ba0feed65bc61e9b5cc1c00c224ed6196613c87e08cd\"" Aug 13 07:04:21.917354 containerd[1575]: time="2025-08-13T07:04:21.917216175Z" level=info msg="StartContainer for \"87843ae7131f083d19c7ba0feed65bc61e9b5cc1c00c224ed6196613c87e08cd\"" Aug 13 07:04:21.978268 containerd[1575]: time="2025-08-13T07:04:21.977878023Z" level=info msg="StartContainer for \"87843ae7131f083d19c7ba0feed65bc61e9b5cc1c00c224ed6196613c87e08cd\" returns successfully" Aug 13 07:04:22.082032 kubelet[2662]: E0813 07:04:22.081368 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:22.361250 kubelet[2662]: E0813 07:04:22.361212 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:22.364458 kubelet[2662]: E0813 07:04:22.364413 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:22.366538 containerd[1575]: time="2025-08-13T07:04:22.366490075Z" level=info msg="CreateContainer within sandbox \"056bd671f1bc43a9e604d7a5f82ff2ee376e25fdb5d95590f78c13682523b9e1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 07:04:22.378827 kubelet[2662]: I0813 07:04:22.378651 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-4nmhg" podStartSLOduration=0.897511451 podStartE2EDuration="8.378625881s" podCreationTimestamp="2025-08-13 07:04:14 +0000 UTC" firstStartedPulling="2025-08-13 07:04:14.421123694 +0000 UTC m=+7.356597241" lastFinishedPulling="2025-08-13 
07:04:21.902238124 +0000 UTC m=+14.837711671" observedRunningTime="2025-08-13 07:04:22.37809412 +0000 UTC m=+15.313567677" watchObservedRunningTime="2025-08-13 07:04:22.378625881 +0000 UTC m=+15.314099438" Aug 13 07:04:22.401398 containerd[1575]: time="2025-08-13T07:04:22.399899308Z" level=info msg="CreateContainer within sandbox \"056bd671f1bc43a9e604d7a5f82ff2ee376e25fdb5d95590f78c13682523b9e1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1cc5f2dfb4bb0ba55f5b7fb04527e6c8bd4fae1e5f1bfe814e5f97b34d658b0e\"" Aug 13 07:04:22.401688 containerd[1575]: time="2025-08-13T07:04:22.401640063Z" level=info msg="StartContainer for \"1cc5f2dfb4bb0ba55f5b7fb04527e6c8bd4fae1e5f1bfe814e5f97b34d658b0e\"" Aug 13 07:04:22.496967 containerd[1575]: time="2025-08-13T07:04:22.496897240Z" level=info msg="StartContainer for \"1cc5f2dfb4bb0ba55f5b7fb04527e6c8bd4fae1e5f1bfe814e5f97b34d658b0e\" returns successfully" Aug 13 07:04:22.761386 containerd[1575]: time="2025-08-13T07:04:22.760767340Z" level=info msg="shim disconnected" id=1cc5f2dfb4bb0ba55f5b7fb04527e6c8bd4fae1e5f1bfe814e5f97b34d658b0e namespace=k8s.io Aug 13 07:04:22.761386 containerd[1575]: time="2025-08-13T07:04:22.760884814Z" level=warning msg="cleaning up after shim disconnected" id=1cc5f2dfb4bb0ba55f5b7fb04527e6c8bd4fae1e5f1bfe814e5f97b34d658b0e namespace=k8s.io Aug 13 07:04:22.761386 containerd[1575]: time="2025-08-13T07:04:22.760898622Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:04:22.812877 containerd[1575]: time="2025-08-13T07:04:22.810072275Z" level=warning msg="cleanup warnings time=\"2025-08-13T07:04:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 07:04:23.369489 kubelet[2662]: E0813 07:04:23.369446 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:23.370038 kubelet[2662]: E0813 07:04:23.369690 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:23.372043 containerd[1575]: time="2025-08-13T07:04:23.371983767Z" level=info msg="CreateContainer within sandbox \"056bd671f1bc43a9e604d7a5f82ff2ee376e25fdb5d95590f78c13682523b9e1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 07:04:23.388917 containerd[1575]: time="2025-08-13T07:04:23.388839236Z" level=info msg="CreateContainer within sandbox \"056bd671f1bc43a9e604d7a5f82ff2ee376e25fdb5d95590f78c13682523b9e1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"af0dc31532a2df9d3cada46d18fd58fe186ff02627065d6da97f414de9d1891c\"" Aug 13 07:04:23.389934 containerd[1575]: time="2025-08-13T07:04:23.389600150Z" level=info msg="StartContainer for \"af0dc31532a2df9d3cada46d18fd58fe186ff02627065d6da97f414de9d1891c\"" Aug 13 07:04:23.470135 containerd[1575]: time="2025-08-13T07:04:23.470071239Z" level=info msg="StartContainer for \"af0dc31532a2df9d3cada46d18fd58fe186ff02627065d6da97f414de9d1891c\" returns successfully" Aug 13 07:04:23.605451 kubelet[2662]: I0813 07:04:23.605371 2662 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 07:04:23.758927 kubelet[2662]: I0813 07:04:23.758769 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c5684a5-0de3-4187-994f-92e9fe921808-config-volume\") pod \"coredns-7c65d6cfc9-g6ws9\" (UID: \"2c5684a5-0de3-4187-994f-92e9fe921808\") " pod="kube-system/coredns-7c65d6cfc9-g6ws9" Aug 13 07:04:23.758927 kubelet[2662]: I0813 07:04:23.758809 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wxcw\" 
(UniqueName: \"kubernetes.io/projected/2c5684a5-0de3-4187-994f-92e9fe921808-kube-api-access-9wxcw\") pod \"coredns-7c65d6cfc9-g6ws9\" (UID: \"2c5684a5-0de3-4187-994f-92e9fe921808\") " pod="kube-system/coredns-7c65d6cfc9-g6ws9" Aug 13 07:04:23.758927 kubelet[2662]: I0813 07:04:23.758829 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7ncc\" (UniqueName: \"kubernetes.io/projected/12fa3bb0-d8db-4978-b24c-e17e8c6d58b3-kube-api-access-s7ncc\") pod \"coredns-7c65d6cfc9-czmh6\" (UID: \"12fa3bb0-d8db-4978-b24c-e17e8c6d58b3\") " pod="kube-system/coredns-7c65d6cfc9-czmh6" Aug 13 07:04:23.758927 kubelet[2662]: I0813 07:04:23.758861 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12fa3bb0-d8db-4978-b24c-e17e8c6d58b3-config-volume\") pod \"coredns-7c65d6cfc9-czmh6\" (UID: \"12fa3bb0-d8db-4978-b24c-e17e8c6d58b3\") " pod="kube-system/coredns-7c65d6cfc9-czmh6" Aug 13 07:04:23.927883 kubelet[2662]: E0813 07:04:23.927815 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:23.928992 containerd[1575]: time="2025-08-13T07:04:23.928666453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-czmh6,Uid:12fa3bb0-d8db-4978-b24c-e17e8c6d58b3,Namespace:kube-system,Attempt:0,}" Aug 13 07:04:23.933795 kubelet[2662]: E0813 07:04:23.933753 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:23.934555 containerd[1575]: time="2025-08-13T07:04:23.934362319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g6ws9,Uid:2c5684a5-0de3-4187-994f-92e9fe921808,Namespace:kube-system,Attempt:0,}" Aug 13 
07:04:24.375021 kubelet[2662]: E0813 07:04:24.374981 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:24.388703 kubelet[2662]: I0813 07:04:24.388649 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zlw4j" podStartSLOduration=5.95227479 podStartE2EDuration="11.388626499s" podCreationTimestamp="2025-08-13 07:04:13 +0000 UTC" firstStartedPulling="2025-08-13 07:04:14.26295302 +0000 UTC m=+7.198426557" lastFinishedPulling="2025-08-13 07:04:19.699304718 +0000 UTC m=+12.634778266" observedRunningTime="2025-08-13 07:04:24.387685013 +0000 UTC m=+17.323158560" watchObservedRunningTime="2025-08-13 07:04:24.388626499 +0000 UTC m=+17.324100046" Aug 13 07:04:25.376478 kubelet[2662]: E0813 07:04:25.376423 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:25.655283 systemd-networkd[1248]: cilium_host: Link UP Aug 13 07:04:25.655512 systemd-networkd[1248]: cilium_net: Link UP Aug 13 07:04:25.655517 systemd-networkd[1248]: cilium_net: Gained carrier Aug 13 07:04:25.655781 systemd-networkd[1248]: cilium_host: Gained carrier Aug 13 07:04:25.656077 systemd-networkd[1248]: cilium_host: Gained IPv6LL Aug 13 07:04:25.770770 systemd-networkd[1248]: cilium_vxlan: Link UP Aug 13 07:04:25.770779 systemd-networkd[1248]: cilium_vxlan: Gained carrier Aug 13 07:04:25.994878 kernel: NET: Registered PF_ALG protocol family Aug 13 07:04:26.068037 systemd-networkd[1248]: cilium_net: Gained IPv6LL Aug 13 07:04:26.379084 kubelet[2662]: E0813 07:04:26.378947 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:26.703075 systemd-networkd[1248]: 
lxc_health: Link UP Aug 13 07:04:26.709511 systemd-networkd[1248]: lxc_health: Gained carrier Aug 13 07:04:26.884077 systemd-networkd[1248]: cilium_vxlan: Gained IPv6LL Aug 13 07:04:27.009811 systemd-networkd[1248]: lxcf5b42d3d8105: Link UP Aug 13 07:04:27.015756 systemd-networkd[1248]: lxc3d23f01a2d5e: Link UP Aug 13 07:04:27.022916 kernel: eth0: renamed from tmped153 Aug 13 07:04:27.030872 kernel: eth0: renamed from tmp013c1 Aug 13 07:04:27.037736 systemd-networkd[1248]: lxcf5b42d3d8105: Gained carrier Aug 13 07:04:27.038021 systemd-networkd[1248]: lxc3d23f01a2d5e: Gained carrier Aug 13 07:04:28.182206 kubelet[2662]: E0813 07:04:28.181927 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:28.420115 systemd-networkd[1248]: lxcf5b42d3d8105: Gained IPv6LL Aug 13 07:04:28.420527 systemd-networkd[1248]: lxc_health: Gained IPv6LL Aug 13 07:04:28.868066 systemd-networkd[1248]: lxc3d23f01a2d5e: Gained IPv6LL Aug 13 07:04:30.505898 containerd[1575]: time="2025-08-13T07:04:30.505760142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:04:30.505898 containerd[1575]: time="2025-08-13T07:04:30.505834747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:04:30.505898 containerd[1575]: time="2025-08-13T07:04:30.505873192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:04:30.506505 containerd[1575]: time="2025-08-13T07:04:30.506215539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:04:30.514785 containerd[1575]: time="2025-08-13T07:04:30.514459451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:04:30.514785 containerd[1575]: time="2025-08-13T07:04:30.514554196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:04:30.514785 containerd[1575]: time="2025-08-13T07:04:30.514572170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:04:30.515226 containerd[1575]: time="2025-08-13T07:04:30.514711241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:04:30.539526 systemd-resolved[1465]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:04:30.544616 systemd-resolved[1465]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:04:30.572292 containerd[1575]: time="2025-08-13T07:04:30.572221577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g6ws9,Uid:2c5684a5-0de3-4187-994f-92e9fe921808,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed1538331b7b1eda80037766629d46c85540f143d6df2e7a51310cb30179fa0f\"" Aug 13 07:04:30.573341 kubelet[2662]: E0813 07:04:30.573309 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:30.574463 containerd[1575]: time="2025-08-13T07:04:30.574312378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-czmh6,Uid:12fa3bb0-d8db-4978-b24c-e17e8c6d58b3,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"013c127fd905d31c07fea8aa347168afc1aa489f89b973de1a9c10f1c884b78d\"" Aug 13 07:04:30.575482 kubelet[2662]: E0813 07:04:30.575444 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:30.576120 containerd[1575]: time="2025-08-13T07:04:30.575772150Z" level=info msg="CreateContainer within sandbox \"ed1538331b7b1eda80037766629d46c85540f143d6df2e7a51310cb30179fa0f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:04:30.577795 containerd[1575]: time="2025-08-13T07:04:30.577747536Z" level=info msg="CreateContainer within sandbox \"013c127fd905d31c07fea8aa347168afc1aa489f89b973de1a9c10f1c884b78d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:04:30.647313 containerd[1575]: time="2025-08-13T07:04:30.647225104Z" level=info msg="CreateContainer within sandbox \"013c127fd905d31c07fea8aa347168afc1aa489f89b973de1a9c10f1c884b78d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f8d81bf9571db6958cb72fae79b857aac1fa0ebd5665681552e8b5cd52ac3f0f\"" Aug 13 07:04:30.647992 containerd[1575]: time="2025-08-13T07:04:30.647942611Z" level=info msg="StartContainer for \"f8d81bf9571db6958cb72fae79b857aac1fa0ebd5665681552e8b5cd52ac3f0f\"" Aug 13 07:04:30.699003 containerd[1575]: time="2025-08-13T07:04:30.698934022Z" level=info msg="CreateContainer within sandbox \"ed1538331b7b1eda80037766629d46c85540f143d6df2e7a51310cb30179fa0f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"216139133fc53b7659b6955f60fbafb2bef0069da9e0ff0d2cf5b6447e78f558\"" Aug 13 07:04:30.700041 containerd[1575]: time="2025-08-13T07:04:30.699968326Z" level=info msg="StartContainer for \"216139133fc53b7659b6955f60fbafb2bef0069da9e0ff0d2cf5b6447e78f558\"" Aug 13 07:04:30.722252 containerd[1575]: time="2025-08-13T07:04:30.722149998Z" level=info msg="StartContainer for 
\"f8d81bf9571db6958cb72fae79b857aac1fa0ebd5665681552e8b5cd52ac3f0f\" returns successfully" Aug 13 07:04:30.768150 containerd[1575]: time="2025-08-13T07:04:30.767947095Z" level=info msg="StartContainer for \"216139133fc53b7659b6955f60fbafb2bef0069da9e0ff0d2cf5b6447e78f558\" returns successfully" Aug 13 07:04:31.390947 kubelet[2662]: E0813 07:04:31.390809 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:31.393680 kubelet[2662]: E0813 07:04:31.393646 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:31.402819 kubelet[2662]: I0813 07:04:31.402748 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-czmh6" podStartSLOduration=17.402724952 podStartE2EDuration="17.402724952s" podCreationTimestamp="2025-08-13 07:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:04:31.401530833 +0000 UTC m=+24.337004380" watchObservedRunningTime="2025-08-13 07:04:31.402724952 +0000 UTC m=+24.338198499" Aug 13 07:04:32.151086 systemd[1]: Started sshd@7-10.0.0.23:22-10.0.0.1:50474.service - OpenSSH per-connection server daemon (10.0.0.1:50474). Aug 13 07:04:32.185345 sshd[4062]: Accepted publickey for core from 10.0.0.1 port 50474 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:04:32.187219 sshd[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:04:32.191945 systemd-logind[1555]: New session 8 of user core. Aug 13 07:04:32.201240 systemd[1]: Started session-8.scope - Session 8 of User core. 
Aug 13 07:04:32.396544 kubelet[2662]: E0813 07:04:32.396165 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:32.396544 kubelet[2662]: E0813 07:04:32.396243 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:32.542099 sshd[4062]: pam_unix(sshd:session): session closed for user core Aug 13 07:04:32.547018 systemd[1]: sshd@7-10.0.0.23:22-10.0.0.1:50474.service: Deactivated successfully. Aug 13 07:04:32.549610 systemd-logind[1555]: Session 8 logged out. Waiting for processes to exit. Aug 13 07:04:32.549716 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 07:04:32.551134 systemd-logind[1555]: Removed session 8. Aug 13 07:04:33.397820 kubelet[2662]: E0813 07:04:33.397784 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:33.398294 kubelet[2662]: E0813 07:04:33.398070 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:37.554058 systemd[1]: Started sshd@8-10.0.0.23:22-10.0.0.1:50488.service - OpenSSH per-connection server daemon (10.0.0.1:50488). Aug 13 07:04:37.583735 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 50488 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:04:37.585324 sshd[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:04:37.589611 systemd-logind[1555]: New session 9 of user core. Aug 13 07:04:37.599145 systemd[1]: Started session-9.scope - Session 9 of User core. 
Aug 13 07:04:37.707822 sshd[4079]: pam_unix(sshd:session): session closed for user core Aug 13 07:04:37.712543 systemd[1]: sshd@8-10.0.0.23:22-10.0.0.1:50488.service: Deactivated successfully. Aug 13 07:04:37.714821 systemd-logind[1555]: Session 9 logged out. Waiting for processes to exit. Aug 13 07:04:37.714895 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 07:04:37.716140 systemd-logind[1555]: Removed session 9. Aug 13 07:04:42.723190 systemd[1]: Started sshd@9-10.0.0.23:22-10.0.0.1:57586.service - OpenSSH per-connection server daemon (10.0.0.1:57586). Aug 13 07:04:42.755646 sshd[4095]: Accepted publickey for core from 10.0.0.1 port 57586 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:04:42.757610 sshd[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:04:42.762000 systemd-logind[1555]: New session 10 of user core. Aug 13 07:04:42.771347 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 07:04:42.816331 kubelet[2662]: I0813 07:04:42.816270 2662 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:04:42.817241 kubelet[2662]: E0813 07:04:42.816964 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:42.839330 kubelet[2662]: I0813 07:04:42.839240 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-g6ws9" podStartSLOduration=28.839215202 podStartE2EDuration="28.839215202s" podCreationTimestamp="2025-08-13 07:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:04:31.434627699 +0000 UTC m=+24.370101246" watchObservedRunningTime="2025-08-13 07:04:42.839215202 +0000 UTC m=+35.774688749" Aug 13 07:04:42.896913 sshd[4095]: pam_unix(sshd:session): 
session closed for user core Aug 13 07:04:42.902264 systemd[1]: sshd@9-10.0.0.23:22-10.0.0.1:57586.service: Deactivated successfully. Aug 13 07:04:42.905102 systemd-logind[1555]: Session 10 logged out. Waiting for processes to exit. Aug 13 07:04:42.905196 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 07:04:42.906788 systemd-logind[1555]: Removed session 10. Aug 13 07:04:43.419635 kubelet[2662]: E0813 07:04:43.419587 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:04:47.906187 systemd[1]: Started sshd@10-10.0.0.23:22-10.0.0.1:57592.service - OpenSSH per-connection server daemon (10.0.0.1:57592). Aug 13 07:04:47.940789 sshd[4115]: Accepted publickey for core from 10.0.0.1 port 57592 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:04:47.942733 sshd[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:04:47.947855 systemd-logind[1555]: New session 11 of user core. Aug 13 07:04:47.958114 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 07:04:48.074905 sshd[4115]: pam_unix(sshd:session): session closed for user core Aug 13 07:04:48.084125 systemd[1]: Started sshd@11-10.0.0.23:22-10.0.0.1:56546.service - OpenSSH per-connection server daemon (10.0.0.1:56546). Aug 13 07:04:48.084825 systemd[1]: sshd@10-10.0.0.23:22-10.0.0.1:57592.service: Deactivated successfully. Aug 13 07:04:48.087027 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 07:04:48.089223 systemd-logind[1555]: Session 11 logged out. Waiting for processes to exit. Aug 13 07:04:48.090432 systemd-logind[1555]: Removed session 11. 
Aug 13 07:04:48.112433 sshd[4128]: Accepted publickey for core from 10.0.0.1 port 56546 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:04:48.114032 sshd[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:04:48.118559 systemd-logind[1555]: New session 12 of user core. Aug 13 07:04:48.127112 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 07:04:48.269745 sshd[4128]: pam_unix(sshd:session): session closed for user core Aug 13 07:04:48.280857 systemd[1]: Started sshd@12-10.0.0.23:22-10.0.0.1:56548.service - OpenSSH per-connection server daemon (10.0.0.1:56548). Aug 13 07:04:48.283550 systemd[1]: sshd@11-10.0.0.23:22-10.0.0.1:56546.service: Deactivated successfully. Aug 13 07:04:48.286789 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 07:04:48.290561 systemd-logind[1555]: Session 12 logged out. Waiting for processes to exit. Aug 13 07:04:48.295294 systemd-logind[1555]: Removed session 12. Aug 13 07:04:48.319434 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 56548 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:04:48.321102 sshd[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:04:48.326124 systemd-logind[1555]: New session 13 of user core. Aug 13 07:04:48.339124 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 07:04:48.454173 sshd[4141]: pam_unix(sshd:session): session closed for user core Aug 13 07:04:48.458495 systemd[1]: sshd@12-10.0.0.23:22-10.0.0.1:56548.service: Deactivated successfully. Aug 13 07:04:48.461216 systemd-logind[1555]: Session 13 logged out. Waiting for processes to exit. Aug 13 07:04:48.461448 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 07:04:48.462586 systemd-logind[1555]: Removed session 13. 
Aug 13 07:04:53.465210 systemd[1]: Started sshd@13-10.0.0.23:22-10.0.0.1:56554.service - OpenSSH per-connection server daemon (10.0.0.1:56554). Aug 13 07:04:53.493307 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 56554 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:04:53.495178 sshd[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:04:53.499465 systemd-logind[1555]: New session 14 of user core. Aug 13 07:04:53.515115 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 07:04:53.636776 sshd[4160]: pam_unix(sshd:session): session closed for user core Aug 13 07:04:53.641121 systemd[1]: sshd@13-10.0.0.23:22-10.0.0.1:56554.service: Deactivated successfully. Aug 13 07:04:53.643743 systemd-logind[1555]: Session 14 logged out. Waiting for processes to exit. Aug 13 07:04:53.644005 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 07:04:53.645136 systemd-logind[1555]: Removed session 14. Aug 13 07:04:58.657157 systemd[1]: Started sshd@14-10.0.0.23:22-10.0.0.1:57368.service - OpenSSH per-connection server daemon (10.0.0.1:57368). Aug 13 07:04:58.685404 sshd[4175]: Accepted publickey for core from 10.0.0.1 port 57368 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:04:58.687337 sshd[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:04:58.692259 systemd-logind[1555]: New session 15 of user core. Aug 13 07:04:58.699153 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 07:04:58.803782 sshd[4175]: pam_unix(sshd:session): session closed for user core Aug 13 07:04:58.807516 systemd[1]: sshd@14-10.0.0.23:22-10.0.0.1:57368.service: Deactivated successfully. Aug 13 07:04:58.810208 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 07:04:58.810998 systemd-logind[1555]: Session 15 logged out. Waiting for processes to exit. 
Aug 13 07:04:58.811953 systemd-logind[1555]: Removed session 15. Aug 13 07:05:03.818079 systemd[1]: Started sshd@15-10.0.0.23:22-10.0.0.1:57376.service - OpenSSH per-connection server daemon (10.0.0.1:57376). Aug 13 07:05:03.845524 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 57376 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:05:03.847427 sshd[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:05:03.851744 systemd-logind[1555]: New session 16 of user core. Aug 13 07:05:03.862200 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 07:05:03.974645 sshd[4191]: pam_unix(sshd:session): session closed for user core Aug 13 07:05:03.982115 systemd[1]: Started sshd@16-10.0.0.23:22-10.0.0.1:57382.service - OpenSSH per-connection server daemon (10.0.0.1:57382). Aug 13 07:05:03.982773 systemd[1]: sshd@15-10.0.0.23:22-10.0.0.1:57376.service: Deactivated successfully. Aug 13 07:05:03.987187 systemd-logind[1555]: Session 16 logged out. Waiting for processes to exit. Aug 13 07:05:03.988206 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 07:05:03.989518 systemd-logind[1555]: Removed session 16. Aug 13 07:05:04.015002 sshd[4204]: Accepted publickey for core from 10.0.0.1 port 57382 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:05:04.016927 sshd[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:05:04.021391 systemd-logind[1555]: New session 17 of user core. Aug 13 07:05:04.035111 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 07:05:04.251247 sshd[4204]: pam_unix(sshd:session): session closed for user core Aug 13 07:05:04.260098 systemd[1]: Started sshd@17-10.0.0.23:22-10.0.0.1:57386.service - OpenSSH per-connection server daemon (10.0.0.1:57386). Aug 13 07:05:04.260599 systemd[1]: sshd@16-10.0.0.23:22-10.0.0.1:57382.service: Deactivated successfully. 
Aug 13 07:05:04.263584 systemd-logind[1555]: Session 17 logged out. Waiting for processes to exit. Aug 13 07:05:04.266314 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 07:05:04.267190 systemd-logind[1555]: Removed session 17. Aug 13 07:05:04.292559 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 57386 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:05:04.294157 sshd[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:05:04.298689 systemd-logind[1555]: New session 18 of user core. Aug 13 07:05:04.309117 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 07:05:06.802800 sshd[4217]: pam_unix(sshd:session): session closed for user core Aug 13 07:05:06.812200 systemd[1]: Started sshd@18-10.0.0.23:22-10.0.0.1:57402.service - OpenSSH per-connection server daemon (10.0.0.1:57402). Aug 13 07:05:06.813057 systemd[1]: sshd@17-10.0.0.23:22-10.0.0.1:57386.service: Deactivated successfully. Aug 13 07:05:06.818511 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 07:05:06.820411 systemd-logind[1555]: Session 18 logged out. Waiting for processes to exit. Aug 13 07:05:06.824260 systemd-logind[1555]: Removed session 18. Aug 13 07:05:06.846667 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 57402 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:05:06.848795 sshd[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:05:06.854261 systemd-logind[1555]: New session 19 of user core. Aug 13 07:05:06.864283 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 07:05:07.122930 sshd[4238]: pam_unix(sshd:session): session closed for user core Aug 13 07:05:07.130173 systemd[1]: Started sshd@19-10.0.0.23:22-10.0.0.1:57406.service - OpenSSH per-connection server daemon (10.0.0.1:57406). 
Aug 13 07:05:07.130727 systemd[1]: sshd@18-10.0.0.23:22-10.0.0.1:57402.service: Deactivated successfully. Aug 13 07:05:07.136320 systemd-logind[1555]: Session 19 logged out. Waiting for processes to exit. Aug 13 07:05:07.137495 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 07:05:07.138674 systemd-logind[1555]: Removed session 19. Aug 13 07:05:07.159978 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 57406 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:05:07.161813 sshd[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:05:07.166069 systemd-logind[1555]: New session 20 of user core. Aug 13 07:05:07.177108 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 07:05:07.286181 sshd[4250]: pam_unix(sshd:session): session closed for user core Aug 13 07:05:07.290866 systemd[1]: sshd@19-10.0.0.23:22-10.0.0.1:57406.service: Deactivated successfully. Aug 13 07:05:07.293281 systemd-logind[1555]: Session 20 logged out. Waiting for processes to exit. Aug 13 07:05:07.293348 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 07:05:07.294601 systemd-logind[1555]: Removed session 20. Aug 13 07:05:12.297169 systemd[1]: Started sshd@20-10.0.0.23:22-10.0.0.1:38024.service - OpenSSH per-connection server daemon (10.0.0.1:38024). Aug 13 07:05:12.324810 sshd[4270]: Accepted publickey for core from 10.0.0.1 port 38024 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:05:12.326775 sshd[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:05:12.331353 systemd-logind[1555]: New session 21 of user core. Aug 13 07:05:12.347141 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 07:05:12.514219 sshd[4270]: pam_unix(sshd:session): session closed for user core Aug 13 07:05:12.518715 systemd[1]: sshd@20-10.0.0.23:22-10.0.0.1:38024.service: Deactivated successfully. 
Aug 13 07:05:12.521468 systemd-logind[1555]: Session 21 logged out. Waiting for processes to exit. Aug 13 07:05:12.521609 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 07:05:12.522896 systemd-logind[1555]: Removed session 21. Aug 13 07:05:16.315010 kubelet[2662]: E0813 07:05:16.314956 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:05:17.528161 systemd[1]: Started sshd@21-10.0.0.23:22-10.0.0.1:38026.service - OpenSSH per-connection server daemon (10.0.0.1:38026). Aug 13 07:05:17.557524 sshd[4290]: Accepted publickey for core from 10.0.0.1 port 38026 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:05:17.559264 sshd[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:05:17.563595 systemd-logind[1555]: New session 22 of user core. Aug 13 07:05:17.573119 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 07:05:17.688299 sshd[4290]: pam_unix(sshd:session): session closed for user core Aug 13 07:05:17.692686 systemd[1]: sshd@21-10.0.0.23:22-10.0.0.1:38026.service: Deactivated successfully. Aug 13 07:05:17.695556 systemd-logind[1555]: Session 22 logged out. Waiting for processes to exit. Aug 13 07:05:17.695620 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 07:05:17.697314 systemd-logind[1555]: Removed session 22. 
Aug 13 07:05:20.314227 kubelet[2662]: E0813 07:05:20.314152 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:05:22.314037 kubelet[2662]: E0813 07:05:22.313996 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:05:22.704082 systemd[1]: Started sshd@22-10.0.0.23:22-10.0.0.1:54414.service - OpenSSH per-connection server daemon (10.0.0.1:54414). Aug 13 07:05:22.731496 sshd[4305]: Accepted publickey for core from 10.0.0.1 port 54414 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:05:22.733164 sshd[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:05:22.737211 systemd-logind[1555]: New session 23 of user core. Aug 13 07:05:22.744455 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 07:05:22.848809 sshd[4305]: pam_unix(sshd:session): session closed for user core Aug 13 07:05:22.852904 systemd[1]: sshd@22-10.0.0.23:22-10.0.0.1:54414.service: Deactivated successfully. Aug 13 07:05:22.855393 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 07:05:22.856083 systemd-logind[1555]: Session 23 logged out. Waiting for processes to exit. Aug 13 07:05:22.856952 systemd-logind[1555]: Removed session 23. Aug 13 07:05:27.875148 systemd[1]: Started sshd@23-10.0.0.23:22-10.0.0.1:54420.service - OpenSSH per-connection server daemon (10.0.0.1:54420). Aug 13 07:05:27.902623 sshd[4321]: Accepted publickey for core from 10.0.0.1 port 54420 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:05:27.904195 sshd[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:05:27.908110 systemd-logind[1555]: New session 24 of user core. 
Aug 13 07:05:27.917141 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 07:05:28.025244 sshd[4321]: pam_unix(sshd:session): session closed for user core
Aug 13 07:05:28.040139 systemd[1]: Started sshd@24-10.0.0.23:22-10.0.0.1:49210.service - OpenSSH per-connection server daemon (10.0.0.1:49210).
Aug 13 07:05:28.040738 systemd[1]: sshd@23-10.0.0.23:22-10.0.0.1:54420.service: Deactivated successfully.
Aug 13 07:05:28.045276 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 07:05:28.046404 systemd-logind[1555]: Session 24 logged out. Waiting for processes to exit.
Aug 13 07:05:28.048143 systemd-logind[1555]: Removed session 24.
Aug 13 07:05:28.069464 sshd[4334]: Accepted publickey for core from 10.0.0.1 port 49210 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:05:28.071038 sshd[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:05:28.075531 systemd-logind[1555]: New session 25 of user core.
Aug 13 07:05:28.091138 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 13 07:05:29.456426 containerd[1575]: time="2025-08-13T07:05:29.456367549Z" level=info msg="StopContainer for \"87843ae7131f083d19c7ba0feed65bc61e9b5cc1c00c224ed6196613c87e08cd\" with timeout 30 (s)"
Aug 13 07:05:29.457268 containerd[1575]: time="2025-08-13T07:05:29.457073520Z" level=info msg="Stop container \"87843ae7131f083d19c7ba0feed65bc61e9b5cc1c00c224ed6196613c87e08cd\" with signal terminated"
Aug 13 07:05:29.518403 containerd[1575]: time="2025-08-13T07:05:29.518256549Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 07:05:29.521829 containerd[1575]: time="2025-08-13T07:05:29.521793272Z" level=info msg="StopContainer for \"af0dc31532a2df9d3cada46d18fd58fe186ff02627065d6da97f414de9d1891c\" with timeout 2 (s)"
Aug 13 07:05:29.523799 containerd[1575]: time="2025-08-13T07:05:29.522273526Z" level=info msg="Stop container \"af0dc31532a2df9d3cada46d18fd58fe186ff02627065d6da97f414de9d1891c\" with signal terminated"
Aug 13 07:05:29.524209 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87843ae7131f083d19c7ba0feed65bc61e9b5cc1c00c224ed6196613c87e08cd-rootfs.mount: Deactivated successfully.
Aug 13 07:05:29.529418 systemd-networkd[1248]: lxc_health: Link DOWN
Aug 13 07:05:29.529424 systemd-networkd[1248]: lxc_health: Lost carrier
Aug 13 07:05:29.535735 containerd[1575]: time="2025-08-13T07:05:29.535677328Z" level=info msg="shim disconnected" id=87843ae7131f083d19c7ba0feed65bc61e9b5cc1c00c224ed6196613c87e08cd namespace=k8s.io
Aug 13 07:05:29.535735 containerd[1575]: time="2025-08-13T07:05:29.535730971Z" level=warning msg="cleaning up after shim disconnected" id=87843ae7131f083d19c7ba0feed65bc61e9b5cc1c00c224ed6196613c87e08cd namespace=k8s.io
Aug 13 07:05:29.535838 containerd[1575]: time="2025-08-13T07:05:29.535741441Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:05:29.555984 containerd[1575]: time="2025-08-13T07:05:29.555934891Z" level=info msg="StopContainer for \"87843ae7131f083d19c7ba0feed65bc61e9b5cc1c00c224ed6196613c87e08cd\" returns successfully"
Aug 13 07:05:29.560309 containerd[1575]: time="2025-08-13T07:05:29.560274281Z" level=info msg="StopPodSandbox for \"f5a40b8b699a676391f1daf902e50c854c85f2ff289e77743a0db303ce7e10ee\""
Aug 13 07:05:29.560418 containerd[1575]: time="2025-08-13T07:05:29.560321421Z" level=info msg="Container to stop \"87843ae7131f083d19c7ba0feed65bc61e9b5cc1c00c224ed6196613c87e08cd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 07:05:29.565277 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f5a40b8b699a676391f1daf902e50c854c85f2ff289e77743a0db303ce7e10ee-shm.mount: Deactivated successfully.
Aug 13 07:05:29.576526 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af0dc31532a2df9d3cada46d18fd58fe186ff02627065d6da97f414de9d1891c-rootfs.mount: Deactivated successfully.
Aug 13 07:05:29.584150 containerd[1575]: time="2025-08-13T07:05:29.584076513Z" level=info msg="shim disconnected" id=af0dc31532a2df9d3cada46d18fd58fe186ff02627065d6da97f414de9d1891c namespace=k8s.io
Aug 13 07:05:29.584150 containerd[1575]: time="2025-08-13T07:05:29.584144151Z" level=warning msg="cleaning up after shim disconnected" id=af0dc31532a2df9d3cada46d18fd58fe186ff02627065d6da97f414de9d1891c namespace=k8s.io
Aug 13 07:05:29.584150 containerd[1575]: time="2025-08-13T07:05:29.584154892Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:05:29.591522 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5a40b8b699a676391f1daf902e50c854c85f2ff289e77743a0db303ce7e10ee-rootfs.mount: Deactivated successfully.
Aug 13 07:05:29.593904 containerd[1575]: time="2025-08-13T07:05:29.593655875Z" level=info msg="shim disconnected" id=f5a40b8b699a676391f1daf902e50c854c85f2ff289e77743a0db303ce7e10ee namespace=k8s.io
Aug 13 07:05:29.593904 containerd[1575]: time="2025-08-13T07:05:29.593752158Z" level=warning msg="cleaning up after shim disconnected" id=f5a40b8b699a676391f1daf902e50c854c85f2ff289e77743a0db303ce7e10ee namespace=k8s.io
Aug 13 07:05:29.593904 containerd[1575]: time="2025-08-13T07:05:29.593766575Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:05:29.606301 containerd[1575]: time="2025-08-13T07:05:29.606253535Z" level=info msg="StopContainer for \"af0dc31532a2df9d3cada46d18fd58fe186ff02627065d6da97f414de9d1891c\" returns successfully"
Aug 13 07:05:29.606786 containerd[1575]: time="2025-08-13T07:05:29.606752804Z" level=info msg="StopPodSandbox for \"056bd671f1bc43a9e604d7a5f82ff2ee376e25fdb5d95590f78c13682523b9e1\""
Aug 13 07:05:29.606838 containerd[1575]: time="2025-08-13T07:05:29.606787199Z" level=info msg="Container to stop \"af0dc31532a2df9d3cada46d18fd58fe186ff02627065d6da97f414de9d1891c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 07:05:29.606838 containerd[1575]: time="2025-08-13T07:05:29.606802979Z" level=info msg="Container to stop \"4042ad5066888ba176b40046126bdb0a845f7195b13158ab02dfabc7f384b361\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 07:05:29.606838 containerd[1575]: time="2025-08-13T07:05:29.606815253Z" level=info msg="Container to stop \"d3eecb0475d033e674238c03dbeb29fcefc783247af1bfc247cb69d9547f53fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 07:05:29.606838 containerd[1575]: time="2025-08-13T07:05:29.606826865Z" level=info msg="Container to stop \"a20b968f8bc93d003a5b4c4501ddc5af80274194e64cdeb4132c5545e78053b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 07:05:29.606967 containerd[1575]: time="2025-08-13T07:05:29.606838477Z" level=info msg="Container to stop \"1cc5f2dfb4bb0ba55f5b7fb04527e6c8bd4fae1e5f1bfe814e5f97b34d658b0e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 07:05:29.610963 containerd[1575]: time="2025-08-13T07:05:29.610902884Z" level=info msg="TearDown network for sandbox \"f5a40b8b699a676391f1daf902e50c854c85f2ff289e77743a0db303ce7e10ee\" successfully"
Aug 13 07:05:29.610963 containerd[1575]: time="2025-08-13T07:05:29.610945535Z" level=info msg="StopPodSandbox for \"f5a40b8b699a676391f1daf902e50c854c85f2ff289e77743a0db303ce7e10ee\" returns successfully"
Aug 13 07:05:29.645005 containerd[1575]: time="2025-08-13T07:05:29.644918954Z" level=info msg="shim disconnected" id=056bd671f1bc43a9e604d7a5f82ff2ee376e25fdb5d95590f78c13682523b9e1 namespace=k8s.io
Aug 13 07:05:29.645005 containerd[1575]: time="2025-08-13T07:05:29.644988545Z" level=warning msg="cleaning up after shim disconnected" id=056bd671f1bc43a9e604d7a5f82ff2ee376e25fdb5d95590f78c13682523b9e1 namespace=k8s.io
Aug 13 07:05:29.645005 containerd[1575]: time="2025-08-13T07:05:29.645001100Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:05:29.658837 containerd[1575]: time="2025-08-13T07:05:29.658785015Z" level=info msg="TearDown network for sandbox \"056bd671f1bc43a9e604d7a5f82ff2ee376e25fdb5d95590f78c13682523b9e1\" successfully"
Aug 13 07:05:29.658837 containerd[1575]: time="2025-08-13T07:05:29.658824460Z" level=info msg="StopPodSandbox for \"056bd671f1bc43a9e604d7a5f82ff2ee376e25fdb5d95590f78c13682523b9e1\" returns successfully"
Aug 13 07:05:29.757159 kubelet[2662]: I0813 07:05:29.756983 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-host-proc-sys-kernel\") pod \"a21f3382-b073-4f49-8fe5-7677d85787f4\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") "
Aug 13 07:05:29.757159 kubelet[2662]: I0813 07:05:29.757038 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-lib-modules\") pod \"a21f3382-b073-4f49-8fe5-7677d85787f4\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") "
Aug 13 07:05:29.757159 kubelet[2662]: I0813 07:05:29.757064 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a21f3382-b073-4f49-8fe5-7677d85787f4-hubble-tls\") pod \"a21f3382-b073-4f49-8fe5-7677d85787f4\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") "
Aug 13 07:05:29.757159 kubelet[2662]: I0813 07:05:29.757080 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a21f3382-b073-4f49-8fe5-7677d85787f4-clustermesh-secrets\") pod \"a21f3382-b073-4f49-8fe5-7677d85787f4\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") "
Aug 13 07:05:29.757159 kubelet[2662]: I0813 07:05:29.757096 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-bpf-maps\") pod \"a21f3382-b073-4f49-8fe5-7677d85787f4\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") "
Aug 13 07:05:29.757159 kubelet[2662]: I0813 07:05:29.757108 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-hostproc\") pod \"a21f3382-b073-4f49-8fe5-7677d85787f4\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") "
Aug 13 07:05:29.757789 kubelet[2662]: I0813 07:05:29.757128 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7aeb2336-7c3a-4ae6-8767-81d84e19de2d-cilium-config-path\") pod \"7aeb2336-7c3a-4ae6-8767-81d84e19de2d\" (UID: \"7aeb2336-7c3a-4ae6-8767-81d84e19de2d\") "
Aug 13 07:05:29.757789 kubelet[2662]: I0813 07:05:29.757142 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-cilium-run\") pod \"a21f3382-b073-4f49-8fe5-7677d85787f4\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") "
Aug 13 07:05:29.757789 kubelet[2662]: I0813 07:05:29.757155 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a21f3382-b073-4f49-8fe5-7677d85787f4-cilium-config-path\") pod \"a21f3382-b073-4f49-8fe5-7677d85787f4\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") "
Aug 13 07:05:29.757789 kubelet[2662]: I0813 07:05:29.757171 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-cni-path\") pod \"a21f3382-b073-4f49-8fe5-7677d85787f4\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") "
Aug 13 07:05:29.757789 kubelet[2662]: I0813 07:05:29.757184 2662 reconciler_common.go:159]
"operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-xtables-lock\") pod \"a21f3382-b073-4f49-8fe5-7677d85787f4\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") "
Aug 13 07:05:29.757789 kubelet[2662]: I0813 07:05:29.757164 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a21f3382-b073-4f49-8fe5-7677d85787f4" (UID: "a21f3382-b073-4f49-8fe5-7677d85787f4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 07:05:29.757977 kubelet[2662]: I0813 07:05:29.757240 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a21f3382-b073-4f49-8fe5-7677d85787f4" (UID: "a21f3382-b073-4f49-8fe5-7677d85787f4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 07:05:29.757977 kubelet[2662]: I0813 07:05:29.757198 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-cilium-cgroup\") pod \"a21f3382-b073-4f49-8fe5-7677d85787f4\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") "
Aug 13 07:05:29.757977 kubelet[2662]: I0813 07:05:29.757164 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a21f3382-b073-4f49-8fe5-7677d85787f4" (UID: "a21f3382-b073-4f49-8fe5-7677d85787f4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 07:05:29.757977 kubelet[2662]: I0813 07:05:29.757278 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a21f3382-b073-4f49-8fe5-7677d85787f4" (UID: "a21f3382-b073-4f49-8fe5-7677d85787f4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 07:05:29.757977 kubelet[2662]: I0813 07:05:29.757302 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rnfj\" (UniqueName: \"kubernetes.io/projected/7aeb2336-7c3a-4ae6-8767-81d84e19de2d-kube-api-access-5rnfj\") pod \"7aeb2336-7c3a-4ae6-8767-81d84e19de2d\" (UID: \"7aeb2336-7c3a-4ae6-8767-81d84e19de2d\") "
Aug 13 07:05:29.758096 kubelet[2662]: I0813 07:05:29.757334 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqp9r\" (UniqueName: \"kubernetes.io/projected/a21f3382-b073-4f49-8fe5-7677d85787f4-kube-api-access-tqp9r\") pod \"a21f3382-b073-4f49-8fe5-7677d85787f4\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") "
Aug 13 07:05:29.758096 kubelet[2662]: I0813 07:05:29.757358 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-host-proc-sys-net\") pod \"a21f3382-b073-4f49-8fe5-7677d85787f4\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") "
Aug 13 07:05:29.758096 kubelet[2662]: I0813 07:05:29.757381 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-etc-cni-netd\") pod \"a21f3382-b073-4f49-8fe5-7677d85787f4\" (UID: \"a21f3382-b073-4f49-8fe5-7677d85787f4\") "
Aug 13 07:05:29.758096 kubelet[2662]: I0813 07:05:29.757439 2662 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-lib-modules\") on node \"localhost\" DevicePath \"\""
Aug 13 07:05:29.758096 kubelet[2662]: I0813 07:05:29.757455 2662 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Aug 13 07:05:29.758096 kubelet[2662]: I0813 07:05:29.757468 2662 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Aug 13 07:05:29.758234 kubelet[2662]: I0813 07:05:29.757496 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a21f3382-b073-4f49-8fe5-7677d85787f4" (UID: "a21f3382-b073-4f49-8fe5-7677d85787f4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 07:05:29.761384 kubelet[2662]: I0813 07:05:29.761347 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a21f3382-b073-4f49-8fe5-7677d85787f4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a21f3382-b073-4f49-8fe5-7677d85787f4" (UID: "a21f3382-b073-4f49-8fe5-7677d85787f4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 07:05:29.761639 kubelet[2662]: I0813 07:05:29.761571 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-cni-path" (OuterVolumeSpecName: "cni-path") pod "a21f3382-b073-4f49-8fe5-7677d85787f4" (UID: "a21f3382-b073-4f49-8fe5-7677d85787f4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 07:05:29.761639 kubelet[2662]: I0813 07:05:29.761595 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a21f3382-b073-4f49-8fe5-7677d85787f4" (UID: "a21f3382-b073-4f49-8fe5-7677d85787f4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 07:05:29.762047 kubelet[2662]: I0813 07:05:29.761937 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a21f3382-b073-4f49-8fe5-7677d85787f4" (UID: "a21f3382-b073-4f49-8fe5-7677d85787f4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 07:05:29.762047 kubelet[2662]: I0813 07:05:29.761991 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a21f3382-b073-4f49-8fe5-7677d85787f4" (UID: "a21f3382-b073-4f49-8fe5-7677d85787f4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 07:05:29.762047 kubelet[2662]: I0813 07:05:29.762017 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-hostproc" (OuterVolumeSpecName: "hostproc") pod "a21f3382-b073-4f49-8fe5-7677d85787f4" (UID: "a21f3382-b073-4f49-8fe5-7677d85787f4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 07:05:29.762688 kubelet[2662]: I0813 07:05:29.762636 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aeb2336-7c3a-4ae6-8767-81d84e19de2d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7aeb2336-7c3a-4ae6-8767-81d84e19de2d" (UID: "7aeb2336-7c3a-4ae6-8767-81d84e19de2d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 07:05:29.763047 kubelet[2662]: I0813 07:05:29.763001 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a21f3382-b073-4f49-8fe5-7677d85787f4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a21f3382-b073-4f49-8fe5-7677d85787f4" (UID: "a21f3382-b073-4f49-8fe5-7677d85787f4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 07:05:29.763925 kubelet[2662]: I0813 07:05:29.763880 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a21f3382-b073-4f49-8fe5-7677d85787f4-kube-api-access-tqp9r" (OuterVolumeSpecName: "kube-api-access-tqp9r") pod "a21f3382-b073-4f49-8fe5-7677d85787f4" (UID: "a21f3382-b073-4f49-8fe5-7677d85787f4"). InnerVolumeSpecName "kube-api-access-tqp9r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 07:05:29.764738 kubelet[2662]: I0813 07:05:29.764693 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aeb2336-7c3a-4ae6-8767-81d84e19de2d-kube-api-access-5rnfj" (OuterVolumeSpecName: "kube-api-access-5rnfj") pod "7aeb2336-7c3a-4ae6-8767-81d84e19de2d" (UID: "7aeb2336-7c3a-4ae6-8767-81d84e19de2d"). InnerVolumeSpecName "kube-api-access-5rnfj".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 07:05:29.765434 kubelet[2662]: I0813 07:05:29.765388 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a21f3382-b073-4f49-8fe5-7677d85787f4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a21f3382-b073-4f49-8fe5-7677d85787f4" (UID: "a21f3382-b073-4f49-8fe5-7677d85787f4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 07:05:29.858203 kubelet[2662]: I0813 07:05:29.858137 2662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqp9r\" (UniqueName: \"kubernetes.io/projected/a21f3382-b073-4f49-8fe5-7677d85787f4-kube-api-access-tqp9r\") on node \"localhost\" DevicePath \"\""
Aug 13 07:05:29.858203 kubelet[2662]: I0813 07:05:29.858190 2662 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Aug 13 07:05:29.858203 kubelet[2662]: I0813 07:05:29.858204 2662 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Aug 13 07:05:29.858203 kubelet[2662]: I0813 07:05:29.858215 2662 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a21f3382-b073-4f49-8fe5-7677d85787f4-hubble-tls\") on node \"localhost\" DevicePath \"\""
Aug 13 07:05:29.858457 kubelet[2662]: I0813 07:05:29.858228 2662 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a21f3382-b073-4f49-8fe5-7677d85787f4-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Aug 13 07:05:29.858457 kubelet[2662]: I0813 07:05:29.858239 2662 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-bpf-maps\") on node \"localhost\" DevicePath \"\""
Aug 13 07:05:29.858457 kubelet[2662]: I0813 07:05:29.858250 2662 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-hostproc\") on node \"localhost\" DevicePath \"\""
Aug 13 07:05:29.858457 kubelet[2662]: I0813 07:05:29.858261 2662 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7aeb2336-7c3a-4ae6-8767-81d84e19de2d-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Aug 13 07:05:29.858457 kubelet[2662]: I0813 07:05:29.858272 2662 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-cilium-run\") on node \"localhost\" DevicePath \"\""
Aug 13 07:05:29.858457 kubelet[2662]: I0813 07:05:29.858284 2662 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a21f3382-b073-4f49-8fe5-7677d85787f4-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Aug 13 07:05:29.858457 kubelet[2662]: I0813 07:05:29.858319 2662 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-cni-path\") on node \"localhost\" DevicePath \"\""
Aug 13 07:05:29.858457 kubelet[2662]: I0813 07:05:29.858334 2662 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a21f3382-b073-4f49-8fe5-7677d85787f4-xtables-lock\") on node \"localhost\" DevicePath \"\""
Aug 13 07:05:29.858654 kubelet[2662]: I0813 07:05:29.858366 2662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rnfj\" (UniqueName: \"kubernetes.io/projected/7aeb2336-7c3a-4ae6-8767-81d84e19de2d-kube-api-access-5rnfj\") on node \"localhost\" DevicePath \"\""
Aug 13 07:05:30.494701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-056bd671f1bc43a9e604d7a5f82ff2ee376e25fdb5d95590f78c13682523b9e1-rootfs.mount: Deactivated successfully.
Aug 13 07:05:30.494927 systemd[1]: var-lib-kubelet-pods-7aeb2336\x2d7c3a\x2d4ae6\x2d8767\x2d81d84e19de2d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5rnfj.mount: Deactivated successfully.
Aug 13 07:05:30.495074 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-056bd671f1bc43a9e604d7a5f82ff2ee376e25fdb5d95590f78c13682523b9e1-shm.mount: Deactivated successfully.
Aug 13 07:05:30.495220 systemd[1]: var-lib-kubelet-pods-a21f3382\x2db073\x2d4f49\x2d8fe5\x2d7677d85787f4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtqp9r.mount: Deactivated successfully.
Aug 13 07:05:30.495366 systemd[1]: var-lib-kubelet-pods-a21f3382\x2db073\x2d4f49\x2d8fe5\x2d7677d85787f4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Aug 13 07:05:30.495517 systemd[1]: var-lib-kubelet-pods-a21f3382\x2db073\x2d4f49\x2d8fe5\x2d7677d85787f4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Aug 13 07:05:30.524649 kubelet[2662]: I0813 07:05:30.524615 2662 scope.go:117] "RemoveContainer" containerID="af0dc31532a2df9d3cada46d18fd58fe186ff02627065d6da97f414de9d1891c"
Aug 13 07:05:30.525835 containerd[1575]: time="2025-08-13T07:05:30.525793717Z" level=info msg="RemoveContainer for \"af0dc31532a2df9d3cada46d18fd58fe186ff02627065d6da97f414de9d1891c\""
Aug 13 07:05:30.539411 containerd[1575]: time="2025-08-13T07:05:30.539354211Z" level=info msg="RemoveContainer for \"af0dc31532a2df9d3cada46d18fd58fe186ff02627065d6da97f414de9d1891c\" returns successfully"
Aug 13 07:05:30.539716 kubelet[2662]: I0813 07:05:30.539673 2662 scope.go:117] "RemoveContainer" containerID="1cc5f2dfb4bb0ba55f5b7fb04527e6c8bd4fae1e5f1bfe814e5f97b34d658b0e"
Aug 13 07:05:30.540594 containerd[1575]: time="2025-08-13T07:05:30.540556946Z" level=info msg="RemoveContainer for \"1cc5f2dfb4bb0ba55f5b7fb04527e6c8bd4fae1e5f1bfe814e5f97b34d658b0e\""
Aug 13 07:05:30.544753 containerd[1575]: time="2025-08-13T07:05:30.544689491Z" level=info msg="RemoveContainer for \"1cc5f2dfb4bb0ba55f5b7fb04527e6c8bd4fae1e5f1bfe814e5f97b34d658b0e\" returns successfully"
Aug 13 07:05:30.545024 kubelet[2662]: I0813 07:05:30.544943 2662 scope.go:117] "RemoveContainer" containerID="a20b968f8bc93d003a5b4c4501ddc5af80274194e64cdeb4132c5545e78053b7"
Aug 13 07:05:30.546508 containerd[1575]: time="2025-08-13T07:05:30.546345128Z" level=info msg="RemoveContainer for \"a20b968f8bc93d003a5b4c4501ddc5af80274194e64cdeb4132c5545e78053b7\""
Aug 13 07:05:30.550331 containerd[1575]: time="2025-08-13T07:05:30.550277092Z" level=info msg="RemoveContainer for \"a20b968f8bc93d003a5b4c4501ddc5af80274194e64cdeb4132c5545e78053b7\" returns successfully"
Aug 13 07:05:30.550486 kubelet[2662]: I0813 07:05:30.550453 2662 scope.go:117] "RemoveContainer" containerID="d3eecb0475d033e674238c03dbeb29fcefc783247af1bfc247cb69d9547f53fd"
Aug 13 07:05:30.552118 containerd[1575]: time="2025-08-13T07:05:30.551925284Z" level=info msg="RemoveContainer for \"d3eecb0475d033e674238c03dbeb29fcefc783247af1bfc247cb69d9547f53fd\""
Aug 13 07:05:30.555386 containerd[1575]: time="2025-08-13T07:05:30.555362497Z" level=info msg="RemoveContainer for \"d3eecb0475d033e674238c03dbeb29fcefc783247af1bfc247cb69d9547f53fd\" returns successfully"
Aug 13 07:05:30.555503 kubelet[2662]: I0813 07:05:30.555486 2662 scope.go:117] "RemoveContainer" containerID="4042ad5066888ba176b40046126bdb0a845f7195b13158ab02dfabc7f384b361"
Aug 13 07:05:30.556530 containerd[1575]: time="2025-08-13T07:05:30.556509206Z" level=info msg="RemoveContainer for \"4042ad5066888ba176b40046126bdb0a845f7195b13158ab02dfabc7f384b361\""
Aug 13 07:05:30.559550 containerd[1575]: time="2025-08-13T07:05:30.559523134Z" level=info msg="RemoveContainer for \"4042ad5066888ba176b40046126bdb0a845f7195b13158ab02dfabc7f384b361\" returns successfully"
Aug 13 07:05:30.559710 kubelet[2662]: I0813 07:05:30.559689 2662 scope.go:117] "RemoveContainer" containerID="af0dc31532a2df9d3cada46d18fd58fe186ff02627065d6da97f414de9d1891c"
Aug 13 07:05:30.559893 containerd[1575]: time="2025-08-13T07:05:30.559861237Z" level=error msg="ContainerStatus for \"af0dc31532a2df9d3cada46d18fd58fe186ff02627065d6da97f414de9d1891c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"af0dc31532a2df9d3cada46d18fd58fe186ff02627065d6da97f414de9d1891c\": not found"
Aug 13 07:05:30.570204 kubelet[2662]: E0813 07:05:30.570175 2662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"af0dc31532a2df9d3cada46d18fd58fe186ff02627065d6da97f414de9d1891c\": not found" containerID="af0dc31532a2df9d3cada46d18fd58fe186ff02627065d6da97f414de9d1891c"
Aug 13 07:05:30.570297 kubelet[2662]: I0813 07:05:30.570216 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"af0dc31532a2df9d3cada46d18fd58fe186ff02627065d6da97f414de9d1891c"} err="failed to get container status \"af0dc31532a2df9d3cada46d18fd58fe186ff02627065d6da97f414de9d1891c\": rpc error: code = NotFound desc = an error occurred when try to find container \"af0dc31532a2df9d3cada46d18fd58fe186ff02627065d6da97f414de9d1891c\": not found"
Aug 13 07:05:30.570337 kubelet[2662]: I0813 07:05:30.570299 2662 scope.go:117] "RemoveContainer" containerID="1cc5f2dfb4bb0ba55f5b7fb04527e6c8bd4fae1e5f1bfe814e5f97b34d658b0e"
Aug 13 07:05:30.570571 containerd[1575]: time="2025-08-13T07:05:30.570523452Z" level=error msg="ContainerStatus for \"1cc5f2dfb4bb0ba55f5b7fb04527e6c8bd4fae1e5f1bfe814e5f97b34d658b0e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1cc5f2dfb4bb0ba55f5b7fb04527e6c8bd4fae1e5f1bfe814e5f97b34d658b0e\": not found"
Aug 13 07:05:30.570640 kubelet[2662]: E0813 07:05:30.570622 2662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1cc5f2dfb4bb0ba55f5b7fb04527e6c8bd4fae1e5f1bfe814e5f97b34d658b0e\": not found" containerID="1cc5f2dfb4bb0ba55f5b7fb04527e6c8bd4fae1e5f1bfe814e5f97b34d658b0e"
Aug 13 07:05:30.570671 kubelet[2662]: I0813 07:05:30.570645 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1cc5f2dfb4bb0ba55f5b7fb04527e6c8bd4fae1e5f1bfe814e5f97b34d658b0e"} err="failed to get container status \"1cc5f2dfb4bb0ba55f5b7fb04527e6c8bd4fae1e5f1bfe814e5f97b34d658b0e\": rpc error: code = NotFound desc = an error occurred when try to find container \"1cc5f2dfb4bb0ba55f5b7fb04527e6c8bd4fae1e5f1bfe814e5f97b34d658b0e\": not found"
Aug 13 07:05:30.570671 kubelet[2662]: I0813 07:05:30.570661 2662 scope.go:117] "RemoveContainer" containerID="a20b968f8bc93d003a5b4c4501ddc5af80274194e64cdeb4132c5545e78053b7"
Aug 13 07:05:30.570824 containerd[1575]: time="2025-08-13T07:05:30.570800629Z" level=error msg="ContainerStatus for \"a20b968f8bc93d003a5b4c4501ddc5af80274194e64cdeb4132c5545e78053b7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a20b968f8bc93d003a5b4c4501ddc5af80274194e64cdeb4132c5545e78053b7\": not found"
Aug 13 07:05:30.570953 kubelet[2662]: E0813 07:05:30.570932 2662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a20b968f8bc93d003a5b4c4501ddc5af80274194e64cdeb4132c5545e78053b7\": not found" containerID="a20b968f8bc93d003a5b4c4501ddc5af80274194e64cdeb4132c5545e78053b7"
Aug 13 07:05:30.570994 kubelet[2662]: I0813 07:05:30.570952 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a20b968f8bc93d003a5b4c4501ddc5af80274194e64cdeb4132c5545e78053b7"} err="failed to get container status \"a20b968f8bc93d003a5b4c4501ddc5af80274194e64cdeb4132c5545e78053b7\": rpc error: code = NotFound desc = an error occurred when try to find container \"a20b968f8bc93d003a5b4c4501ddc5af80274194e64cdeb4132c5545e78053b7\": not found"
Aug 13 07:05:30.570994 kubelet[2662]: I0813 07:05:30.570967 2662 scope.go:117] "RemoveContainer" containerID="d3eecb0475d033e674238c03dbeb29fcefc783247af1bfc247cb69d9547f53fd"
Aug 13 07:05:30.580156 containerd[1575]: time="2025-08-13T07:05:30.580113590Z" level=error msg="ContainerStatus for \"d3eecb0475d033e674238c03dbeb29fcefc783247af1bfc247cb69d9547f53fd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3eecb0475d033e674238c03dbeb29fcefc783247af1bfc247cb69d9547f53fd\": not found"
Aug 13 07:05:30.580279 kubelet[2662]: E0813 07:05:30.580252 2662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d3eecb0475d033e674238c03dbeb29fcefc783247af1bfc247cb69d9547f53fd\": not found" containerID="d3eecb0475d033e674238c03dbeb29fcefc783247af1bfc247cb69d9547f53fd"
Aug 13 07:05:30.580279 kubelet[2662]: I0813 07:05:30.580273 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d3eecb0475d033e674238c03dbeb29fcefc783247af1bfc247cb69d9547f53fd"} err="failed to get container status \"d3eecb0475d033e674238c03dbeb29fcefc783247af1bfc247cb69d9547f53fd\": rpc error: code = NotFound desc = an error occurred when try to find container \"d3eecb0475d033e674238c03dbeb29fcefc783247af1bfc247cb69d9547f53fd\": not found"
Aug 13 07:05:30.580372 kubelet[2662]: I0813 07:05:30.580287 2662 scope.go:117] "RemoveContainer" containerID="4042ad5066888ba176b40046126bdb0a845f7195b13158ab02dfabc7f384b361"
Aug 13 07:05:30.580621 containerd[1575]: time="2025-08-13T07:05:30.580555249Z" level=error msg="ContainerStatus for \"4042ad5066888ba176b40046126bdb0a845f7195b13158ab02dfabc7f384b361\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4042ad5066888ba176b40046126bdb0a845f7195b13158ab02dfabc7f384b361\": not found"
Aug 13 07:05:30.580723 kubelet[2662]: E0813 07:05:30.580702 2662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4042ad5066888ba176b40046126bdb0a845f7195b13158ab02dfabc7f384b361\": not found" containerID="4042ad5066888ba176b40046126bdb0a845f7195b13158ab02dfabc7f384b361"
Aug 13 07:05:30.580756 kubelet[2662]: I0813 07:05:30.580723 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4042ad5066888ba176b40046126bdb0a845f7195b13158ab02dfabc7f384b361"} err="failed to get container status \"4042ad5066888ba176b40046126bdb0a845f7195b13158ab02dfabc7f384b361\": rpc error: code = NotFound desc = an error occurred when try to find container \"4042ad5066888ba176b40046126bdb0a845f7195b13158ab02dfabc7f384b361\": not found"
Aug 13
07:05:30.580756 kubelet[2662]: I0813 07:05:30.580736 2662 scope.go:117] "RemoveContainer" containerID="87843ae7131f083d19c7ba0feed65bc61e9b5cc1c00c224ed6196613c87e08cd" Aug 13 07:05:30.582064 containerd[1575]: time="2025-08-13T07:05:30.582039791Z" level=info msg="RemoveContainer for \"87843ae7131f083d19c7ba0feed65bc61e9b5cc1c00c224ed6196613c87e08cd\"" Aug 13 07:05:30.585743 containerd[1575]: time="2025-08-13T07:05:30.585711410Z" level=info msg="RemoveContainer for \"87843ae7131f083d19c7ba0feed65bc61e9b5cc1c00c224ed6196613c87e08cd\" returns successfully" Aug 13 07:05:30.585951 kubelet[2662]: I0813 07:05:30.585906 2662 scope.go:117] "RemoveContainer" containerID="87843ae7131f083d19c7ba0feed65bc61e9b5cc1c00c224ed6196613c87e08cd" Aug 13 07:05:30.586150 containerd[1575]: time="2025-08-13T07:05:30.586089899Z" level=error msg="ContainerStatus for \"87843ae7131f083d19c7ba0feed65bc61e9b5cc1c00c224ed6196613c87e08cd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"87843ae7131f083d19c7ba0feed65bc61e9b5cc1c00c224ed6196613c87e08cd\": not found" Aug 13 07:05:30.586236 kubelet[2662]: E0813 07:05:30.586212 2662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"87843ae7131f083d19c7ba0feed65bc61e9b5cc1c00c224ed6196613c87e08cd\": not found" containerID="87843ae7131f083d19c7ba0feed65bc61e9b5cc1c00c224ed6196613c87e08cd" Aug 13 07:05:30.586278 kubelet[2662]: I0813 07:05:30.586242 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"87843ae7131f083d19c7ba0feed65bc61e9b5cc1c00c224ed6196613c87e08cd"} err="failed to get container status \"87843ae7131f083d19c7ba0feed65bc61e9b5cc1c00c224ed6196613c87e08cd\": rpc error: code = NotFound desc = an error occurred when try to find container \"87843ae7131f083d19c7ba0feed65bc61e9b5cc1c00c224ed6196613c87e08cd\": not found" Aug 13 07:05:31.316816 kubelet[2662]: 
I0813 07:05:31.316754 2662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aeb2336-7c3a-4ae6-8767-81d84e19de2d" path="/var/lib/kubelet/pods/7aeb2336-7c3a-4ae6-8767-81d84e19de2d/volumes" Aug 13 07:05:31.317486 kubelet[2662]: I0813 07:05:31.317460 2662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a21f3382-b073-4f49-8fe5-7677d85787f4" path="/var/lib/kubelet/pods/a21f3382-b073-4f49-8fe5-7677d85787f4/volumes" Aug 13 07:05:31.417964 sshd[4334]: pam_unix(sshd:session): session closed for user core Aug 13 07:05:31.427163 systemd[1]: Started sshd@25-10.0.0.23:22-10.0.0.1:49224.service - OpenSSH per-connection server daemon (10.0.0.1:49224). Aug 13 07:05:31.427963 systemd[1]: sshd@24-10.0.0.23:22-10.0.0.1:49210.service: Deactivated successfully. Aug 13 07:05:31.431004 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 07:05:31.432766 systemd-logind[1555]: Session 25 logged out. Waiting for processes to exit. Aug 13 07:05:31.434189 systemd-logind[1555]: Removed session 25. Aug 13 07:05:31.458348 sshd[4503]: Accepted publickey for core from 10.0.0.1 port 49224 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:05:31.460034 sshd[4503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:05:31.463859 systemd-logind[1555]: New session 26 of user core. Aug 13 07:05:31.481114 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 13 07:05:31.976529 sshd[4503]: pam_unix(sshd:session): session closed for user core Aug 13 07:05:31.988380 systemd[1]: Started sshd@26-10.0.0.23:22-10.0.0.1:49226.service - OpenSSH per-connection server daemon (10.0.0.1:49226). 
Aug 13 07:05:31.991235 kubelet[2662]: E0813 07:05:31.989295 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a21f3382-b073-4f49-8fe5-7677d85787f4" containerName="mount-cgroup" Aug 13 07:05:31.991235 kubelet[2662]: E0813 07:05:31.989327 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a21f3382-b073-4f49-8fe5-7677d85787f4" containerName="apply-sysctl-overwrites" Aug 13 07:05:31.991235 kubelet[2662]: E0813 07:05:31.989337 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a21f3382-b073-4f49-8fe5-7677d85787f4" containerName="mount-bpf-fs" Aug 13 07:05:31.991235 kubelet[2662]: E0813 07:05:31.989346 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7aeb2336-7c3a-4ae6-8767-81d84e19de2d" containerName="cilium-operator" Aug 13 07:05:31.991235 kubelet[2662]: E0813 07:05:31.989358 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a21f3382-b073-4f49-8fe5-7677d85787f4" containerName="clean-cilium-state" Aug 13 07:05:31.991235 kubelet[2662]: E0813 07:05:31.989366 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a21f3382-b073-4f49-8fe5-7677d85787f4" containerName="cilium-agent" Aug 13 07:05:31.991235 kubelet[2662]: I0813 07:05:31.989410 2662 memory_manager.go:354] "RemoveStaleState removing state" podUID="a21f3382-b073-4f49-8fe5-7677d85787f4" containerName="cilium-agent" Aug 13 07:05:31.991235 kubelet[2662]: I0813 07:05:31.989423 2662 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aeb2336-7c3a-4ae6-8767-81d84e19de2d" containerName="cilium-operator" Aug 13 07:05:31.989777 systemd[1]: sshd@25-10.0.0.23:22-10.0.0.1:49224.service: Deactivated successfully. Aug 13 07:05:31.997767 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 07:05:32.003703 systemd-logind[1555]: Session 26 logged out. Waiting for processes to exit. Aug 13 07:05:32.008645 systemd-logind[1555]: Removed session 26. 
Aug 13 07:05:32.033288 sshd[4517]: Accepted publickey for core from 10.0.0.1 port 49226 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:05:32.035118 sshd[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:05:32.039755 systemd-logind[1555]: New session 27 of user core. Aug 13 07:05:32.053146 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 13 07:05:32.073380 kubelet[2662]: I0813 07:05:32.073331 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/700044ef-b15e-4b90-b232-dafa70f93a59-cilium-run\") pod \"cilium-866gg\" (UID: \"700044ef-b15e-4b90-b232-dafa70f93a59\") " pod="kube-system/cilium-866gg" Aug 13 07:05:32.073488 kubelet[2662]: I0813 07:05:32.073384 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nclsj\" (UniqueName: \"kubernetes.io/projected/700044ef-b15e-4b90-b232-dafa70f93a59-kube-api-access-nclsj\") pod \"cilium-866gg\" (UID: \"700044ef-b15e-4b90-b232-dafa70f93a59\") " pod="kube-system/cilium-866gg" Aug 13 07:05:32.073488 kubelet[2662]: I0813 07:05:32.073411 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/700044ef-b15e-4b90-b232-dafa70f93a59-bpf-maps\") pod \"cilium-866gg\" (UID: \"700044ef-b15e-4b90-b232-dafa70f93a59\") " pod="kube-system/cilium-866gg" Aug 13 07:05:32.073488 kubelet[2662]: I0813 07:05:32.073432 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/700044ef-b15e-4b90-b232-dafa70f93a59-cilium-cgroup\") pod \"cilium-866gg\" (UID: \"700044ef-b15e-4b90-b232-dafa70f93a59\") " pod="kube-system/cilium-866gg" Aug 13 07:05:32.073488 kubelet[2662]: I0813 07:05:32.073453 2662 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/700044ef-b15e-4b90-b232-dafa70f93a59-xtables-lock\") pod \"cilium-866gg\" (UID: \"700044ef-b15e-4b90-b232-dafa70f93a59\") " pod="kube-system/cilium-866gg" Aug 13 07:05:32.073488 kubelet[2662]: I0813 07:05:32.073472 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/700044ef-b15e-4b90-b232-dafa70f93a59-cilium-ipsec-secrets\") pod \"cilium-866gg\" (UID: \"700044ef-b15e-4b90-b232-dafa70f93a59\") " pod="kube-system/cilium-866gg" Aug 13 07:05:32.073686 kubelet[2662]: I0813 07:05:32.073492 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/700044ef-b15e-4b90-b232-dafa70f93a59-host-proc-sys-net\") pod \"cilium-866gg\" (UID: \"700044ef-b15e-4b90-b232-dafa70f93a59\") " pod="kube-system/cilium-866gg" Aug 13 07:05:32.073686 kubelet[2662]: I0813 07:05:32.073516 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/700044ef-b15e-4b90-b232-dafa70f93a59-cni-path\") pod \"cilium-866gg\" (UID: \"700044ef-b15e-4b90-b232-dafa70f93a59\") " pod="kube-system/cilium-866gg" Aug 13 07:05:32.073686 kubelet[2662]: I0813 07:05:32.073536 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/700044ef-b15e-4b90-b232-dafa70f93a59-hostproc\") pod \"cilium-866gg\" (UID: \"700044ef-b15e-4b90-b232-dafa70f93a59\") " pod="kube-system/cilium-866gg" Aug 13 07:05:32.073686 kubelet[2662]: I0813 07:05:32.073555 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/700044ef-b15e-4b90-b232-dafa70f93a59-clustermesh-secrets\") pod \"cilium-866gg\" (UID: \"700044ef-b15e-4b90-b232-dafa70f93a59\") " pod="kube-system/cilium-866gg" Aug 13 07:05:32.073686 kubelet[2662]: I0813 07:05:32.073577 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/700044ef-b15e-4b90-b232-dafa70f93a59-cilium-config-path\") pod \"cilium-866gg\" (UID: \"700044ef-b15e-4b90-b232-dafa70f93a59\") " pod="kube-system/cilium-866gg" Aug 13 07:05:32.073686 kubelet[2662]: I0813 07:05:32.073612 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/700044ef-b15e-4b90-b232-dafa70f93a59-host-proc-sys-kernel\") pod \"cilium-866gg\" (UID: \"700044ef-b15e-4b90-b232-dafa70f93a59\") " pod="kube-system/cilium-866gg" Aug 13 07:05:32.073884 kubelet[2662]: I0813 07:05:32.073643 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/700044ef-b15e-4b90-b232-dafa70f93a59-hubble-tls\") pod \"cilium-866gg\" (UID: \"700044ef-b15e-4b90-b232-dafa70f93a59\") " pod="kube-system/cilium-866gg" Aug 13 07:05:32.073884 kubelet[2662]: I0813 07:05:32.073675 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/700044ef-b15e-4b90-b232-dafa70f93a59-etc-cni-netd\") pod \"cilium-866gg\" (UID: \"700044ef-b15e-4b90-b232-dafa70f93a59\") " pod="kube-system/cilium-866gg" Aug 13 07:05:32.073884 kubelet[2662]: I0813 07:05:32.073693 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/700044ef-b15e-4b90-b232-dafa70f93a59-lib-modules\") pod \"cilium-866gg\" (UID: 
\"700044ef-b15e-4b90-b232-dafa70f93a59\") " pod="kube-system/cilium-866gg" Aug 13 07:05:32.103908 sshd[4517]: pam_unix(sshd:session): session closed for user core Aug 13 07:05:32.112134 systemd[1]: Started sshd@27-10.0.0.23:22-10.0.0.1:49240.service - OpenSSH per-connection server daemon (10.0.0.1:49240). Aug 13 07:05:32.112633 systemd[1]: sshd@26-10.0.0.23:22-10.0.0.1:49226.service: Deactivated successfully. Aug 13 07:05:32.116414 systemd-logind[1555]: Session 27 logged out. Waiting for processes to exit. Aug 13 07:05:32.117160 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 07:05:32.118147 systemd-logind[1555]: Removed session 27. Aug 13 07:05:32.142665 sshd[4526]: Accepted publickey for core from 10.0.0.1 port 49240 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:05:32.144398 sshd[4526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:05:32.148714 systemd-logind[1555]: New session 28 of user core. Aug 13 07:05:32.157147 systemd[1]: Started session-28.scope - Session 28 of User core. Aug 13 07:05:32.303137 kubelet[2662]: E0813 07:05:32.302995 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:05:32.304002 containerd[1575]: time="2025-08-13T07:05:32.303956532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-866gg,Uid:700044ef-b15e-4b90-b232-dafa70f93a59,Namespace:kube-system,Attempt:0,}" Aug 13 07:05:32.332930 containerd[1575]: time="2025-08-13T07:05:32.332745772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:05:32.332930 containerd[1575]: time="2025-08-13T07:05:32.332820875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:05:32.332930 containerd[1575]: time="2025-08-13T07:05:32.332891028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:05:32.333265 containerd[1575]: time="2025-08-13T07:05:32.333034200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:05:32.374075 kubelet[2662]: E0813 07:05:32.373987 2662 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 07:05:32.375502 containerd[1575]: time="2025-08-13T07:05:32.375451329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-866gg,Uid:700044ef-b15e-4b90-b232-dafa70f93a59,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c28f6caa970468090ecfc1d3a070e553d97d2182617bb93e74ba78846067054\"" Aug 13 07:05:32.376587 kubelet[2662]: E0813 07:05:32.376555 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:05:32.378770 containerd[1575]: time="2025-08-13T07:05:32.378716081Z" level=info msg="CreateContainer within sandbox \"1c28f6caa970468090ecfc1d3a070e553d97d2182617bb93e74ba78846067054\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 07:05:32.393117 containerd[1575]: time="2025-08-13T07:05:32.393052565Z" level=info msg="CreateContainer within sandbox \"1c28f6caa970468090ecfc1d3a070e553d97d2182617bb93e74ba78846067054\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f974e00916edca0e155e125df5241919c78b6e833e58419b39c9888823728ac4\"" Aug 13 07:05:32.393598 containerd[1575]: time="2025-08-13T07:05:32.393573224Z" level=info msg="StartContainer for 
\"f974e00916edca0e155e125df5241919c78b6e833e58419b39c9888823728ac4\"" Aug 13 07:05:32.452162 containerd[1575]: time="2025-08-13T07:05:32.452115426Z" level=info msg="StartContainer for \"f974e00916edca0e155e125df5241919c78b6e833e58419b39c9888823728ac4\" returns successfully" Aug 13 07:05:32.498559 containerd[1575]: time="2025-08-13T07:05:32.498471277Z" level=info msg="shim disconnected" id=f974e00916edca0e155e125df5241919c78b6e833e58419b39c9888823728ac4 namespace=k8s.io Aug 13 07:05:32.498559 containerd[1575]: time="2025-08-13T07:05:32.498541741Z" level=warning msg="cleaning up after shim disconnected" id=f974e00916edca0e155e125df5241919c78b6e833e58419b39c9888823728ac4 namespace=k8s.io Aug 13 07:05:32.498559 containerd[1575]: time="2025-08-13T07:05:32.498553974Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:05:32.534196 kubelet[2662]: E0813 07:05:32.534142 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:05:32.537066 containerd[1575]: time="2025-08-13T07:05:32.537015969Z" level=info msg="CreateContainer within sandbox \"1c28f6caa970468090ecfc1d3a070e553d97d2182617bb93e74ba78846067054\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 07:05:32.551078 containerd[1575]: time="2025-08-13T07:05:32.550932376Z" level=info msg="CreateContainer within sandbox \"1c28f6caa970468090ecfc1d3a070e553d97d2182617bb93e74ba78846067054\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"70fcc1ee99c8d6e0479b74f3b4d6da51fb75fc87c75b984fed314dbc67c70ec0\"" Aug 13 07:05:32.551588 containerd[1575]: time="2025-08-13T07:05:32.551501787Z" level=info msg="StartContainer for \"70fcc1ee99c8d6e0479b74f3b4d6da51fb75fc87c75b984fed314dbc67c70ec0\"" Aug 13 07:05:32.619569 containerd[1575]: time="2025-08-13T07:05:32.619507998Z" level=info msg="StartContainer for 
\"70fcc1ee99c8d6e0479b74f3b4d6da51fb75fc87c75b984fed314dbc67c70ec0\" returns successfully" Aug 13 07:05:32.651983 containerd[1575]: time="2025-08-13T07:05:32.651907697Z" level=info msg="shim disconnected" id=70fcc1ee99c8d6e0479b74f3b4d6da51fb75fc87c75b984fed314dbc67c70ec0 namespace=k8s.io Aug 13 07:05:32.651983 containerd[1575]: time="2025-08-13T07:05:32.651973351Z" level=warning msg="cleaning up after shim disconnected" id=70fcc1ee99c8d6e0479b74f3b4d6da51fb75fc87c75b984fed314dbc67c70ec0 namespace=k8s.io Aug 13 07:05:32.651983 containerd[1575]: time="2025-08-13T07:05:32.651982118Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:05:33.537431 kubelet[2662]: E0813 07:05:33.537380 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:05:33.539551 containerd[1575]: time="2025-08-13T07:05:33.539486882Z" level=info msg="CreateContainer within sandbox \"1c28f6caa970468090ecfc1d3a070e553d97d2182617bb93e74ba78846067054\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 07:05:33.630221 containerd[1575]: time="2025-08-13T07:05:33.630148828Z" level=info msg="CreateContainer within sandbox \"1c28f6caa970468090ecfc1d3a070e553d97d2182617bb93e74ba78846067054\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f3d522c6b46194c2dd4ec651c3663bec2acc5dd6347f03e04335aeeddac54b2d\"" Aug 13 07:05:33.630802 containerd[1575]: time="2025-08-13T07:05:33.630773554Z" level=info msg="StartContainer for \"f3d522c6b46194c2dd4ec651c3663bec2acc5dd6347f03e04335aeeddac54b2d\"" Aug 13 07:05:33.698506 containerd[1575]: time="2025-08-13T07:05:33.698457391Z" level=info msg="StartContainer for \"f3d522c6b46194c2dd4ec651c3663bec2acc5dd6347f03e04335aeeddac54b2d\" returns successfully" Aug 13 07:05:33.727252 containerd[1575]: time="2025-08-13T07:05:33.727172608Z" level=info msg="shim disconnected" 
id=f3d522c6b46194c2dd4ec651c3663bec2acc5dd6347f03e04335aeeddac54b2d namespace=k8s.io Aug 13 07:05:33.727252 containerd[1575]: time="2025-08-13T07:05:33.727238041Z" level=warning msg="cleaning up after shim disconnected" id=f3d522c6b46194c2dd4ec651c3663bec2acc5dd6347f03e04335aeeddac54b2d namespace=k8s.io Aug 13 07:05:33.727252 containerd[1575]: time="2025-08-13T07:05:33.727247269Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:05:34.180121 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3d522c6b46194c2dd4ec651c3663bec2acc5dd6347f03e04335aeeddac54b2d-rootfs.mount: Deactivated successfully. Aug 13 07:05:34.541059 kubelet[2662]: E0813 07:05:34.540922 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:05:34.543452 containerd[1575]: time="2025-08-13T07:05:34.543411909Z" level=info msg="CreateContainer within sandbox \"1c28f6caa970468090ecfc1d3a070e553d97d2182617bb93e74ba78846067054\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 07:05:34.780795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3549490191.mount: Deactivated successfully. 
Aug 13 07:05:34.789112 containerd[1575]: time="2025-08-13T07:05:34.789054226Z" level=info msg="CreateContainer within sandbox \"1c28f6caa970468090ecfc1d3a070e553d97d2182617bb93e74ba78846067054\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4ded8672b427ac92bc7ff1f4634cb4aae9147ecbe527cc6fc50dbb1394f282db\"" Aug 13 07:05:34.789817 containerd[1575]: time="2025-08-13T07:05:34.789689733Z" level=info msg="StartContainer for \"4ded8672b427ac92bc7ff1f4634cb4aae9147ecbe527cc6fc50dbb1394f282db\"" Aug 13 07:05:34.849196 containerd[1575]: time="2025-08-13T07:05:34.849141889Z" level=info msg="StartContainer for \"4ded8672b427ac92bc7ff1f4634cb4aae9147ecbe527cc6fc50dbb1394f282db\" returns successfully" Aug 13 07:05:34.878169 containerd[1575]: time="2025-08-13T07:05:34.878066220Z" level=info msg="shim disconnected" id=4ded8672b427ac92bc7ff1f4634cb4aae9147ecbe527cc6fc50dbb1394f282db namespace=k8s.io Aug 13 07:05:34.878169 containerd[1575]: time="2025-08-13T07:05:34.878154968Z" level=warning msg="cleaning up after shim disconnected" id=4ded8672b427ac92bc7ff1f4634cb4aae9147ecbe527cc6fc50dbb1394f282db namespace=k8s.io Aug 13 07:05:34.878169 containerd[1575]: time="2025-08-13T07:05:34.878167972Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:05:35.180031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ded8672b427ac92bc7ff1f4634cb4aae9147ecbe527cc6fc50dbb1394f282db-rootfs.mount: Deactivated successfully. 
Aug 13 07:05:35.544644 kubelet[2662]: E0813 07:05:35.544313 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:05:35.546252 containerd[1575]: time="2025-08-13T07:05:35.546212960Z" level=info msg="CreateContainer within sandbox \"1c28f6caa970468090ecfc1d3a070e553d97d2182617bb93e74ba78846067054\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 07:05:35.606747 containerd[1575]: time="2025-08-13T07:05:35.606691369Z" level=info msg="CreateContainer within sandbox \"1c28f6caa970468090ecfc1d3a070e553d97d2182617bb93e74ba78846067054\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"07c35768fb6261362cc2dca434d49ffec7f715731f0b1d90ed2100b1bf389886\"" Aug 13 07:05:35.607277 containerd[1575]: time="2025-08-13T07:05:35.607235833Z" level=info msg="StartContainer for \"07c35768fb6261362cc2dca434d49ffec7f715731f0b1d90ed2100b1bf389886\"" Aug 13 07:05:35.663879 containerd[1575]: time="2025-08-13T07:05:35.663800962Z" level=info msg="StartContainer for \"07c35768fb6261362cc2dca434d49ffec7f715731f0b1d90ed2100b1bf389886\" returns successfully" Aug 13 07:05:36.137901 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Aug 13 07:05:36.549963 kubelet[2662]: E0813 07:05:36.549749 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:05:36.563788 kubelet[2662]: I0813 07:05:36.563700 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-866gg" podStartSLOduration=5.563663746 podStartE2EDuration="5.563663746s" podCreationTimestamp="2025-08-13 07:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:05:36.562897081 
+0000 UTC m=+89.498370628" watchObservedRunningTime="2025-08-13 07:05:36.563663746 +0000 UTC m=+89.499137303" Aug 13 07:05:38.303833 kubelet[2662]: E0813 07:05:38.303790 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:05:39.423599 systemd-networkd[1248]: lxc_health: Link UP Aug 13 07:05:39.431503 systemd-networkd[1248]: lxc_health: Gained carrier Aug 13 07:05:40.305412 kubelet[2662]: E0813 07:05:40.305353 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:05:40.560247 kubelet[2662]: E0813 07:05:40.560172 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:05:40.868117 systemd-networkd[1248]: lxc_health: Gained IPv6LL Aug 13 07:05:41.561750 kubelet[2662]: E0813 07:05:41.561674 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:05:45.329152 sshd[4526]: pam_unix(sshd:session): session closed for user core Aug 13 07:05:45.334978 systemd[1]: sshd@27-10.0.0.23:22-10.0.0.1:49240.service: Deactivated successfully. Aug 13 07:05:45.338522 systemd[1]: session-28.scope: Deactivated successfully. Aug 13 07:05:45.339278 systemd-logind[1555]: Session 28 logged out. Waiting for processes to exit. Aug 13 07:05:45.340343 systemd-logind[1555]: Removed session 28.