Jul 11 05:22:36.827708 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jul 11 03:36:05 -00 2025
Jul 11 05:22:36.827754 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dfe1af008de84ad21c9c6e2b52b45ca0aecff9e5872ea6ea8c4ddf6ebe77d5c1
Jul 11 05:22:36.827764 kernel: BIOS-provided physical RAM map:
Jul 11 05:22:36.827771 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 11 05:22:36.827777 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 11 05:22:36.827784 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 11 05:22:36.827792 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 11 05:22:36.827801 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 11 05:22:36.827810 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 11 05:22:36.827817 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 11 05:22:36.827824 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 11 05:22:36.827830 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 11 05:22:36.827837 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 11 05:22:36.827850 kernel: NX (Execute Disable) protection: active
Jul 11 05:22:36.827861 kernel: APIC: Static calls initialized
Jul 11 05:22:36.827868 kernel: SMBIOS 2.8 present.
Jul 11 05:22:36.827890 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 11 05:22:36.827897 kernel: DMI: Memory slots populated: 1/1
Jul 11 05:22:36.827904 kernel: Hypervisor detected: KVM
Jul 11 05:22:36.827912 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 11 05:22:36.827919 kernel: kvm-clock: using sched offset of 4524650524 cycles
Jul 11 05:22:36.827926 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 11 05:22:36.827934 kernel: tsc: Detected 2794.748 MHz processor
Jul 11 05:22:36.827944 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 11 05:22:36.827952 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 11 05:22:36.827965 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 11 05:22:36.827973 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 11 05:22:36.827980 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 11 05:22:36.827988 kernel: Using GB pages for direct mapping
Jul 11 05:22:36.827995 kernel: ACPI: Early table checksum verification disabled
Jul 11 05:22:36.828002 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 11 05:22:36.828010 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 05:22:36.828021 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 05:22:36.828028 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 05:22:36.828035 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 11 05:22:36.828043 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 05:22:36.828050 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 05:22:36.828058 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 05:22:36.828065 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 05:22:36.828073 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 11 05:22:36.828085 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 11 05:22:36.828093 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 11 05:22:36.828101 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 11 05:22:36.828108 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 11 05:22:36.828116 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 11 05:22:36.828123 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 11 05:22:36.828133 kernel: No NUMA configuration found
Jul 11 05:22:36.828141 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 11 05:22:36.828148 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jul 11 05:22:36.828156 kernel: Zone ranges:
Jul 11 05:22:36.828164 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 11 05:22:36.828171 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 11 05:22:36.828179 kernel: Normal empty
Jul 11 05:22:36.828186 kernel: Device empty
Jul 11 05:22:36.828194 kernel: Movable zone start for each node
Jul 11 05:22:36.828204 kernel: Early memory node ranges
Jul 11 05:22:36.828212 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 11 05:22:36.828219 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 11 05:22:36.828227 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 11 05:22:36.828234 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 11 05:22:36.828242 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 11 05:22:36.828249 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 11 05:22:36.828257 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 11 05:22:36.828268 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 11 05:22:36.828275 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 11 05:22:36.828285 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 11 05:22:36.828293 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 11 05:22:36.828303 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 11 05:22:36.828310 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 11 05:22:36.828318 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 11 05:22:36.828325 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 11 05:22:36.828333 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 11 05:22:36.828340 kernel: TSC deadline timer available
Jul 11 05:22:36.828348 kernel: CPU topo: Max. logical packages: 1
Jul 11 05:22:36.828358 kernel: CPU topo: Max. logical dies: 1
Jul 11 05:22:36.828365 kernel: CPU topo: Max. dies per package: 1
Jul 11 05:22:36.828373 kernel: CPU topo: Max. threads per core: 1
Jul 11 05:22:36.828380 kernel: CPU topo: Num. cores per package: 4
Jul 11 05:22:36.828388 kernel: CPU topo: Num. threads per package: 4
Jul 11 05:22:36.828395 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 11 05:22:36.828403 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 11 05:22:36.828410 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 11 05:22:36.828418 kernel: kvm-guest: setup PV sched yield
Jul 11 05:22:36.828428 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 11 05:22:36.828435 kernel: Booting paravirtualized kernel on KVM
Jul 11 05:22:36.828443 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 11 05:22:36.828451 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 11 05:22:36.828459 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 11 05:22:36.828466 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 11 05:22:36.828474 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 11 05:22:36.828481 kernel: kvm-guest: PV spinlocks enabled
Jul 11 05:22:36.828488 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 11 05:22:36.828500 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dfe1af008de84ad21c9c6e2b52b45ca0aecff9e5872ea6ea8c4ddf6ebe77d5c1
Jul 11 05:22:36.828508 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 11 05:22:36.828515 kernel: random: crng init done
Jul 11 05:22:36.828523 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 11 05:22:36.828531 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 11 05:22:36.828538 kernel: Fallback order for Node 0: 0
Jul 11 05:22:36.828546 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jul 11 05:22:36.828553 kernel: Policy zone: DMA32
Jul 11 05:22:36.828561 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 11 05:22:36.828571 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 11 05:22:36.828578 kernel: ftrace: allocating 40097 entries in 157 pages
Jul 11 05:22:36.828586 kernel: ftrace: allocated 157 pages with 5 groups
Jul 11 05:22:36.828593 kernel: Dynamic Preempt: voluntary
Jul 11 05:22:36.828601 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 11 05:22:36.828609 kernel: rcu: RCU event tracing is enabled.
Jul 11 05:22:36.828617 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 11 05:22:36.828625 kernel: Trampoline variant of Tasks RCU enabled.
Jul 11 05:22:36.828635 kernel: Rude variant of Tasks RCU enabled.
Jul 11 05:22:36.828644 kernel: Tracing variant of Tasks RCU enabled.
Jul 11 05:22:36.828652 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 11 05:22:36.828660 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 11 05:22:36.828667 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 05:22:36.828675 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 05:22:36.828683 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 05:22:36.828691 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 11 05:22:36.828698 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 11 05:22:36.828715 kernel: Console: colour VGA+ 80x25
Jul 11 05:22:36.828723 kernel: printk: legacy console [ttyS0] enabled
Jul 11 05:22:36.828731 kernel: ACPI: Core revision 20240827
Jul 11 05:22:36.828739 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 11 05:22:36.828749 kernel: APIC: Switch to symmetric I/O mode setup
Jul 11 05:22:36.828757 kernel: x2apic enabled
Jul 11 05:22:36.828765 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 11 05:22:36.828776 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 11 05:22:36.828784 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 11 05:22:36.828794 kernel: kvm-guest: setup PV IPIs
Jul 11 05:22:36.828802 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 11 05:22:36.828810 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jul 11 05:22:36.828818 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 11 05:22:36.828826 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 11 05:22:36.828834 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 11 05:22:36.828849 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 11 05:22:36.828857 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 11 05:22:36.828868 kernel: Spectre V2 : Mitigation: Retpolines
Jul 11 05:22:36.828889 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 11 05:22:36.828897 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 11 05:22:36.828905 kernel: RETBleed: Mitigation: untrained return thunk
Jul 11 05:22:36.828913 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 11 05:22:36.828921 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 11 05:22:36.828929 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 11 05:22:36.828938 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 11 05:22:36.828949 kernel: x86/bugs: return thunk changed
Jul 11 05:22:36.828956 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 11 05:22:36.828964 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 11 05:22:36.828972 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 11 05:22:36.828980 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 11 05:22:36.828988 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 11 05:22:36.828996 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 11 05:22:36.829004 kernel: Freeing SMP alternatives memory: 32K
Jul 11 05:22:36.829012 kernel: pid_max: default: 32768 minimum: 301
Jul 11 05:22:36.829022 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 11 05:22:36.829030 kernel: landlock: Up and running.
Jul 11 05:22:36.829038 kernel: SELinux: Initializing.
Jul 11 05:22:36.829046 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 05:22:36.829056 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 05:22:36.829065 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 11 05:22:36.829072 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 11 05:22:36.829080 kernel: ... version: 0
Jul 11 05:22:36.829088 kernel: ... bit width: 48
Jul 11 05:22:36.829099 kernel: ... generic registers: 6
Jul 11 05:22:36.829106 kernel: ... value mask: 0000ffffffffffff
Jul 11 05:22:36.829114 kernel: ... max period: 00007fffffffffff
Jul 11 05:22:36.829122 kernel: ... fixed-purpose events: 0
Jul 11 05:22:36.829130 kernel: ... event mask: 000000000000003f
Jul 11 05:22:36.829138 kernel: signal: max sigframe size: 1776
Jul 11 05:22:36.829146 kernel: rcu: Hierarchical SRCU implementation.
Jul 11 05:22:36.829154 kernel: rcu: Max phase no-delay instances is 400.
Jul 11 05:22:36.829162 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 11 05:22:36.829170 kernel: smp: Bringing up secondary CPUs ...
Jul 11 05:22:36.829180 kernel: smpboot: x86: Booting SMP configuration:
Jul 11 05:22:36.829188 kernel: .... node #0, CPUs: #1 #2 #3
Jul 11 05:22:36.829196 kernel: smp: Brought up 1 node, 4 CPUs
Jul 11 05:22:36.829204 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 11 05:22:36.829212 kernel: Memory: 2428908K/2571752K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54620K init, 2348K bss, 136904K reserved, 0K cma-reserved)
Jul 11 05:22:36.829220 kernel: devtmpfs: initialized
Jul 11 05:22:36.829228 kernel: x86/mm: Memory block size: 128MB
Jul 11 05:22:36.829236 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 11 05:22:36.829244 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 11 05:22:36.829254 kernel: pinctrl core: initialized pinctrl subsystem
Jul 11 05:22:36.829262 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 11 05:22:36.829272 kernel: audit: initializing netlink subsys (disabled)
Jul 11 05:22:36.829280 kernel: audit: type=2000 audit(1752211354.257:1): state=initialized audit_enabled=0 res=1
Jul 11 05:22:36.829288 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 11 05:22:36.829296 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 11 05:22:36.829304 kernel: cpuidle: using governor menu
Jul 11 05:22:36.829312 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 11 05:22:36.829320 kernel: dca service started, version 1.12.1
Jul 11 05:22:36.829330 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jul 11 05:22:36.829338 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 11 05:22:36.829346 kernel: PCI: Using configuration type 1 for base access
Jul 11 05:22:36.829354 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 11 05:22:36.829362 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 11 05:22:36.829370 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 11 05:22:36.829378 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 11 05:22:36.829386 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 11 05:22:36.829396 kernel: ACPI: Added _OSI(Module Device)
Jul 11 05:22:36.829404 kernel: ACPI: Added _OSI(Processor Device)
Jul 11 05:22:36.829412 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 11 05:22:36.829420 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 11 05:22:36.829428 kernel: ACPI: Interpreter enabled
Jul 11 05:22:36.829436 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 11 05:22:36.829444 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 11 05:22:36.829452 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 11 05:22:36.829459 kernel: PCI: Using E820 reservations for host bridge windows
Jul 11 05:22:36.829467 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 11 05:22:36.829478 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 11 05:22:36.829803 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 11 05:22:36.829967 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 11 05:22:36.830092 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 11 05:22:36.830103 kernel: PCI host bridge to bus 0000:00
Jul 11 05:22:36.830237 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 11 05:22:36.830357 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 11 05:22:36.830468 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 11 05:22:36.830581 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 11 05:22:36.830692 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 11 05:22:36.830803 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 11 05:22:36.830952 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 11 05:22:36.831156 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 11 05:22:36.831308 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 11 05:22:36.831433 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jul 11 05:22:36.831555 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jul 11 05:22:36.831676 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jul 11 05:22:36.831797 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 11 05:22:36.832001 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 11 05:22:36.832135 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jul 11 05:22:36.832258 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jul 11 05:22:36.832381 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 11 05:22:36.832522 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 11 05:22:36.832657 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jul 11 05:22:36.832782 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jul 11 05:22:36.832938 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 11 05:22:36.833083 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 11 05:22:36.833224 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jul 11 05:22:36.833349 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jul 11 05:22:36.833470 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 11 05:22:36.833591 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jul 11 05:22:36.833731 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 11 05:22:36.833865 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 11 05:22:36.834048 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 11 05:22:36.834172 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jul 11 05:22:36.834293 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jul 11 05:22:36.834436 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 11 05:22:36.834557 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jul 11 05:22:36.834568 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 11 05:22:36.834576 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 11 05:22:36.834589 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 11 05:22:36.834597 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 11 05:22:36.834605 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 11 05:22:36.834612 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 11 05:22:36.834621 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 11 05:22:36.834628 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 11 05:22:36.834636 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 11 05:22:36.834644 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 11 05:22:36.834652 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 11 05:22:36.834662 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 11 05:22:36.834670 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 11 05:22:36.834678 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 11 05:22:36.834686 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 11 05:22:36.834694 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 11 05:22:36.834702 kernel: iommu: Default domain type: Translated
Jul 11 05:22:36.834710 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 11 05:22:36.834718 kernel: PCI: Using ACPI for IRQ routing
Jul 11 05:22:36.834726 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 11 05:22:36.834735 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 11 05:22:36.834743 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 11 05:22:36.834875 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 11 05:22:36.835019 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 11 05:22:36.835140 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 11 05:22:36.835151 kernel: vgaarb: loaded
Jul 11 05:22:36.835159 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 11 05:22:36.835167 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 11 05:22:36.835178 kernel: clocksource: Switched to clocksource kvm-clock
Jul 11 05:22:36.835186 kernel: VFS: Disk quotas dquot_6.6.0
Jul 11 05:22:36.835195 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 11 05:22:36.835203 kernel: pnp: PnP ACPI init
Jul 11 05:22:36.835354 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 11 05:22:36.835367 kernel: pnp: PnP ACPI: found 6 devices
Jul 11 05:22:36.835375 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 11 05:22:36.835383 kernel: NET: Registered PF_INET protocol family
Jul 11 05:22:36.835394 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 11 05:22:36.835402 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 11 05:22:36.835410 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 11 05:22:36.835418 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 11 05:22:36.835426 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 11 05:22:36.835434 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 11 05:22:36.835442 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 05:22:36.835450 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 05:22:36.835458 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 11 05:22:36.835468 kernel: NET: Registered PF_XDP protocol family
Jul 11 05:22:36.835582 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 11 05:22:36.835694 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 11 05:22:36.835805 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 11 05:22:36.835959 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 11 05:22:36.836074 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 11 05:22:36.836184 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 11 05:22:36.836195 kernel: PCI: CLS 0 bytes, default 64
Jul 11 05:22:36.836207 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jul 11 05:22:36.836215 kernel: Initialise system trusted keyrings
Jul 11 05:22:36.836223 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 11 05:22:36.836231 kernel: Key type asymmetric registered
Jul 11 05:22:36.836239 kernel: Asymmetric key parser 'x509' registered
Jul 11 05:22:36.836247 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 11 05:22:36.836255 kernel: io scheduler mq-deadline registered
Jul 11 05:22:36.836263 kernel: io scheduler kyber registered
Jul 11 05:22:36.836271 kernel: io scheduler bfq registered
Jul 11 05:22:36.836279 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 11 05:22:36.836289 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 11 05:22:36.836297 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 11 05:22:36.836305 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 11 05:22:36.836313 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 11 05:22:36.836321 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 11 05:22:36.836329 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 11 05:22:36.836337 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 11 05:22:36.836345 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 11 05:22:36.836482 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 11 05:22:36.836497 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 11 05:22:36.836612 kernel: rtc_cmos 00:04: registered as rtc0
Jul 11 05:22:36.836731 kernel: rtc_cmos 00:04: setting system clock to 2025-07-11T05:22:36 UTC (1752211356)
Jul 11 05:22:36.836857 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 11 05:22:36.836868 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 11 05:22:36.836893 kernel: NET: Registered PF_INET6 protocol family
Jul 11 05:22:36.836901 kernel: Segment Routing with IPv6
Jul 11 05:22:36.836913 kernel: In-situ OAM (IOAM) with IPv6
Jul 11 05:22:36.836921 kernel: NET: Registered PF_PACKET protocol family
Jul 11 05:22:36.836929 kernel: Key type dns_resolver registered
Jul 11 05:22:36.836937 kernel: IPI shorthand broadcast: enabled
Jul 11 05:22:36.836945 kernel: sched_clock: Marking stable (3227003363, 110892666)->(3355215808, -17319779)
Jul 11 05:22:36.836953 kernel: registered taskstats version 1
Jul 11 05:22:36.836961 kernel: Loading compiled-in X.509 certificates
Jul 11 05:22:36.836969 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 9703a4b3d6547675037b9597aa24472a5380cc2e'
Jul 11 05:22:36.836977 kernel: Demotion targets for Node 0: null
Jul 11 05:22:36.836987 kernel: Key type .fscrypt registered
Jul 11 05:22:36.836994 kernel: Key type fscrypt-provisioning registered
Jul 11 05:22:36.837003 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 11 05:22:36.837011 kernel: ima: Allocated hash algorithm: sha1
Jul 11 05:22:36.837018 kernel: ima: No architecture policies found
Jul 11 05:22:36.837026 kernel: clk: Disabling unused clocks
Jul 11 05:22:36.837034 kernel: Warning: unable to open an initial console.
Jul 11 05:22:36.837042 kernel: Freeing unused kernel image (initmem) memory: 54620K
Jul 11 05:22:36.837050 kernel: Write protecting the kernel read-only data: 24576k
Jul 11 05:22:36.837060 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 11 05:22:36.837068 kernel: Run /init as init process
Jul 11 05:22:36.837076 kernel: with arguments:
Jul 11 05:22:36.837096 kernel: /init
Jul 11 05:22:36.837113 kernel: with environment:
Jul 11 05:22:36.837131 kernel: HOME=/
Jul 11 05:22:36.837139 kernel: TERM=linux
Jul 11 05:22:36.837146 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 11 05:22:36.837158 systemd[1]: Successfully made /usr/ read-only.
Jul 11 05:22:36.837173 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 11 05:22:36.837194 systemd[1]: Detected virtualization kvm.
Jul 11 05:22:36.837203 systemd[1]: Detected architecture x86-64.
Jul 11 05:22:36.837211 systemd[1]: Running in initrd.
Jul 11 05:22:36.837219 systemd[1]: No hostname configured, using default hostname.
Jul 11 05:22:36.837230 systemd[1]: Hostname set to .
Jul 11 05:22:36.837239 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 05:22:36.837247 systemd[1]: Queued start job for default target initrd.target.
Jul 11 05:22:36.837256 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 05:22:36.837264 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 05:22:36.837273 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 11 05:22:36.837282 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 05:22:36.837291 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 11 05:22:36.837302 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 11 05:22:36.837312 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 11 05:22:36.837321 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 11 05:22:36.837330 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 05:22:36.837338 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 05:22:36.837347 systemd[1]: Reached target paths.target - Path Units.
Jul 11 05:22:36.837355 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 05:22:36.837366 systemd[1]: Reached target swap.target - Swaps.
Jul 11 05:22:36.837375 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 05:22:36.837383 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 05:22:36.837392 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 05:22:36.837400 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 11 05:22:36.837409 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 11 05:22:36.837417 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 11 05:22:36.837426 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 11 05:22:36.837435 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 05:22:36.837445 systemd[1]: Reached target sockets.target - Socket Units. Jul 11 05:22:36.837454 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 11 05:22:36.837462 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 11 05:22:36.837471 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 11 05:22:36.837480 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 11 05:22:36.837492 systemd[1]: Starting systemd-fsck-usr.service... Jul 11 05:22:36.837501 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 11 05:22:36.837509 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 11 05:22:36.837518 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 05:22:36.837527 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 11 05:22:36.837536 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 05:22:36.837547 systemd[1]: Finished systemd-fsck-usr.service. Jul 11 05:22:36.837556 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 11 05:22:36.837640 systemd-journald[220]: Collecting audit messages is disabled. Jul 11 05:22:36.837663 systemd-journald[220]: Journal started Jul 11 05:22:36.837684 systemd-journald[220]: Runtime Journal (/run/log/journal/b08593ecc3fd47f8a78462028a8c6e1f) is 6M, max 48.6M, 42.5M free. 
Jul 11 05:22:36.826598 systemd-modules-load[221]: Inserted module 'overlay' Jul 11 05:22:36.868297 systemd[1]: Started systemd-journald.service - Journal Service. Jul 11 05:22:36.868322 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 11 05:22:36.868343 kernel: Bridge firewalling registered Jul 11 05:22:36.855957 systemd-modules-load[221]: Inserted module 'br_netfilter' Jul 11 05:22:36.870384 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 11 05:22:36.887012 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 05:22:36.890218 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 11 05:22:36.896278 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 11 05:22:36.899626 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 05:22:36.905158 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 11 05:22:36.908997 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 11 05:22:36.919048 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 11 05:22:36.919674 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 05:22:36.921650 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 05:22:36.925624 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 11 05:22:36.930775 systemd-tmpfiles[245]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 11 05:22:36.943060 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 11 05:22:36.945361 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 11 05:22:36.979454 dracut-cmdline[259]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dfe1af008de84ad21c9c6e2b52b45ca0aecff9e5872ea6ea8c4ddf6ebe77d5c1 Jul 11 05:22:37.003532 systemd-resolved[263]: Positive Trust Anchors: Jul 11 05:22:37.003551 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 05:22:37.003586 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 11 05:22:37.006943 systemd-resolved[263]: Defaulting to hostname 'linux'. Jul 11 05:22:37.008352 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 11 05:22:37.013270 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 11 05:22:37.096929 kernel: SCSI subsystem initialized Jul 11 05:22:37.105915 kernel: Loading iSCSI transport class v2.0-870. 
Jul 11 05:22:37.116918 kernel: iscsi: registered transport (tcp) Jul 11 05:22:37.137914 kernel: iscsi: registered transport (qla4xxx) Jul 11 05:22:37.137948 kernel: QLogic iSCSI HBA Driver Jul 11 05:22:37.160018 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 11 05:22:37.183733 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 11 05:22:37.185359 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 11 05:22:37.243210 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 11 05:22:37.246286 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 11 05:22:37.308973 kernel: raid6: avx2x4 gen() 23112 MB/s Jul 11 05:22:37.325931 kernel: raid6: avx2x2 gen() 25882 MB/s Jul 11 05:22:37.342953 kernel: raid6: avx2x1 gen() 23550 MB/s Jul 11 05:22:37.343018 kernel: raid6: using algorithm avx2x2 gen() 25882 MB/s Jul 11 05:22:37.361144 kernel: raid6: .... xor() 17986 MB/s, rmw enabled Jul 11 05:22:37.361211 kernel: raid6: using avx2x2 recovery algorithm Jul 11 05:22:37.398926 kernel: xor: automatically using best checksumming function avx Jul 11 05:22:37.576922 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 11 05:22:37.585765 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 11 05:22:37.588386 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 05:22:37.620287 systemd-udevd[473]: Using default interface naming scheme 'v255'. Jul 11 05:22:37.626412 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 05:22:37.627611 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 11 05:22:37.652196 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation Jul 11 05:22:37.684869 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 11 05:22:37.686687 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 11 05:22:37.762140 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 05:22:37.768007 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 11 05:22:37.802922 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 11 05:22:37.805929 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 11 05:22:37.812953 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 11 05:22:37.813001 kernel: GPT:9289727 != 19775487 Jul 11 05:22:37.813016 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 11 05:22:37.813029 kernel: GPT:9289727 != 19775487 Jul 11 05:22:37.813042 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 11 05:22:37.813055 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 05:22:37.817904 kernel: cryptd: max_cpu_qlen set to 1000 Jul 11 05:22:37.820915 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 11 05:22:37.830915 kernel: AES CTR mode by8 optimization enabled Jul 11 05:22:37.847419 kernel: libata version 3.00 loaded. Jul 11 05:22:37.846113 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 05:22:37.846234 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 05:22:37.854136 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 11 05:22:37.866990 kernel: ahci 0000:00:1f.2: version 3.0 Jul 11 05:22:37.867194 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 11 05:22:37.867207 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 11 05:22:37.867351 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 11 05:22:37.867489 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 11 05:22:37.867970 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 05:22:37.869621 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 11 05:22:37.878902 kernel: scsi host0: ahci Jul 11 05:22:37.879148 kernel: scsi host1: ahci Jul 11 05:22:37.880938 kernel: scsi host2: ahci Jul 11 05:22:37.881202 kernel: scsi host3: ahci Jul 11 05:22:37.882142 kernel: scsi host4: ahci Jul 11 05:22:37.882909 kernel: scsi host5: ahci Jul 11 05:22:37.883089 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0 Jul 11 05:22:37.884749 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0 Jul 11 05:22:37.884771 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0 Jul 11 05:22:37.886638 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0 Jul 11 05:22:37.886662 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0 Jul 11 05:22:37.888510 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0 Jul 11 05:22:37.895900 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 11 05:22:37.917094 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 11 05:22:37.952044 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 11 05:22:37.962093 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 11 05:22:37.969093 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 11 05:22:37.969503 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 11 05:22:37.970657 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 11 05:22:38.022322 disk-uuid[634]: Primary Header is updated. Jul 11 05:22:38.022322 disk-uuid[634]: Secondary Entries is updated. Jul 11 05:22:38.022322 disk-uuid[634]: Secondary Header is updated. Jul 11 05:22:38.026919 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 05:22:38.031917 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 05:22:38.194935 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 11 05:22:38.202341 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 11 05:22:38.202428 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 11 05:22:38.202440 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 11 05:22:38.203917 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 11 05:22:38.204003 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 11 05:22:38.204905 kernel: ata3.00: applying bridge limits Jul 11 05:22:38.205908 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 11 05:22:38.205935 kernel: ata3.00: configured for UDMA/100 Jul 11 05:22:38.206909 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 11 05:22:38.253444 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 11 05:22:38.253719 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 11 05:22:38.273905 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 11 05:22:38.644695 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Jul 11 05:22:38.645693 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 11 05:22:38.647172 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 05:22:38.647474 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 11 05:22:38.648705 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 11 05:22:38.675561 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 11 05:22:39.032931 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 05:22:39.033489 disk-uuid[635]: The operation has completed successfully. Jul 11 05:22:39.064702 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 11 05:22:39.064845 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 11 05:22:39.098608 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 11 05:22:39.128428 sh[663]: Success Jul 11 05:22:39.150366 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 11 05:22:39.150415 kernel: device-mapper: uevent: version 1.0.3 Jul 11 05:22:39.151492 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 11 05:22:39.160906 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 11 05:22:39.195716 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 11 05:22:39.202073 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 11 05:22:39.216102 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 11 05:22:39.223978 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 11 05:22:39.224079 kernel: BTRFS: device fsid 5947ac9d-360e-47c3-9a17-c6b228910c06 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (675) Jul 11 05:22:39.226823 kernel: BTRFS info (device dm-0): first mount of filesystem 5947ac9d-360e-47c3-9a17-c6b228910c06 Jul 11 05:22:39.226854 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 11 05:22:39.227731 kernel: BTRFS info (device dm-0): using free-space-tree Jul 11 05:22:39.233843 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 11 05:22:39.235003 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 11 05:22:39.236388 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 11 05:22:39.237448 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 11 05:22:39.239684 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 11 05:22:39.277921 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (707) Jul 11 05:22:39.280144 kernel: BTRFS info (device vda6): first mount of filesystem da2de3c6-95dc-4a43-9a95-74c8b7ce9719 Jul 11 05:22:39.280170 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 05:22:39.280202 kernel: BTRFS info (device vda6): using free-space-tree Jul 11 05:22:39.287908 kernel: BTRFS info (device vda6): last unmount of filesystem da2de3c6-95dc-4a43-9a95-74c8b7ce9719 Jul 11 05:22:39.289387 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 11 05:22:39.291955 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jul 11 05:22:39.380699 ignition[749]: Ignition 2.21.0 Jul 11 05:22:39.380714 ignition[749]: Stage: fetch-offline Jul 11 05:22:39.380750 ignition[749]: no configs at "/usr/lib/ignition/base.d" Jul 11 05:22:39.380760 ignition[749]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 05:22:39.380865 ignition[749]: parsed url from cmdline: "" Jul 11 05:22:39.380869 ignition[749]: no config URL provided Jul 11 05:22:39.380874 ignition[749]: reading system config file "/usr/lib/ignition/user.ign" Jul 11 05:22:39.385968 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 11 05:22:39.380899 ignition[749]: no config at "/usr/lib/ignition/user.ign" Jul 11 05:22:39.380922 ignition[749]: op(1): [started] loading QEMU firmware config module Jul 11 05:22:39.390358 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 11 05:22:39.380927 ignition[749]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 11 05:22:39.393666 ignition[749]: op(1): [finished] loading QEMU firmware config module Jul 11 05:22:39.431205 ignition[749]: parsing config with SHA512: 77f382a74e6d75d959fce91c14ada742bd00fc39e7da701e169f3c9e2df08d7d2373d54e8a75db6a92399d2925cb5f5b42ad1fa3cd8691ad9739f31ef7900652 Jul 11 05:22:39.434986 unknown[749]: fetched base config from "system" Jul 11 05:22:39.435000 unknown[749]: fetched user config from "qemu" Jul 11 05:22:39.435399 ignition[749]: fetch-offline: fetch-offline passed Jul 11 05:22:39.435459 ignition[749]: Ignition finished successfully Jul 11 05:22:39.437990 systemd-networkd[852]: lo: Link UP Jul 11 05:22:39.437994 systemd-networkd[852]: lo: Gained carrier Jul 11 05:22:39.439133 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jul 11 05:22:39.439588 systemd-networkd[852]: Enumeration completed Jul 11 05:22:39.440104 systemd-networkd[852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 05:22:39.440108 systemd-networkd[852]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 11 05:22:39.440483 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 11 05:22:39.440609 systemd-networkd[852]: eth0: Link UP Jul 11 05:22:39.440613 systemd-networkd[852]: eth0: Gained carrier Jul 11 05:22:39.440622 systemd-networkd[852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 05:22:39.443132 systemd[1]: Reached target network.target - Network. Jul 11 05:22:39.444037 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 11 05:22:39.444901 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 11 05:22:39.452928 systemd-networkd[852]: eth0: DHCPv4 address 10.0.0.87/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 05:22:39.478839 ignition[856]: Ignition 2.21.0 Jul 11 05:22:39.478854 ignition[856]: Stage: kargs Jul 11 05:22:39.479045 ignition[856]: no configs at "/usr/lib/ignition/base.d" Jul 11 05:22:39.479062 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 05:22:39.479986 ignition[856]: kargs: kargs passed Jul 11 05:22:39.480029 ignition[856]: Ignition finished successfully Jul 11 05:22:39.484764 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 11 05:22:39.486398 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 11 05:22:39.520971 ignition[865]: Ignition 2.21.0 Jul 11 05:22:39.520985 ignition[865]: Stage: disks Jul 11 05:22:39.521138 ignition[865]: no configs at "/usr/lib/ignition/base.d" Jul 11 05:22:39.521151 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 05:22:39.524135 ignition[865]: disks: disks passed Jul 11 05:22:39.524228 ignition[865]: Ignition finished successfully Jul 11 05:22:39.529462 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 11 05:22:39.530049 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 11 05:22:39.530364 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 11 05:22:39.530740 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 11 05:22:39.531319 systemd[1]: Reached target sysinit.target - System Initialization. Jul 11 05:22:39.531693 systemd[1]: Reached target basic.target - Basic System. Jul 11 05:22:39.533258 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 11 05:22:39.566498 systemd-fsck[876]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 11 05:22:39.575391 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 11 05:22:39.577010 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 11 05:22:39.696910 kernel: EXT4-fs (vda9): mounted filesystem 68e263c6-913a-4fa8-894f-6e89b186e148 r/w with ordered data mode. Quota mode: none. Jul 11 05:22:39.698148 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 11 05:22:39.699257 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 11 05:22:39.701470 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 11 05:22:39.704174 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 11 05:22:39.704709 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jul 11 05:22:39.704752 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 11 05:22:39.704788 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 11 05:22:39.720679 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 11 05:22:39.724673 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (884) Jul 11 05:22:39.724701 kernel: BTRFS info (device vda6): first mount of filesystem da2de3c6-95dc-4a43-9a95-74c8b7ce9719 Jul 11 05:22:39.724716 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 05:22:39.724731 kernel: BTRFS info (device vda6): using free-space-tree Jul 11 05:22:39.725064 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 11 05:22:39.730755 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 11 05:22:39.776950 initrd-setup-root[909]: cut: /sysroot/etc/passwd: No such file or directory Jul 11 05:22:39.782330 initrd-setup-root[916]: cut: /sysroot/etc/group: No such file or directory Jul 11 05:22:39.786271 initrd-setup-root[923]: cut: /sysroot/etc/shadow: No such file or directory Jul 11 05:22:39.791200 initrd-setup-root[930]: cut: /sysroot/etc/gshadow: No such file or directory Jul 11 05:22:39.877178 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 11 05:22:39.879361 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 11 05:22:39.881116 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 11 05:22:39.908922 kernel: BTRFS info (device vda6): last unmount of filesystem da2de3c6-95dc-4a43-9a95-74c8b7ce9719 Jul 11 05:22:39.921339 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 11 05:22:39.937829 ignition[999]: INFO : Ignition 2.21.0 Jul 11 05:22:39.937829 ignition[999]: INFO : Stage: mount Jul 11 05:22:39.939761 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 05:22:39.939761 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 05:22:39.943278 ignition[999]: INFO : mount: mount passed Jul 11 05:22:39.944119 ignition[999]: INFO : Ignition finished successfully Jul 11 05:22:39.947748 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 11 05:22:39.949959 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 11 05:22:40.224324 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 11 05:22:40.226254 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 11 05:22:40.259214 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1012) Jul 11 05:22:40.259256 kernel: BTRFS info (device vda6): first mount of filesystem da2de3c6-95dc-4a43-9a95-74c8b7ce9719 Jul 11 05:22:40.259268 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 05:22:40.260916 kernel: BTRFS info (device vda6): using free-space-tree Jul 11 05:22:40.264118 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 11 05:22:40.301188 ignition[1029]: INFO : Ignition 2.21.0 Jul 11 05:22:40.301188 ignition[1029]: INFO : Stage: files Jul 11 05:22:40.303208 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 05:22:40.303208 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 05:22:40.305544 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping Jul 11 05:22:40.306752 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 11 05:22:40.306752 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 11 05:22:40.309742 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 11 05:22:40.309742 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 11 05:22:40.309742 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 11 05:22:40.308943 unknown[1029]: wrote ssh authorized keys file for user: core Jul 11 05:22:40.315166 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 11 05:22:40.315166 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 11 05:22:40.368280 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 11 05:22:40.764124 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 11 05:22:40.764124 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 11 05:22:40.769541 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 11 05:22:41.219155 systemd-networkd[852]: eth0: Gained IPv6LL Jul 11 05:22:41.281294 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 11 05:22:41.588483 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 11 05:22:41.588483 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 11 05:22:41.592449 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 11 05:22:41.592449 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 11 05:22:41.592449 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 11 05:22:41.592449 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 05:22:41.592449 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 05:22:41.592449 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 05:22:41.592449 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 05:22:41.644851 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 05:22:41.647806 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 05:22:41.647806 
ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 11 05:22:41.652761 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 11 05:22:41.652761 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 11 05:22:41.652761 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 11 05:22:42.147997 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 11 05:22:42.818374 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 11 05:22:42.818374 ignition[1029]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 11 05:22:42.822904 ignition[1029]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 05:22:42.825103 ignition[1029]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 05:22:42.825103 ignition[1029]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 11 05:22:42.825103 ignition[1029]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 11 05:22:42.825103 ignition[1029]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 05:22:42.825103 ignition[1029]: INFO : files: op(e): 
op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 05:22:42.825103 ignition[1029]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 11 05:22:42.825103 ignition[1029]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 11 05:22:42.857547 ignition[1029]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 05:22:42.862234 ignition[1029]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 05:22:42.863975 ignition[1029]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 11 05:22:42.863975 ignition[1029]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 11 05:22:42.863975 ignition[1029]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 11 05:22:42.863975 ignition[1029]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 11 05:22:42.863975 ignition[1029]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 11 05:22:42.863975 ignition[1029]: INFO : files: files passed Jul 11 05:22:42.863975 ignition[1029]: INFO : Ignition finished successfully Jul 11 05:22:42.868007 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 11 05:22:42.870360 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 11 05:22:42.876708 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 11 05:22:42.885969 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 11 05:22:42.886139 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jul 11 05:22:42.888983 initrd-setup-root-after-ignition[1058]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 11 05:22:42.892983 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 05:22:42.892983 initrd-setup-root-after-ignition[1060]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 05:22:42.896231 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 05:22:42.898589 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 05:22:42.900648 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 11 05:22:42.903414 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 11 05:22:42.966397 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 11 05:22:42.966546 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 11 05:22:42.968047 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 11 05:22:42.970724 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 11 05:22:42.971406 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 11 05:22:42.972691 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 11 05:22:43.004742 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 05:22:43.007514 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 11 05:22:43.030936 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 11 05:22:43.033577 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 05:22:43.034926 systemd[1]: Stopped target timers.target - Timer Units.
Jul 11 05:22:43.035523 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 11 05:22:43.035677 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 05:22:43.039917 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 11 05:22:43.040384 systemd[1]: Stopped target basic.target - Basic System.
Jul 11 05:22:43.040712 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 11 05:22:43.052576 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 05:22:43.052901 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 11 05:22:43.053368 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 11 05:22:43.053699 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 11 05:22:43.054190 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 05:22:43.054517 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 11 05:22:43.054850 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 11 05:22:43.055324 systemd[1]: Stopped target swap.target - Swaps.
Jul 11 05:22:43.055615 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 11 05:22:43.055763 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 05:22:43.073923 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 11 05:22:43.074554 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 05:22:43.074852 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 11 05:22:43.078739 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 05:22:43.079330 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 11 05:22:43.079480 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 11 05:22:43.084828 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 11 05:22:43.085027 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 05:22:43.085439 systemd[1]: Stopped target paths.target - Path Units.
Jul 11 05:22:43.089628 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 11 05:22:43.094031 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 05:22:43.096926 systemd[1]: Stopped target slices.target - Slice Units.
Jul 11 05:22:43.097272 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 11 05:22:43.098980 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 11 05:22:43.099093 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 05:22:43.099431 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 11 05:22:43.099514 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 05:22:43.102390 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 11 05:22:43.102525 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 05:22:43.104325 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 11 05:22:43.104432 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 11 05:22:43.107259 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 11 05:22:43.113515 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 11 05:22:43.114408 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 11 05:22:43.114565 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 05:22:43.114964 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 11 05:22:43.115086 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 05:22:43.121495 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 11 05:22:43.121622 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 11 05:22:43.143140 ignition[1085]: INFO : Ignition 2.21.0
Jul 11 05:22:43.143140 ignition[1085]: INFO : Stage: umount
Jul 11 05:22:43.144922 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 05:22:43.144922 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 05:22:43.147064 ignition[1085]: INFO : umount: umount passed
Jul 11 05:22:43.147064 ignition[1085]: INFO : Ignition finished successfully
Jul 11 05:22:43.147970 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 11 05:22:43.148143 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 11 05:22:43.148672 systemd[1]: Stopped target network.target - Network.
Jul 11 05:22:43.151118 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 11 05:22:43.151175 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 11 05:22:43.151506 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 11 05:22:43.151553 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 11 05:22:43.151911 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 11 05:22:43.151963 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 11 05:22:43.152409 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 11 05:22:43.152452 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 11 05:22:43.152825 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 11 05:22:43.153266 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 11 05:22:43.165646 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 11 05:22:43.165832 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 11 05:22:43.172215 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 11 05:22:43.173029 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 11 05:22:43.173118 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 05:22:43.182147 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 11 05:22:43.182523 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 11 05:22:43.182643 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 11 05:22:43.187433 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 11 05:22:43.188204 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 11 05:22:43.191002 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 11 05:22:43.191054 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 05:22:43.195950 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 11 05:22:43.196432 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 11 05:22:43.196516 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 05:22:43.197232 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 11 05:22:43.197282 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 11 05:22:43.204110 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 11 05:22:43.204285 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 11 05:22:43.208080 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 05:22:43.212438 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 11 05:22:43.221101 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 11 05:22:43.222855 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 11 05:22:43.227088 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 05:22:43.228321 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 11 05:22:43.228418 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 11 05:22:43.230267 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 11 05:22:43.230346 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 05:22:43.232160 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 11 05:22:43.232226 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 05:22:43.232834 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 11 05:22:43.232907 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 11 05:22:43.233670 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 11 05:22:43.233738 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 05:22:43.235638 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 11 05:22:43.240568 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 11 05:22:43.240670 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 05:22:43.248016 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 11 05:22:43.248075 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 05:22:43.251377 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 11 05:22:43.251431 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 05:22:43.254743 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 11 05:22:43.254794 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 05:22:43.255272 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 05:22:43.255318 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 05:22:43.260822 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 11 05:22:43.273117 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 11 05:22:43.273767 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 11 05:22:43.273897 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 11 05:22:43.276257 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 11 05:22:43.276368 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 11 05:22:43.283434 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 11 05:22:43.283606 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 11 05:22:43.284671 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 11 05:22:43.289701 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 11 05:22:43.315365 systemd[1]: Switching root.
Jul 11 05:22:43.356931 systemd-journald[220]: Journal stopped
Jul 11 05:22:44.614233 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Jul 11 05:22:44.614305 kernel: SELinux: policy capability network_peer_controls=1
Jul 11 05:22:44.614328 kernel: SELinux: policy capability open_perms=1
Jul 11 05:22:44.614340 kernel: SELinux: policy capability extended_socket_class=1
Jul 11 05:22:44.614351 kernel: SELinux: policy capability always_check_network=0
Jul 11 05:22:44.614363 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 11 05:22:44.614375 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 11 05:22:44.614386 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 11 05:22:44.614397 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 11 05:22:44.614410 kernel: SELinux: policy capability userspace_initial_context=0
Jul 11 05:22:44.614428 kernel: audit: type=1403 audit(1752211363.795:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 11 05:22:44.614443 systemd[1]: Successfully loaded SELinux policy in 63.876ms.
Jul 11 05:22:44.614470 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.869ms.
Jul 11 05:22:44.614483 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 11 05:22:44.614496 systemd[1]: Detected virtualization kvm.
Jul 11 05:22:44.614509 systemd[1]: Detected architecture x86-64.
Jul 11 05:22:44.614521 systemd[1]: Detected first boot.
Jul 11 05:22:44.614533 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 05:22:44.614545 zram_generator::config[1131]: No configuration found.
Jul 11 05:22:44.614561 kernel: Guest personality initialized and is inactive
Jul 11 05:22:44.614573 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 11 05:22:44.614585 kernel: Initialized host personality
Jul 11 05:22:44.614607 kernel: NET: Registered PF_VSOCK protocol family
Jul 11 05:22:44.614621 systemd[1]: Populated /etc with preset unit settings.
Jul 11 05:22:44.614634 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 11 05:22:44.614647 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 11 05:22:44.614669 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 11 05:22:44.614683 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 11 05:22:44.614699 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 11 05:22:44.614713 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 11 05:22:44.614725 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 11 05:22:44.614738 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 11 05:22:44.614750 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 11 05:22:44.614763 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 11 05:22:44.614775 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 11 05:22:44.614788 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 11 05:22:44.614803 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 05:22:44.614815 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 05:22:44.614828 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 11 05:22:44.614840 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 11 05:22:44.614853 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 11 05:22:44.614866 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 05:22:44.614897 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 11 05:22:44.614910 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 05:22:44.614925 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 05:22:44.614937 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 11 05:22:44.614955 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 11 05:22:44.614967 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 11 05:22:44.614979 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 11 05:22:44.614991 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 05:22:44.615003 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 05:22:44.615018 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 05:22:44.615030 systemd[1]: Reached target swap.target - Swaps.
Jul 11 05:22:44.615044 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 11 05:22:44.615059 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 11 05:22:44.615075 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 11 05:22:44.615091 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 05:22:44.615106 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 05:22:44.615122 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 05:22:44.615138 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 11 05:22:44.615155 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 11 05:22:44.615172 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 11 05:22:44.615200 systemd[1]: Mounting media.mount - External Media Directory...
Jul 11 05:22:44.615219 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 05:22:44.615237 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 11 05:22:44.615254 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 11 05:22:44.615271 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 11 05:22:44.615288 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 11 05:22:44.615305 systemd[1]: Reached target machines.target - Containers.
Jul 11 05:22:44.615322 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 11 05:22:44.615339 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 05:22:44.615367 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 05:22:44.615384 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 11 05:22:44.615408 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 05:22:44.615426 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 05:22:44.615442 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 05:22:44.615545 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 11 05:22:44.615564 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 05:22:44.615581 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 11 05:22:44.615610 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 11 05:22:44.615628 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 11 05:22:44.615643 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 11 05:22:44.615670 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 11 05:22:44.615689 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 05:22:44.615725 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 05:22:44.615743 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 05:22:44.615759 kernel: fuse: init (API version 7.41)
Jul 11 05:22:44.615774 kernel: loop: module loaded
Jul 11 05:22:44.615801 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 11 05:22:44.615819 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 11 05:22:44.615836 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 11 05:22:44.615852 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 05:22:44.615870 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 11 05:22:44.615923 systemd[1]: Stopped verity-setup.service.
Jul 11 05:22:44.615940 kernel: ACPI: bus type drm_connector registered
Jul 11 05:22:44.615957 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 05:22:44.615973 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 11 05:22:44.615990 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 11 05:22:44.616005 systemd[1]: Mounted media.mount - External Media Directory.
Jul 11 05:22:44.616021 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 11 05:22:44.616041 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 11 05:22:44.616058 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 11 05:22:44.616106 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 05:22:44.616120 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 11 05:22:44.616164 systemd-journald[1195]: Collecting audit messages is disabled.
Jul 11 05:22:44.616196 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 11 05:22:44.616214 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 05:22:44.616228 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 05:22:44.616243 systemd-journald[1195]: Journal started
Jul 11 05:22:44.616269 systemd-journald[1195]: Runtime Journal (/run/log/journal/b08593ecc3fd47f8a78462028a8c6e1f) is 6M, max 48.6M, 42.5M free.
Jul 11 05:22:44.347327 systemd[1]: Queued start job for default target multi-user.target.
Jul 11 05:22:44.360528 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 11 05:22:44.361104 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 11 05:22:44.619907 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 05:22:44.621646 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 05:22:44.621922 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 05:22:44.623510 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 11 05:22:44.625177 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 05:22:44.625402 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 05:22:44.627106 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 11 05:22:44.627334 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 11 05:22:44.628815 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 05:22:44.629176 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 05:22:44.630872 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 05:22:44.632631 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 05:22:44.634498 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 11 05:22:44.636344 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 11 05:22:44.653870 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 11 05:22:44.656682 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 11 05:22:44.661013 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 11 05:22:44.662301 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 11 05:22:44.662335 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 05:22:44.664492 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 11 05:22:44.669417 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 11 05:22:44.671023 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 05:22:44.673439 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 11 05:22:44.675987 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 11 05:22:44.677299 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 05:22:44.679123 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 11 05:22:44.680366 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 05:22:44.682254 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 05:22:44.690044 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 11 05:22:44.694032 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 11 05:22:44.698645 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 05:22:44.700494 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 11 05:22:44.702154 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 11 05:22:44.707362 systemd-journald[1195]: Time spent on flushing to /var/log/journal/b08593ecc3fd47f8a78462028a8c6e1f is 103.663ms for 983 entries.
Jul 11 05:22:44.707362 systemd-journald[1195]: System Journal (/var/log/journal/b08593ecc3fd47f8a78462028a8c6e1f) is 8M, max 195.6M, 187.6M free.
Jul 11 05:22:44.831759 systemd-journald[1195]: Received client request to flush runtime journal.
Jul 11 05:22:44.831833 kernel: loop0: detected capacity change from 0 to 114000
Jul 11 05:22:44.831901 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 11 05:22:44.716191 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 05:22:44.719224 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 11 05:22:44.721789 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 11 05:22:44.728045 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 11 05:22:44.830187 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Jul 11 05:22:44.830202 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Jul 11 05:22:44.833447 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 11 05:22:44.836624 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 05:22:44.840436 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 11 05:22:44.848910 kernel: loop1: detected capacity change from 0 to 224512
Jul 11 05:22:45.003383 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 11 05:22:45.006130 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 05:22:45.034171 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Jul 11 05:22:45.034189 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Jul 11 05:22:45.038507 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 05:22:45.061933 kernel: loop2: detected capacity change from 0 to 146488
Jul 11 05:22:45.092845 kernel: loop3: detected capacity change from 0 to 114000
Jul 11 05:22:45.178939 kernel: loop4: detected capacity change from 0 to 224512
Jul 11 05:22:45.207937 kernel: loop5: detected capacity change from 0 to 146488
Jul 11 05:22:45.220286 (sd-merge)[1275]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 11 05:22:45.221094 (sd-merge)[1275]: Merged extensions into '/usr'.
Jul 11 05:22:45.227454 systemd[1]: Reload requested from client PID 1250 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 11 05:22:45.227473 systemd[1]: Reloading...
Jul 11 05:22:45.310911 zram_generator::config[1304]: No configuration found.
Jul 11 05:22:45.464612 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 05:22:45.532475 ldconfig[1245]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 11 05:22:45.573268 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 11 05:22:45.573482 systemd[1]: Reloading finished in 344 ms.
Jul 11 05:22:45.604700 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 11 05:22:45.606456 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 11 05:22:45.625905 systemd[1]: Starting ensure-sysext.service...
Jul 11 05:22:45.628212 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 05:22:45.654608 systemd[1]: Reload requested from client PID 1339 ('systemctl') (unit ensure-sysext.service)...
Jul 11 05:22:45.654624 systemd[1]: Reloading...
Jul 11 05:22:45.662293 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 11 05:22:45.662334 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 11 05:22:45.662650 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 11 05:22:45.662929 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 11 05:22:45.664009 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 11 05:22:45.664287 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Jul 11 05:22:45.664360 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Jul 11 05:22:45.668793 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 05:22:45.668809 systemd-tmpfiles[1340]: Skipping /boot
Jul 11 05:22:45.681345 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 05:22:45.681523 systemd-tmpfiles[1340]: Skipping /boot
Jul 11 05:22:45.711905 zram_generator::config[1371]: No configuration found.
Jul 11 05:22:45.844131 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 05:22:45.952757 systemd[1]: Reloading finished in 297 ms.
Jul 11 05:22:46.030613 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 11 05:22:46.056171 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 05:22:46.066311 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 11 05:22:46.069360 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 11 05:22:46.072120 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 11 05:22:46.086087 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 05:22:46.089051 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 11 05:22:46.092300 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 11 05:22:46.100629 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 05:22:46.100814 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 05:22:46.106023 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 05:22:46.108923 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 05:22:46.111855 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 05:22:46.115098 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 05:22:46.115223 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 05:22:46.118581 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 05:22:46.125644 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 11 05:22:46.126985 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 05:22:46.128578 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 11 05:22:46.130797 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 05:22:46.131217 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 05:22:46.133772 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 05:22:46.134190 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 05:22:46.136526 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 05:22:46.136781 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 05:22:46.148606 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 05:22:46.149455 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 05:22:46.151408 augenrules[1441]: No rules
Jul 11 05:22:46.152166 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 05:22:46.155029 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 05:22:46.157934 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 05:22:46.166964 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 05:22:46.170184 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 05:22:46.170374 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 05:22:46.173414 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 11 05:22:46.175035 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 05:22:46.180973 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 11 05:22:46.184058 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 11 05:22:46.187762 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 05:22:46.188002 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 05:22:46.191242 systemd-udevd[1427]: Using default interface naming scheme 'v255'.
Jul 11 05:22:46.191567 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 05:22:46.191903 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 05:22:46.194813 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 05:22:46.196331 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 05:22:46.199115 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 05:22:46.199386 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 05:22:46.202162 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 11 05:22:46.209099 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 11 05:22:46.211986 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 11 05:22:46.214384 systemd[1]: Finished ensure-sysext.service.
Jul 11 05:22:46.227496 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 11 05:22:46.232710 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 05:22:46.233057 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 05:22:46.248449 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 11 05:22:46.249812 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 11 05:22:46.250042 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 05:22:46.282227 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 05:22:46.368591 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 11 05:22:46.440594 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 05:22:46.444523 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 11 05:22:46.444914 kernel: mousedev: PS/2 mouse device common for all mice
Jul 11 05:22:46.476911 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 11 05:22:46.483199 systemd-resolved[1410]: Positive Trust Anchors:
Jul 11 05:22:46.483225 systemd-resolved[1410]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 05:22:46.483267 systemd-resolved[1410]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 05:22:46.487901 kernel: ACPI: button: Power Button [PWRF]
Jul 11 05:22:46.491230 systemd-resolved[1410]: Defaulting to hostname 'linux'.
Jul 11 05:22:46.495939 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 11 05:22:46.496249 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 11 05:22:46.499005 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 05:22:46.504943 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 11 05:22:46.508531 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 05:22:46.576543 systemd-networkd[1491]: lo: Link UP
Jul 11 05:22:46.576935 systemd-networkd[1491]: lo: Gained carrier
Jul 11 05:22:46.584270 systemd-networkd[1491]: Enumeration completed
Jul 11 05:22:46.584396 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 05:22:46.586143 systemd[1]: Reached target network.target - Network.
Jul 11 05:22:46.587126 systemd-networkd[1491]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 05:22:46.587135 systemd-networkd[1491]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 05:22:46.588410 systemd-networkd[1491]: eth0: Link UP
Jul 11 05:22:46.590058 systemd-networkd[1491]: eth0: Gained carrier
Jul 11 05:22:46.590771 systemd-networkd[1491]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 05:22:46.594111 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 11 05:22:46.598355 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 11 05:22:46.600484 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 11 05:22:46.608563 systemd[1]: Reached target time-set.target - System Time Set.
Jul 11 05:22:46.625969 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 05:22:46.646967 systemd-networkd[1491]: eth0: DHCPv4 address 10.0.0.87/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 05:22:46.648399 systemd-timesyncd[1468]: Network configuration changed, trying to establish connection.
Jul 11 05:22:47.635650 systemd-timesyncd[1468]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 11 05:22:47.635826 systemd-timesyncd[1468]: Initial clock synchronization to Fri 2025-07-11 05:22:47.635487 UTC.
Jul 11 05:22:47.636013 systemd-resolved[1410]: Clock change detected. Flushing caches.
Jul 11 05:22:47.640274 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 11 05:22:47.660034 kernel: kvm_amd: TSC scaling supported
Jul 11 05:22:47.660132 kernel: kvm_amd: Nested Virtualization enabled
Jul 11 05:22:47.660147 kernel: kvm_amd: Nested Paging enabled
Jul 11 05:22:47.661019 kernel: kvm_amd: LBR virtualization supported
Jul 11 05:22:47.661062 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 11 05:22:47.662049 kernel: kvm_amd: Virtual GIF supported
Jul 11 05:22:47.723770 kernel: EDAC MC: Ver: 3.0.0
Jul 11 05:22:47.753197 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 05:22:47.754695 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 05:22:47.755954 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 11 05:22:47.757247 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 11 05:22:47.758582 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 11 05:22:47.759950 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 11 05:22:47.761407 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 11 05:22:47.762925 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 11 05:22:47.764254 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 11 05:22:47.764307 systemd[1]: Reached target paths.target - Path Units.
Jul 11 05:22:47.765317 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 05:22:47.767430 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 11 05:22:47.770384 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 11 05:22:47.773578 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 11 05:22:47.775010 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 11 05:22:47.776281 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 11 05:22:47.784220 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 11 05:22:47.785751 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 11 05:22:47.787925 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 11 05:22:47.789952 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 05:22:47.790957 systemd[1]: Reached target basic.target - Basic System.
Jul 11 05:22:47.791934 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 11 05:22:47.791964 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 11 05:22:47.793232 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 11 05:22:47.795600 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 11 05:22:47.797821 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 11 05:22:47.801944 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 11 05:22:47.810629 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 11 05:22:47.812000 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 11 05:22:47.812258 jq[1539]: false
Jul 11 05:22:47.813777 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 11 05:22:47.817240 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 11 05:22:47.821428 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 11 05:22:47.825172 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 11 05:22:47.827504 extend-filesystems[1540]: Found /dev/vda6
Jul 11 05:22:47.829789 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Refreshing passwd entry cache
Jul 11 05:22:47.829041 oslogin_cache_refresh[1541]: Refreshing passwd entry cache
Jul 11 05:22:47.832623 extend-filesystems[1540]: Found /dev/vda9
Jul 11 05:22:47.833241 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 11 05:22:47.835562 extend-filesystems[1540]: Checking size of /dev/vda9
Jul 11 05:22:47.839816 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Failure getting users, quitting
Jul 11 05:22:47.839816 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 11 05:22:47.839816 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Refreshing group entry cache
Jul 11 05:22:47.839462 oslogin_cache_refresh[1541]: Failure getting users, quitting
Jul 11 05:22:47.839496 oslogin_cache_refresh[1541]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 11 05:22:47.839558 oslogin_cache_refresh[1541]: Refreshing group entry cache
Jul 11 05:22:47.840054 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 11 05:22:47.842541 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 11 05:22:47.843344 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 11 05:22:47.845187 systemd[1]: Starting update-engine.service - Update Engine...
Jul 11 05:22:47.850358 extend-filesystems[1540]: Resized partition /dev/vda9
Jul 11 05:22:47.852884 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Failure getting groups, quitting
Jul 11 05:22:47.852875 oslogin_cache_refresh[1541]: Failure getting groups, quitting
Jul 11 05:22:47.853815 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 11 05:22:47.853887 extend-filesystems[1562]: resize2fs 1.47.2 (1-Jan-2025)
Jul 11 05:22:47.852895 oslogin_cache_refresh[1541]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 11 05:22:47.854790 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 11 05:22:47.859229 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 11 05:22:47.860993 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 11 05:22:47.861266 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 11 05:22:47.861657 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 11 05:22:47.861981 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 11 05:22:47.863814 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 11 05:22:47.864330 systemd[1]: motdgen.service: Deactivated successfully.
Jul 11 05:22:47.864644 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 11 05:22:47.867362 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 11 05:22:47.867647 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 11 05:22:47.871679 update_engine[1559]: I20250711 05:22:47.871587 1559 main.cc:92] Flatcar Update Engine starting
Jul 11 05:22:47.892915 jq[1563]: true
Jul 11 05:22:47.893278 tar[1567]: linux-amd64/LICENSE
Jul 11 05:22:47.893278 tar[1567]: linux-amd64/helm
Jul 11 05:22:47.911588 jq[1581]: true
Jul 11 05:22:47.912956 (ntainerd)[1574]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 11 05:22:47.923807 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 11 05:22:47.945518 extend-filesystems[1562]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 11 05:22:47.945518 extend-filesystems[1562]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 11 05:22:47.945518 extend-filesystems[1562]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 11 05:22:47.953602 extend-filesystems[1540]: Resized filesystem in /dev/vda9
Jul 11 05:22:47.948373 dbus-daemon[1537]: [system] SELinux support is enabled
Jul 11 05:22:47.947495 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 11 05:22:47.948894 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 11 05:22:47.954405 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 11 05:22:47.964655 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 11 05:22:47.964708 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 11 05:22:47.966253 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 11 05:22:47.966275 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 11 05:22:47.974411 systemd-logind[1554]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 11 05:22:47.974438 systemd-logind[1554]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 11 05:22:47.977158 update_engine[1559]: I20250711 05:22:47.976830 1559 update_check_scheduler.cc:74] Next update check in 4m57s
Jul 11 05:22:47.977103 systemd[1]: Started update-engine.service - Update Engine.
Jul 11 05:22:47.978371 systemd-logind[1554]: New seat seat0.
Jul 11 05:22:47.986789 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 11 05:22:47.988764 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 11 05:22:48.011951 bash[1601]: Updated "/home/core/.ssh/authorized_keys"
Jul 11 05:22:48.051541 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 11 05:22:48.057046 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 11 05:22:48.127871 locksmithd[1598]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 11 05:22:48.158961 sshd_keygen[1576]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 11 05:22:48.220677 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 11 05:22:48.225952 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 11 05:22:48.246089 systemd[1]: issuegen.service: Deactivated successfully.
Jul 11 05:22:48.246468 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 11 05:22:48.249814 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 11 05:22:48.328287 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 11 05:22:48.332986 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 11 05:22:48.335680 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 11 05:22:48.338146 systemd[1]: Reached target getty.target - Login Prompts.
Jul 11 05:22:48.464848 containerd[1574]: time="2025-07-11T05:22:48Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 11 05:22:48.469824 containerd[1574]: time="2025-07-11T05:22:48.469753156Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Jul 11 05:22:48.555420 containerd[1574]: time="2025-07-11T05:22:48.555259501Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="22.823µs"
Jul 11 05:22:48.555420 containerd[1574]: time="2025-07-11T05:22:48.555325585Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 11 05:22:48.555420 containerd[1574]: time="2025-07-11T05:22:48.555357805Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 11 05:22:48.555977 containerd[1574]: time="2025-07-11T05:22:48.555949786Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 11 05:22:48.556124 containerd[1574]: time="2025-07-11T05:22:48.556101049Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 11 05:22:48.556233 containerd[1574]: time="2025-07-11T05:22:48.556214422Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 11 05:22:48.556453 containerd[1574]: time="2025-07-11T05:22:48.556379682Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 11 05:22:48.556453 containerd[1574]: time="2025-07-11T05:22:48.556406642Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 11 05:22:48.556909 containerd[1574]: time="2025-07-11T05:22:48.556868188Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 11 05:22:48.556909 containerd[1574]: time="2025-07-11T05:22:48.556891331Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 11 05:22:48.556909 containerd[1574]: time="2025-07-11T05:22:48.556904085Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 11 05:22:48.556909 containerd[1574]: time="2025-07-11T05:22:48.556914685Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 11 05:22:48.557085 containerd[1574]: time="2025-07-11T05:22:48.557054267Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 11 05:22:48.557446 containerd[1574]: time="2025-07-11T05:22:48.557396238Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 11 05:22:48.557498 containerd[1574]: time="2025-07-11T05:22:48.557454177Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 11 05:22:48.557498 containerd[1574]: time="2025-07-11T05:22:48.557469806Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 11 05:22:48.557548 containerd[1574]: time="2025-07-11T05:22:48.557524308Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 11 05:22:48.557908 containerd[1574]: time="2025-07-11T05:22:48.557878042Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 11 05:22:48.558218 containerd[1574]: time="2025-07-11T05:22:48.558181120Z" level=info msg="metadata content store policy set" policy=shared
Jul 11 05:22:48.567169 containerd[1574]: time="2025-07-11T05:22:48.567077735Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 11 05:22:48.567237 containerd[1574]: time="2025-07-11T05:22:48.567202629Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 11 05:22:48.567237 containerd[1574]: time="2025-07-11T05:22:48.567227816Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 11 05:22:48.567310 containerd[1574]: time="2025-07-11T05:22:48.567285935Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 11 05:22:48.567333 containerd[1574]: time="2025-07-11T05:22:48.567320800Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 11 05:22:48.567367 containerd[1574]: time="2025-07-11T05:22:48.567348502Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 11 05:22:48.567388 containerd[1574]: time="2025-07-11T05:22:48.567375433Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 11 05:22:48.567409 containerd[1574]: time="2025-07-11T05:22:48.567392314Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 11 05:22:48.567449 containerd[1574]: time="2025-07-11T05:22:48.567408274Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 11 05:22:48.567449 containerd[1574]: time="2025-07-11T05:22:48.567421900Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 11 05:22:48.567449 containerd[1574]: time="2025-07-11T05:22:48.567445905Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 11 05:22:48.567505 containerd[1574]: time="2025-07-11T05:22:48.567466363Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 11 05:22:48.567789 containerd[1574]: time="2025-07-11T05:22:48.567719408Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 11 05:22:48.567789 containerd[1574]: time="2025-07-11T05:22:48.567790621Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 11 05:22:48.567954 containerd[1574]: time="2025-07-11T05:22:48.567813084Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 11 05:22:48.567954 containerd[1574]: time="2025-07-11T05:22:48.567829715Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 11 05:22:48.567954 containerd[1574]: time="2025-07-11T05:22:48.567844643Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 11 05:22:48.567954 containerd[1574]: time="2025-07-11T05:22:48.567860553Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 11 05:22:48.567954 containerd[1574]: time="2025-07-11T05:22:48.567877504Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 11 05:22:48.567954 containerd[1574]: time="2025-07-11T05:22:48.567891771Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 11 05:22:48.567954 containerd[1574]: time="2025-07-11T05:22:48.567906859Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 11 05:22:48.567954 containerd[1574]: time="2025-07-11T05:22:48.567922148Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 11 05:22:48.568104 containerd[1574]: time="2025-07-11T05:22:48.567964037Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 11 05:22:48.568150 containerd[1574]: time="2025-07-11T05:22:48.568122384Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 11 05:22:48.568261 containerd[1574]: time="2025-07-11T05:22:48.568229384Z" level=info msg="Start snapshots syncer"
Jul 11 05:22:48.568311 containerd[1574]: time="2025-07-11T05:22:48.568288585Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 11 05:22:48.568751 containerd[1574]: time="2025-07-11T05:22:48.568675240Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 11 05:22:48.568875 containerd[1574]: time="2025-07-11T05:22:48.568788974Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 11 05:22:48.568925 containerd[1574]: time="2025-07-11T05:22:48.568902326Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 11 05:22:48.569098 containerd[1574]: time="2025-07-11T05:22:48.569073898Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 11 05:22:48.569151 containerd[1574]: time="2025-07-11T05:22:48.569105587Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 11 05:22:48.569151 containerd[1574]: time="2025-07-11T05:22:48.569120365Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 11 05:22:48.569151 containerd[1574]: time="2025-07-11T05:22:48.569139281Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 11 05:22:48.569208 containerd[1574]: time="2025-07-11T05:22:48.569160270Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 11 05:22:48.569208 containerd[1574]: time="2025-07-11T05:22:48.569174667Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 11 05:22:48.569208 containerd[1574]: time="2025-07-11T05:22:48.569188002Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 11 05:22:48.569267 containerd[1574]: time="2025-07-11T05:22:48.569219681Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 11 05:22:48.569267 containerd[1574]: time="2025-07-11T05:22:48.569235251Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 11 05:22:48.569267 containerd[1574]: time="2025-07-11T05:22:48.569247914Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 11 05:22:48.569408 containerd[1574]: time="2025-07-11T05:22:48.569282119Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 11 05:22:48.569408 containerd[1574]: time="2025-07-11T05:22:48.569303048Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 11 05:22:48.569408 containerd[1574]: time="2025-07-11T05:22:48.569315311Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 11 05:22:48.569752 containerd[1574]: time="2025-07-11T05:22:48.569356087Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 11 05:22:48.569752 containerd[1574]: time="2025-07-11T05:22:48.569520315Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 11 05:22:48.569752 containerd[1574]: time="2025-07-11T05:22:48.569557866Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 11 05:22:48.569752 containerd[1574]: time="2025-07-11T05:22:48.569577422Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 11 05:22:48.569752 containerd[1574]: time="2025-07-11T05:22:48.569609543Z" level=info msg="runtime interface created"
Jul 11 05:22:48.569752 containerd[1574]: time="2025-07-11T05:22:48.569621285Z" level=info msg="created NRI interface"
Jul 11 05:22:48.569752 containerd[1574]: time="2025-07-11T05:22:48.569632846Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 11 05:22:48.569752 containerd[1574]: time="2025-07-11T05:22:48.569653956Z" level=info msg="Connect containerd service"
Jul 11 05:22:48.569752 containerd[1574]: time="2025-07-11T05:22:48.569708067Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 11 05:22:48.571105 
containerd[1574]: time="2025-07-11T05:22:48.571080942Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 05:22:48.665898 tar[1567]: linux-amd64/README.md Jul 11 05:22:48.707288 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 11 05:22:48.857626 containerd[1574]: time="2025-07-11T05:22:48.857546437Z" level=info msg="Start subscribing containerd event" Jul 11 05:22:48.857824 containerd[1574]: time="2025-07-11T05:22:48.857638159Z" level=info msg="Start recovering state" Jul 11 05:22:48.857865 containerd[1574]: time="2025-07-11T05:22:48.857808769Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 05:22:48.857933 containerd[1574]: time="2025-07-11T05:22:48.857825180Z" level=info msg="Start event monitor" Jul 11 05:22:48.857970 containerd[1574]: time="2025-07-11T05:22:48.857936428Z" level=info msg="Start cni network conf syncer for default" Jul 11 05:22:48.857970 containerd[1574]: time="2025-07-11T05:22:48.857944774Z" level=info msg="Start streaming server" Jul 11 05:22:48.857970 containerd[1574]: time="2025-07-11T05:22:48.857956636Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 11 05:22:48.857970 containerd[1574]: time="2025-07-11T05:22:48.857914016Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 11 05:22:48.858098 containerd[1574]: time="2025-07-11T05:22:48.857965122Z" level=info msg="runtime interface starting up..." Jul 11 05:22:48.858098 containerd[1574]: time="2025-07-11T05:22:48.858030314Z" level=info msg="starting plugins..." 
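The `cni config load failed: no network config found in /etc/cni/net.d` error above is the expected state before a CNI plugin has been installed: containerd's CRI plugin looks for a network configuration in `/etc/cni/net.d` (the `confDir` shown in the config dump earlier in this log). As a hedged illustration only — the file name `10-mynet.conflist`, the network name, and the subnet below are all assumptions, and in a real kubeadm cluster a network addon normally installs this file — a minimal bridge conflist that would satisfy that lookup could resemble:

```json
{
  "cniVersion": "1.0.0",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16"
      }
    }
  ]
}
```

Until such a file exists, the CRI plugin keeps logging this error and the cni conf syncer (started later in this log) waits for one to appear.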
Jul 11 05:22:48.858098 containerd[1574]: time="2025-07-11T05:22:48.858057546Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 11 05:22:48.858297 containerd[1574]: time="2025-07-11T05:22:48.858270966Z" level=info msg="containerd successfully booted in 0.395062s" Jul 11 05:22:48.858479 systemd[1]: Started containerd.service - containerd container runtime. Jul 11 05:22:49.437011 systemd-networkd[1491]: eth0: Gained IPv6LL Jul 11 05:22:49.441236 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 11 05:22:49.443718 systemd[1]: Reached target network-online.target - Network is Online. Jul 11 05:22:49.447660 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 11 05:22:49.451018 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 05:22:49.469639 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 11 05:22:49.512199 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 11 05:22:49.632450 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 11 05:22:49.632857 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 11 05:22:49.634954 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 11 05:22:50.896513 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 11 05:22:50.899198 systemd[1]: Started sshd@0-10.0.0.87:22-10.0.0.1:53580.service - OpenSSH per-connection server daemon (10.0.0.1:53580). Jul 11 05:22:51.010291 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 53580 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk Jul 11 05:22:51.014507 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:22:51.022853 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jul 11 05:22:51.039817 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 11 05:22:51.049711 systemd-logind[1554]: New session 1 of user core. Jul 11 05:22:51.066080 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 11 05:22:51.073174 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 11 05:22:51.096398 (systemd)[1673]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 05:22:51.099614 systemd-logind[1554]: New session c1 of user core. Jul 11 05:22:51.143879 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 05:22:51.145931 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 11 05:22:51.167665 (kubelet)[1684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 05:22:51.293792 systemd[1673]: Queued start job for default target default.target. Jul 11 05:22:51.314377 systemd[1673]: Created slice app.slice - User Application Slice. Jul 11 05:22:51.314407 systemd[1673]: Reached target paths.target - Paths. Jul 11 05:22:51.314454 systemd[1673]: Reached target timers.target - Timers. Jul 11 05:22:51.316313 systemd[1673]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 11 05:22:51.329331 systemd[1673]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 11 05:22:51.329485 systemd[1673]: Reached target sockets.target - Sockets. Jul 11 05:22:51.329534 systemd[1673]: Reached target basic.target - Basic System. Jul 11 05:22:51.329575 systemd[1673]: Reached target default.target - Main User Target. Jul 11 05:22:51.329611 systemd[1673]: Startup finished in 217ms. Jul 11 05:22:51.329933 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 11 05:22:51.333006 systemd[1]: Started session-1.scope - Session 1 of User core. 
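The `Referenced but unset environment variable` note from kubelet.service above means the unit file references `$KUBELET_EXTRA_ARGS` and `$KUBELET_KUBEADM_ARGS`, which no environment file has populated yet, so they expand to empty strings; this is informational, not an error. As a hedged sketch — the drop-in path and the `--node-ip` value are assumptions, not taken from this log's unit files — such a variable is conventionally set via a systemd drop-in like:

```ini
# Hypothetical drop-in: /etc/systemd/system/kubelet.service.d/10-extra-args.conf
# (path and flag value are illustrative assumptions)
[Service]
Environment="KUBELET_EXTRA_ARGS=--node-ip=10.0.0.87"
```

After adding a drop-in, `systemctl daemon-reload` followed by a kubelet restart would apply it.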
Jul 11 05:22:51.357123 systemd[1]: Startup finished in 3.289s (kernel) + 7.160s (initrd) + 6.637s (userspace) = 17.087s. Jul 11 05:22:51.463442 systemd[1]: Started sshd@1-10.0.0.87:22-10.0.0.1:53592.service - OpenSSH per-connection server daemon (10.0.0.1:53592). Jul 11 05:22:51.525023 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 53592 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk Jul 11 05:22:51.527874 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:22:51.534230 systemd-logind[1554]: New session 2 of user core. Jul 11 05:22:51.568055 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 11 05:22:51.626256 sshd[1702]: Connection closed by 10.0.0.1 port 53592 Jul 11 05:22:51.628471 sshd-session[1696]: pam_unix(sshd:session): session closed for user core Jul 11 05:22:51.641716 systemd[1]: sshd@1-10.0.0.87:22-10.0.0.1:53592.service: Deactivated successfully. Jul 11 05:22:51.643726 systemd[1]: session-2.scope: Deactivated successfully. Jul 11 05:22:51.644518 systemd-logind[1554]: Session 2 logged out. Waiting for processes to exit. Jul 11 05:22:51.647517 systemd[1]: Started sshd@2-10.0.0.87:22-10.0.0.1:53608.service - OpenSSH per-connection server daemon (10.0.0.1:53608). Jul 11 05:22:51.648229 systemd-logind[1554]: Removed session 2. Jul 11 05:22:51.696346 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 53608 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk Jul 11 05:22:51.697997 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:22:51.702565 systemd-logind[1554]: New session 3 of user core. Jul 11 05:22:51.710921 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jul 11 05:22:51.760714 sshd[1712]: Connection closed by 10.0.0.1 port 53608 Jul 11 05:22:51.761072 sshd-session[1708]: pam_unix(sshd:session): session closed for user core Jul 11 05:22:51.773505 systemd[1]: sshd@2-10.0.0.87:22-10.0.0.1:53608.service: Deactivated successfully. Jul 11 05:22:51.775491 systemd[1]: session-3.scope: Deactivated successfully. Jul 11 05:22:51.776221 systemd-logind[1554]: Session 3 logged out. Waiting for processes to exit. Jul 11 05:22:51.779544 systemd[1]: Started sshd@3-10.0.0.87:22-10.0.0.1:53616.service - OpenSSH per-connection server daemon (10.0.0.1:53616). Jul 11 05:22:51.780685 systemd-logind[1554]: Removed session 3. Jul 11 05:22:51.855871 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 53616 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk Jul 11 05:22:51.857547 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:22:51.863691 systemd-logind[1554]: New session 4 of user core. Jul 11 05:22:51.874934 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 11 05:22:51.882326 kubelet[1684]: E0711 05:22:51.882278 1684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 05:22:51.886448 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 05:22:51.886643 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 05:22:51.887061 systemd[1]: kubelet.service: Consumed 2.188s CPU time, 265M memory peak. 
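The kubelet crash above (`open /var/lib/kubelet/config.yaml: no such file or directory`) is the normal failure mode on a node where `kubeadm init` or `kubeadm join` has not yet run, since kubeadm is what writes that file; systemd will keep restarting the unit until it appears. A minimal sketch of detecting this condition, assuming only the path shown in the log (the remedy wording is an assumption about the usual kubeadm workflow):

```python
from pathlib import Path

def kubelet_config_state(path="/var/lib/kubelet/config.yaml"):
    """Report whether the kubelet config that kubeadm generates exists yet.

    The default path matches the one in the log above; the 'expected before
    kubeadm init/join' hint is an assumption about the standard setup flow.
    """
    if Path(path).exists():
        return "present"
    return "missing (expected before kubeadm init/join)"
```

On the node in this log, the function would report "missing" until kubeadm provisioning completes, matching the repeated `kubelet.service: Failed with result 'exit-code'` entries.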
Jul 11 05:22:51.930851 sshd[1721]: Connection closed by 10.0.0.1 port 53616 Jul 11 05:22:51.931260 sshd-session[1718]: pam_unix(sshd:session): session closed for user core Jul 11 05:22:51.943202 systemd[1]: sshd@3-10.0.0.87:22-10.0.0.1:53616.service: Deactivated successfully. Jul 11 05:22:51.944962 systemd[1]: session-4.scope: Deactivated successfully. Jul 11 05:22:51.945706 systemd-logind[1554]: Session 4 logged out. Waiting for processes to exit. Jul 11 05:22:51.948241 systemd[1]: Started sshd@4-10.0.0.87:22-10.0.0.1:53624.service - OpenSSH per-connection server daemon (10.0.0.1:53624). Jul 11 05:22:51.948824 systemd-logind[1554]: Removed session 4. Jul 11 05:22:52.002797 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 53624 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk Jul 11 05:22:52.004031 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:22:52.008182 systemd-logind[1554]: New session 5 of user core. Jul 11 05:22:52.021855 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 11 05:22:52.080032 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 11 05:22:52.080361 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 05:22:52.101489 sudo[1732]: pam_unix(sudo:session): session closed for user root Jul 11 05:22:52.103202 sshd[1731]: Connection closed by 10.0.0.1 port 53624 Jul 11 05:22:52.103679 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Jul 11 05:22:52.118310 systemd[1]: sshd@4-10.0.0.87:22-10.0.0.1:53624.service: Deactivated successfully. Jul 11 05:22:52.120075 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 05:22:52.120844 systemd-logind[1554]: Session 5 logged out. Waiting for processes to exit. Jul 11 05:22:52.123616 systemd[1]: Started sshd@5-10.0.0.87:22-10.0.0.1:53634.service - OpenSSH per-connection server daemon (10.0.0.1:53634). 
Jul 11 05:22:52.124157 systemd-logind[1554]: Removed session 5. Jul 11 05:22:52.176618 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 53634 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk Jul 11 05:22:52.178632 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:22:52.183300 systemd-logind[1554]: New session 6 of user core. Jul 11 05:22:52.194866 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 11 05:22:52.248429 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 11 05:22:52.248783 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 05:22:52.388824 sudo[1743]: pam_unix(sudo:session): session closed for user root Jul 11 05:22:52.396568 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 11 05:22:52.396926 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 05:22:52.409023 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 11 05:22:52.462332 augenrules[1765]: No rules Jul 11 05:22:52.464091 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 05:22:52.464386 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 11 05:22:52.465699 sudo[1742]: pam_unix(sudo:session): session closed for user root Jul 11 05:22:52.467308 sshd[1741]: Connection closed by 10.0.0.1 port 53634 Jul 11 05:22:52.467727 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Jul 11 05:22:52.477161 systemd[1]: sshd@5-10.0.0.87:22-10.0.0.1:53634.service: Deactivated successfully. Jul 11 05:22:52.479227 systemd[1]: session-6.scope: Deactivated successfully. Jul 11 05:22:52.480064 systemd-logind[1554]: Session 6 logged out. Waiting for processes to exit. 
Jul 11 05:22:52.483193 systemd[1]: Started sshd@6-10.0.0.87:22-10.0.0.1:53644.service - OpenSSH per-connection server daemon (10.0.0.1:53644). Jul 11 05:22:52.483824 systemd-logind[1554]: Removed session 6. Jul 11 05:22:52.540563 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 53644 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk Jul 11 05:22:52.542142 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:22:52.546997 systemd-logind[1554]: New session 7 of user core. Jul 11 05:22:52.556912 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 11 05:22:52.612540 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 05:22:52.612955 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 05:22:52.988508 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 11 05:22:53.007122 (dockerd)[1798]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 11 05:22:53.785534 dockerd[1798]: time="2025-07-11T05:22:53.785445707Z" level=info msg="Starting up" Jul 11 05:22:53.786279 dockerd[1798]: time="2025-07-11T05:22:53.786245116Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 11 05:22:53.802833 dockerd[1798]: time="2025-07-11T05:22:53.802753085Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 11 05:22:54.712022 dockerd[1798]: time="2025-07-11T05:22:54.711960313Z" level=info msg="Loading containers: start." Jul 11 05:22:54.726766 kernel: Initializing XFRM netlink socket Jul 11 05:22:55.042981 systemd-networkd[1491]: docker0: Link UP Jul 11 05:22:55.048476 dockerd[1798]: time="2025-07-11T05:22:55.048415456Z" level=info msg="Loading containers: done." 
Jul 11 05:22:55.071833 dockerd[1798]: time="2025-07-11T05:22:55.071761659Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 11 05:22:55.072035 dockerd[1798]: time="2025-07-11T05:22:55.071875983Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 11 05:22:55.072035 dockerd[1798]: time="2025-07-11T05:22:55.072001669Z" level=info msg="Initializing buildkit" Jul 11 05:22:55.104169 dockerd[1798]: time="2025-07-11T05:22:55.104127927Z" level=info msg="Completed buildkit initialization" Jul 11 05:22:55.110958 dockerd[1798]: time="2025-07-11T05:22:55.110919614Z" level=info msg="Daemon has completed initialization" Jul 11 05:22:55.111065 dockerd[1798]: time="2025-07-11T05:22:55.111015303Z" level=info msg="API listen on /run/docker.sock" Jul 11 05:22:55.111190 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 11 05:22:56.137107 containerd[1574]: time="2025-07-11T05:22:56.137054633Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 11 05:22:56.767639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1060890727.mount: Deactivated successfully. 
Jul 11 05:22:58.185158 containerd[1574]: time="2025-07-11T05:22:58.185092936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:22:58.185848 containerd[1574]: time="2025-07-11T05:22:58.185788681Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 11 05:22:58.187043 containerd[1574]: time="2025-07-11T05:22:58.186985886Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:22:58.189489 containerd[1574]: time="2025-07-11T05:22:58.189444367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:22:58.190394 containerd[1574]: time="2025-07-11T05:22:58.190361086Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.053253383s" Jul 11 05:22:58.190436 containerd[1574]: time="2025-07-11T05:22:58.190395581Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 11 05:22:58.191259 containerd[1574]: time="2025-07-11T05:22:58.191219766Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 11 05:22:59.573437 containerd[1574]: time="2025-07-11T05:22:59.573366565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:22:59.574085 containerd[1574]: time="2025-07-11T05:22:59.574021844Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 11 05:22:59.575257 containerd[1574]: time="2025-07-11T05:22:59.575208679Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:22:59.577547 containerd[1574]: time="2025-07-11T05:22:59.577497392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:22:59.578473 containerd[1574]: time="2025-07-11T05:22:59.578427506Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.387177133s" Jul 11 05:22:59.578544 containerd[1574]: time="2025-07-11T05:22:59.578478191Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 11 05:22:59.579074 containerd[1574]: time="2025-07-11T05:22:59.579053820Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 11 05:23:01.789759 containerd[1574]: time="2025-07-11T05:23:01.789671087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:01.790526 containerd[1574]: time="2025-07-11T05:23:01.790504941Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 11 05:23:01.791880 containerd[1574]: time="2025-07-11T05:23:01.791839954Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:01.794437 containerd[1574]: time="2025-07-11T05:23:01.794393964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:01.795404 containerd[1574]: time="2025-07-11T05:23:01.795368131Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 2.216289073s" Jul 11 05:23:01.795438 containerd[1574]: time="2025-07-11T05:23:01.795409268Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 11 05:23:01.795905 containerd[1574]: time="2025-07-11T05:23:01.795851658Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 11 05:23:01.972694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 11 05:23:01.974503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 05:23:02.227504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 11 05:23:02.231473 (kubelet)[2085]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 05:23:02.277941 kubelet[2085]: E0711 05:23:02.277862 2085 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 05:23:02.284495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 05:23:02.284717 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 05:23:02.285226 systemd[1]: kubelet.service: Consumed 275ms CPU time, 110.4M memory peak. Jul 11 05:23:03.214331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3193632818.mount: Deactivated successfully. Jul 11 05:23:03.620511 containerd[1574]: time="2025-07-11T05:23:03.620446094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:03.621178 containerd[1574]: time="2025-07-11T05:23:03.621135436Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 11 05:23:03.622324 containerd[1574]: time="2025-07-11T05:23:03.622279412Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:03.624143 containerd[1574]: time="2025-07-11T05:23:03.624083184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:03.624578 containerd[1574]: time="2025-07-11T05:23:03.624543798Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.828662265s" Jul 11 05:23:03.624578 containerd[1574]: time="2025-07-11T05:23:03.624575588Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 11 05:23:03.625121 containerd[1574]: time="2025-07-11T05:23:03.625094311Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 11 05:23:04.123819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1372858668.mount: Deactivated successfully. Jul 11 05:23:05.493013 containerd[1574]: time="2025-07-11T05:23:05.492945799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:05.493860 containerd[1574]: time="2025-07-11T05:23:05.493837351Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 11 05:23:05.495228 containerd[1574]: time="2025-07-11T05:23:05.495201219Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:05.498225 containerd[1574]: time="2025-07-11T05:23:05.498174835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:05.499407 containerd[1574]: time="2025-07-11T05:23:05.499346342Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.874220202s" Jul 11 05:23:05.499465 containerd[1574]: time="2025-07-11T05:23:05.499407988Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 11 05:23:05.500064 containerd[1574]: time="2025-07-11T05:23:05.500034803Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 11 05:23:06.025418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2975961129.mount: Deactivated successfully. Jul 11 05:23:06.031018 containerd[1574]: time="2025-07-11T05:23:06.030959026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 05:23:06.031840 containerd[1574]: time="2025-07-11T05:23:06.031813548Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 11 05:23:06.033243 containerd[1574]: time="2025-07-11T05:23:06.033199127Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 05:23:06.036189 containerd[1574]: time="2025-07-11T05:23:06.036158947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 05:23:06.037003 containerd[1574]: time="2025-07-11T05:23:06.036970880Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 536.90046ms" Jul 11 05:23:06.037046 containerd[1574]: time="2025-07-11T05:23:06.037002539Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 11 05:23:06.037523 containerd[1574]: time="2025-07-11T05:23:06.037484243Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 11 05:23:06.576264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2733809557.mount: Deactivated successfully. Jul 11 05:23:08.418310 containerd[1574]: time="2025-07-11T05:23:08.418242354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:08.419083 containerd[1574]: time="2025-07-11T05:23:08.419028699Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 11 05:23:08.420193 containerd[1574]: time="2025-07-11T05:23:08.420146265Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:08.422794 containerd[1574]: time="2025-07-11T05:23:08.422755668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:08.423978 containerd[1574]: time="2025-07-11T05:23:08.423913600Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag 
\"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.386400803s" Jul 11 05:23:08.423978 containerd[1574]: time="2025-07-11T05:23:08.423957292Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 11 05:23:11.079809 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 05:23:11.080023 systemd[1]: kubelet.service: Consumed 275ms CPU time, 110.4M memory peak. Jul 11 05:23:11.082495 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 05:23:11.107448 systemd[1]: Reload requested from client PID 2243 ('systemctl') (unit session-7.scope)... Jul 11 05:23:11.107465 systemd[1]: Reloading... Jul 11 05:23:11.199790 zram_generator::config[2286]: No configuration found. Jul 11 05:23:11.349116 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 05:23:11.468155 systemd[1]: Reloading finished in 360 ms. Jul 11 05:23:11.533728 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 11 05:23:11.533848 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 11 05:23:11.534228 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 05:23:11.534273 systemd[1]: kubelet.service: Consumed 169ms CPU time, 98.3M memory peak. Jul 11 05:23:11.535982 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 05:23:11.746158 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 11 05:23:11.764138 (kubelet)[2334]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 05:23:11.802548 kubelet[2334]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 05:23:11.802548 kubelet[2334]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 11 05:23:11.802548 kubelet[2334]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 05:23:11.803082 kubelet[2334]: I0711 05:23:11.802584 2334 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 05:23:12.202365 kubelet[2334]: I0711 05:23:12.202164 2334 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 11 05:23:12.202365 kubelet[2334]: I0711 05:23:12.202223 2334 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 05:23:12.202959 kubelet[2334]: I0711 05:23:12.202919 2334 server.go:954] "Client rotation is on, will bootstrap in background" Jul 11 05:23:12.225088 kubelet[2334]: E0711 05:23:12.225032 2334 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.87:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:23:12.226148 kubelet[2334]: I0711 05:23:12.226080 
2334 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 05:23:12.235727 kubelet[2334]: I0711 05:23:12.235685 2334 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 11 05:23:12.241206 kubelet[2334]: I0711 05:23:12.241157 2334 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 11 05:23:12.242746 kubelet[2334]: I0711 05:23:12.242671 2334 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 05:23:12.242993 kubelet[2334]: I0711 05:23:12.242722 2334 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CP
UManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 05:23:12.243117 kubelet[2334]: I0711 05:23:12.243000 2334 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 05:23:12.243117 kubelet[2334]: I0711 05:23:12.243014 2334 container_manager_linux.go:304] "Creating device plugin manager" Jul 11 05:23:12.243228 kubelet[2334]: I0711 05:23:12.243206 2334 state_mem.go:36] "Initialized new in-memory state store" Jul 11 05:23:12.246259 kubelet[2334]: I0711 05:23:12.246217 2334 kubelet.go:446] "Attempting to sync node with API server" Jul 11 05:23:12.246259 kubelet[2334]: I0711 05:23:12.246258 2334 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 05:23:12.246326 kubelet[2334]: I0711 05:23:12.246300 2334 kubelet.go:352] "Adding apiserver pod source" Jul 11 05:23:12.246326 kubelet[2334]: I0711 05:23:12.246320 2334 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 05:23:12.249276 kubelet[2334]: W0711 05:23:12.249115 2334 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Jul 11 05:23:12.249276 kubelet[2334]: E0711 05:23:12.249222 2334 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:23:12.249847 kubelet[2334]: W0711 05:23:12.249801 2334 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Jul 11 05:23:12.249899 kubelet[2334]: E0711 05:23:12.249867 2334 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:23:12.251385 kubelet[2334]: I0711 05:23:12.251333 2334 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 11 05:23:12.252242 kubelet[2334]: I0711 05:23:12.252220 2334 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 05:23:12.253209 kubelet[2334]: W0711 05:23:12.253169 2334 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 11 05:23:12.255483 kubelet[2334]: I0711 05:23:12.255449 2334 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 11 05:23:12.255532 kubelet[2334]: I0711 05:23:12.255514 2334 server.go:1287] "Started kubelet" Jul 11 05:23:12.256454 kubelet[2334]: I0711 05:23:12.255753 2334 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 05:23:12.257230 kubelet[2334]: I0711 05:23:12.256692 2334 server.go:479] "Adding debug handlers to kubelet server" Jul 11 05:23:12.259638 kubelet[2334]: I0711 05:23:12.258817 2334 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 05:23:12.259638 kubelet[2334]: I0711 05:23:12.259134 2334 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 05:23:12.259638 kubelet[2334]: I0711 05:23:12.259188 2334 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 05:23:12.259812 kubelet[2334]: E0711 05:23:12.259664 2334 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:23:12.259812 kubelet[2334]: I0711 05:23:12.259708 2334 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 11 05:23:12.259812 kubelet[2334]: I0711 05:23:12.259715 2334 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 05:23:12.260364 kubelet[2334]: I0711 05:23:12.260101 2334 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 11 05:23:12.260364 kubelet[2334]: I0711 05:23:12.260181 2334 reconciler.go:26] "Reconciler: start to sync state" Jul 11 05:23:12.261372 kubelet[2334]: W0711 05:23:12.261238 2334 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: 
connect: connection refused Jul 11 05:23:12.261372 kubelet[2334]: E0711 05:23:12.261316 2334 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:23:12.261672 kubelet[2334]: I0711 05:23:12.261587 2334 factory.go:221] Registration of the systemd container factory successfully Jul 11 05:23:12.261751 kubelet[2334]: I0711 05:23:12.261720 2334 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 05:23:12.263031 kubelet[2334]: E0711 05:23:12.262982 2334 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 05:23:12.263825 kubelet[2334]: I0711 05:23:12.263279 2334 factory.go:221] Registration of the containerd container factory successfully Jul 11 05:23:12.263825 kubelet[2334]: E0711 05:23:12.263311 2334 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="200ms" Jul 11 05:23:12.265339 kubelet[2334]: E0711 05:23:12.264199 2334 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.87:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.87:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18511afa46c7c589 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 05:23:12.255477129 +0000 UTC m=+0.486936931,LastTimestamp:2025-07-11 05:23:12.255477129 +0000 UTC m=+0.486936931,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 05:23:12.277453 kubelet[2334]: I0711 05:23:12.277415 2334 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 11 05:23:12.277453 kubelet[2334]: I0711 05:23:12.277443 2334 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 11 05:23:12.277453 kubelet[2334]: I0711 05:23:12.277463 2334 state_mem.go:36] "Initialized new in-memory state store" Jul 11 05:23:12.277826 kubelet[2334]: I0711 05:23:12.277701 2334 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 05:23:12.279601 kubelet[2334]: I0711 05:23:12.279556 2334 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 11 05:23:12.279601 kubelet[2334]: I0711 05:23:12.279601 2334 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 11 05:23:12.279928 kubelet[2334]: I0711 05:23:12.279625 2334 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 11 05:23:12.279928 kubelet[2334]: I0711 05:23:12.279636 2334 kubelet.go:2382] "Starting kubelet main sync loop" Jul 11 05:23:12.279928 kubelet[2334]: E0711 05:23:12.279681 2334 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 05:23:12.360682 kubelet[2334]: E0711 05:23:12.360599 2334 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:23:12.380136 kubelet[2334]: E0711 05:23:12.380074 2334 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 05:23:12.461699 kubelet[2334]: E0711 05:23:12.461514 2334 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:23:12.464278 kubelet[2334]: E0711 05:23:12.464238 2334 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="400ms" Jul 11 05:23:12.562559 kubelet[2334]: E0711 05:23:12.562429 2334 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:23:12.580708 kubelet[2334]: E0711 05:23:12.580626 2334 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 05:23:12.663217 kubelet[2334]: E0711 05:23:12.663138 2334 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:23:12.764190 kubelet[2334]: E0711 05:23:12.764111 2334 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:23:12.791346 kubelet[2334]: I0711 05:23:12.791278 2334 policy_none.go:49] "None policy: Start" Jul 11 
05:23:12.791346 kubelet[2334]: I0711 05:23:12.791327 2334 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 11 05:23:12.791346 kubelet[2334]: I0711 05:23:12.791345 2334 state_mem.go:35] "Initializing new in-memory state store" Jul 11 05:23:12.791708 kubelet[2334]: W0711 05:23:12.791242 2334 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Jul 11 05:23:12.791786 kubelet[2334]: E0711 05:23:12.791761 2334 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:23:12.801658 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 11 05:23:12.817035 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 11 05:23:12.820332 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 11 05:23:12.836615 kubelet[2334]: I0711 05:23:12.836577 2334 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 05:23:12.836953 kubelet[2334]: I0711 05:23:12.836828 2334 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 05:23:12.836953 kubelet[2334]: I0711 05:23:12.836848 2334 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 05:23:12.837222 kubelet[2334]: I0711 05:23:12.837113 2334 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 05:23:12.837873 kubelet[2334]: E0711 05:23:12.837837 2334 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 11 05:23:12.837923 kubelet[2334]: E0711 05:23:12.837896 2334 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 11 05:23:12.864886 kubelet[2334]: E0711 05:23:12.864837 2334 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="800ms" Jul 11 05:23:12.939160 kubelet[2334]: I0711 05:23:12.939118 2334 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 05:23:12.939634 kubelet[2334]: E0711 05:23:12.939574 2334 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.87:6443/api/v1/nodes\": dial tcp 10.0.0.87:6443: connect: connection refused" node="localhost" Jul 11 05:23:12.990681 systemd[1]: Created slice kubepods-burstable-pod0d757856eb6e61046f746859d0a9fdff.slice - libcontainer container kubepods-burstable-pod0d757856eb6e61046f746859d0a9fdff.slice. 
Jul 11 05:23:13.000727 kubelet[2334]: E0711 05:23:13.000689 2334 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 05:23:13.003917 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 11 05:23:13.005550 kubelet[2334]: E0711 05:23:13.005517 2334 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 05:23:13.007232 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. Jul 11 05:23:13.008831 kubelet[2334]: E0711 05:23:13.008813 2334 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 05:23:13.065626 kubelet[2334]: I0711 05:23:13.065460 2334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:23:13.065626 kubelet[2334]: I0711 05:23:13.065511 2334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:23:13.065626 kubelet[2334]: I0711 05:23:13.065537 2334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:23:13.065626 kubelet[2334]: I0711 05:23:13.065557 2334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d757856eb6e61046f746859d0a9fdff-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0d757856eb6e61046f746859d0a9fdff\") " pod="kube-system/kube-apiserver-localhost" Jul 11 05:23:13.065626 kubelet[2334]: I0711 05:23:13.065571 2334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:23:13.065959 kubelet[2334]: I0711 05:23:13.065585 2334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:23:13.065959 kubelet[2334]: I0711 05:23:13.065600 2334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 11 05:23:13.065959 kubelet[2334]: I0711 05:23:13.065619 2334 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d757856eb6e61046f746859d0a9fdff-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d757856eb6e61046f746859d0a9fdff\") " pod="kube-system/kube-apiserver-localhost" Jul 11 05:23:13.065959 kubelet[2334]: I0711 05:23:13.065657 2334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d757856eb6e61046f746859d0a9fdff-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d757856eb6e61046f746859d0a9fdff\") " pod="kube-system/kube-apiserver-localhost" Jul 11 05:23:13.141987 kubelet[2334]: I0711 05:23:13.141941 2334 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 05:23:13.142411 kubelet[2334]: E0711 05:23:13.142361 2334 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.87:6443/api/v1/nodes\": dial tcp 10.0.0.87:6443: connect: connection refused" node="localhost" Jul 11 05:23:13.301974 kubelet[2334]: E0711 05:23:13.301935 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:13.302721 containerd[1574]: time="2025-07-11T05:23:13.302501333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0d757856eb6e61046f746859d0a9fdff,Namespace:kube-system,Attempt:0,}" Jul 11 05:23:13.306813 kubelet[2334]: E0711 05:23:13.306783 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:13.307218 containerd[1574]: time="2025-07-11T05:23:13.307164148Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 11 05:23:13.309394 kubelet[2334]: E0711 05:23:13.309360 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:13.309693 containerd[1574]: time="2025-07-11T05:23:13.309663194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 11 05:23:13.391126 kubelet[2334]: W0711 05:23:13.390980 2334 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Jul 11 05:23:13.391126 kubelet[2334]: E0711 05:23:13.391056 2334 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:23:13.544172 kubelet[2334]: I0711 05:23:13.544134 2334 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 05:23:13.544543 kubelet[2334]: E0711 05:23:13.544488 2334 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.87:6443/api/v1/nodes\": dial tcp 10.0.0.87:6443: connect: connection refused" node="localhost" Jul 11 05:23:13.621539 kubelet[2334]: W0711 05:23:13.621486 2334 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection 
refused Jul 11 05:23:13.621595 kubelet[2334]: E0711 05:23:13.621548 2334 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:23:13.665707 kubelet[2334]: E0711 05:23:13.665582 2334 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="1.6s" Jul 11 05:23:13.740880 containerd[1574]: time="2025-07-11T05:23:13.740772498Z" level=info msg="connecting to shim f7472f7ce415a6b2cedc62fd66ea5655d2a8238e0bfe259835b15333100cb246" address="unix:///run/containerd/s/28bc39f2643927ba7b0d3e9213fef5d798a04fb4ba8bcecdc4729ea7abbe1c94" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:23:13.770355 containerd[1574]: time="2025-07-11T05:23:13.770294192Z" level=info msg="connecting to shim d539f8420c3174e2ab2e8c860fd0075bae5923d59b40574962faae49a5a5fa3d" address="unix:///run/containerd/s/851ad4e27b80352efeb99a4b892cdb4e52efb132c227bac6bbb8a03336500c3a" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:23:13.772205 containerd[1574]: time="2025-07-11T05:23:13.772157226Z" level=info msg="connecting to shim 63e32bec7717082c430cebcdc741ae2b5b1e890d8ca2d6619a365f1b7ebd4ce8" address="unix:///run/containerd/s/4ea944162604fca084df4fe2ec06c387f36c691599f24bf6788e369fae6c3727" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:23:13.772974 systemd[1]: Started cri-containerd-f7472f7ce415a6b2cedc62fd66ea5655d2a8238e0bfe259835b15333100cb246.scope - libcontainer container f7472f7ce415a6b2cedc62fd66ea5655d2a8238e0bfe259835b15333100cb246. 
Jul 11 05:23:13.795196 kubelet[2334]: W0711 05:23:13.795121 2334 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Jul 11 05:23:13.795196 kubelet[2334]: E0711 05:23:13.795204 2334 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:23:13.809917 systemd[1]: Started cri-containerd-63e32bec7717082c430cebcdc741ae2b5b1e890d8ca2d6619a365f1b7ebd4ce8.scope - libcontainer container 63e32bec7717082c430cebcdc741ae2b5b1e890d8ca2d6619a365f1b7ebd4ce8. Jul 11 05:23:13.811945 systemd[1]: Started cri-containerd-d539f8420c3174e2ab2e8c860fd0075bae5923d59b40574962faae49a5a5fa3d.scope - libcontainer container d539f8420c3174e2ab2e8c860fd0075bae5923d59b40574962faae49a5a5fa3d. 
Jul 11 05:23:13.835990 containerd[1574]: time="2025-07-11T05:23:13.835922699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0d757856eb6e61046f746859d0a9fdff,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7472f7ce415a6b2cedc62fd66ea5655d2a8238e0bfe259835b15333100cb246\"" Jul 11 05:23:13.837767 kubelet[2334]: E0711 05:23:13.837679 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:13.841168 containerd[1574]: time="2025-07-11T05:23:13.841131697Z" level=info msg="CreateContainer within sandbox \"f7472f7ce415a6b2cedc62fd66ea5655d2a8238e0bfe259835b15333100cb246\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 11 05:23:13.850671 containerd[1574]: time="2025-07-11T05:23:13.850627004Z" level=info msg="Container 9d96ac452102f3b6b5d56c652cbcc9ff8637774d832d1330a41073844e012c0f: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:23:13.861257 containerd[1574]: time="2025-07-11T05:23:13.861201766Z" level=info msg="CreateContainer within sandbox \"f7472f7ce415a6b2cedc62fd66ea5655d2a8238e0bfe259835b15333100cb246\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9d96ac452102f3b6b5d56c652cbcc9ff8637774d832d1330a41073844e012c0f\"" Jul 11 05:23:13.863343 containerd[1574]: time="2025-07-11T05:23:13.863252351Z" level=info msg="StartContainer for \"9d96ac452102f3b6b5d56c652cbcc9ff8637774d832d1330a41073844e012c0f\"" Jul 11 05:23:13.864935 containerd[1574]: time="2025-07-11T05:23:13.864897136Z" level=info msg="connecting to shim 9d96ac452102f3b6b5d56c652cbcc9ff8637774d832d1330a41073844e012c0f" address="unix:///run/containerd/s/28bc39f2643927ba7b0d3e9213fef5d798a04fb4ba8bcecdc4729ea7abbe1c94" protocol=ttrpc version=3 Jul 11 05:23:13.873514 containerd[1574]: time="2025-07-11T05:23:13.873413137Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"d539f8420c3174e2ab2e8c860fd0075bae5923d59b40574962faae49a5a5fa3d\"" Jul 11 05:23:13.874423 kubelet[2334]: E0711 05:23:13.874392 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:13.876727 containerd[1574]: time="2025-07-11T05:23:13.876693739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"63e32bec7717082c430cebcdc741ae2b5b1e890d8ca2d6619a365f1b7ebd4ce8\"" Jul 11 05:23:13.877688 kubelet[2334]: E0711 05:23:13.877657 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:13.878697 containerd[1574]: time="2025-07-11T05:23:13.878491821Z" level=info msg="CreateContainer within sandbox \"d539f8420c3174e2ab2e8c860fd0075bae5923d59b40574962faae49a5a5fa3d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 11 05:23:13.880724 containerd[1574]: time="2025-07-11T05:23:13.880669044Z" level=info msg="CreateContainer within sandbox \"63e32bec7717082c430cebcdc741ae2b5b1e890d8ca2d6619a365f1b7ebd4ce8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 11 05:23:13.892349 containerd[1574]: time="2025-07-11T05:23:13.892307059Z" level=info msg="Container fba0b4bf84a0f1da139df4879d55d4b67a2967c5afa045f848b27ed79ea79415: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:23:13.895602 containerd[1574]: time="2025-07-11T05:23:13.894992696Z" level=info msg="Container 14341622c5b46d0728f002b90ea82b0e6521422abde8e6af7502c990bc6a5446: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:23:13.899894 
systemd[1]: Started cri-containerd-9d96ac452102f3b6b5d56c652cbcc9ff8637774d832d1330a41073844e012c0f.scope - libcontainer container 9d96ac452102f3b6b5d56c652cbcc9ff8637774d832d1330a41073844e012c0f. Jul 11 05:23:13.905518 containerd[1574]: time="2025-07-11T05:23:13.905428497Z" level=info msg="CreateContainer within sandbox \"63e32bec7717082c430cebcdc741ae2b5b1e890d8ca2d6619a365f1b7ebd4ce8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"14341622c5b46d0728f002b90ea82b0e6521422abde8e6af7502c990bc6a5446\"" Jul 11 05:23:13.906553 containerd[1574]: time="2025-07-11T05:23:13.906521076Z" level=info msg="StartContainer for \"14341622c5b46d0728f002b90ea82b0e6521422abde8e6af7502c990bc6a5446\"" Jul 11 05:23:13.907984 containerd[1574]: time="2025-07-11T05:23:13.907883330Z" level=info msg="CreateContainer within sandbox \"d539f8420c3174e2ab2e8c860fd0075bae5923d59b40574962faae49a5a5fa3d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fba0b4bf84a0f1da139df4879d55d4b67a2967c5afa045f848b27ed79ea79415\"" Jul 11 05:23:13.908103 containerd[1574]: time="2025-07-11T05:23:13.908051566Z" level=info msg="connecting to shim 14341622c5b46d0728f002b90ea82b0e6521422abde8e6af7502c990bc6a5446" address="unix:///run/containerd/s/4ea944162604fca084df4fe2ec06c387f36c691599f24bf6788e369fae6c3727" protocol=ttrpc version=3 Jul 11 05:23:13.908548 containerd[1574]: time="2025-07-11T05:23:13.908499065Z" level=info msg="StartContainer for \"fba0b4bf84a0f1da139df4879d55d4b67a2967c5afa045f848b27ed79ea79415\"" Jul 11 05:23:13.910005 containerd[1574]: time="2025-07-11T05:23:13.909922004Z" level=info msg="connecting to shim fba0b4bf84a0f1da139df4879d55d4b67a2967c5afa045f848b27ed79ea79415" address="unix:///run/containerd/s/851ad4e27b80352efeb99a4b892cdb4e52efb132c227bac6bbb8a03336500c3a" protocol=ttrpc version=3 Jul 11 05:23:13.935021 systemd[1]: Started cri-containerd-14341622c5b46d0728f002b90ea82b0e6521422abde8e6af7502c990bc6a5446.scope - 
libcontainer container 14341622c5b46d0728f002b90ea82b0e6521422abde8e6af7502c990bc6a5446. Jul 11 05:23:13.940905 systemd[1]: Started cri-containerd-fba0b4bf84a0f1da139df4879d55d4b67a2967c5afa045f848b27ed79ea79415.scope - libcontainer container fba0b4bf84a0f1da139df4879d55d4b67a2967c5afa045f848b27ed79ea79415. Jul 11 05:23:13.967187 containerd[1574]: time="2025-07-11T05:23:13.967124729Z" level=info msg="StartContainer for \"9d96ac452102f3b6b5d56c652cbcc9ff8637774d832d1330a41073844e012c0f\" returns successfully" Jul 11 05:23:14.012179 containerd[1574]: time="2025-07-11T05:23:14.012103370Z" level=info msg="StartContainer for \"14341622c5b46d0728f002b90ea82b0e6521422abde8e6af7502c990bc6a5446\" returns successfully" Jul 11 05:23:14.021142 containerd[1574]: time="2025-07-11T05:23:14.021079904Z" level=info msg="StartContainer for \"fba0b4bf84a0f1da139df4879d55d4b67a2967c5afa045f848b27ed79ea79415\" returns successfully" Jul 11 05:23:14.294996 kubelet[2334]: E0711 05:23:14.294939 2334 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 05:23:14.295186 kubelet[2334]: E0711 05:23:14.295098 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:14.297565 kubelet[2334]: E0711 05:23:14.297462 2334 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 05:23:14.297855 kubelet[2334]: E0711 05:23:14.297765 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:14.300367 kubelet[2334]: E0711 05:23:14.300332 2334 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"localhost\" not found" node="localhost" Jul 11 05:23:14.300548 kubelet[2334]: E0711 05:23:14.300523 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:14.346962 kubelet[2334]: I0711 05:23:14.346297 2334 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 05:23:15.301758 kubelet[2334]: E0711 05:23:15.301716 2334 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 05:23:15.302220 kubelet[2334]: E0711 05:23:15.301881 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:15.302874 kubelet[2334]: E0711 05:23:15.302852 2334 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 05:23:15.303019 kubelet[2334]: E0711 05:23:15.302996 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:15.313203 kubelet[2334]: I0711 05:23:15.313144 2334 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 11 05:23:15.313591 kubelet[2334]: E0711 05:23:15.313290 2334 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 11 05:23:15.331760 kubelet[2334]: E0711 05:23:15.331280 2334 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:23:15.431660 kubelet[2334]: E0711 05:23:15.431593 2334 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"localhost\" not found" Jul 11 05:23:15.532553 kubelet[2334]: E0711 05:23:15.532485 2334 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:23:15.633545 kubelet[2334]: E0711 05:23:15.633387 2334 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:23:15.734516 kubelet[2334]: E0711 05:23:15.734440 2334 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:23:15.835625 kubelet[2334]: E0711 05:23:15.835546 2334 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:23:15.936702 kubelet[2334]: E0711 05:23:15.936546 2334 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:23:16.037340 kubelet[2334]: E0711 05:23:16.037275 2334 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:23:16.138093 kubelet[2334]: E0711 05:23:16.138048 2334 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:23:16.238952 kubelet[2334]: E0711 05:23:16.238898 2334 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:23:16.339346 kubelet[2334]: E0711 05:23:16.339292 2334 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:23:16.440057 kubelet[2334]: E0711 05:23:16.440000 2334 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:23:16.540417 kubelet[2334]: E0711 05:23:16.540249 2334 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:23:16.641147 kubelet[2334]: E0711 05:23:16.641079 2334 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:23:16.763140 kubelet[2334]: I0711 05:23:16.763081 2334 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 05:23:16.771562 kubelet[2334]: I0711 05:23:16.771533 2334 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 11 05:23:16.775385 kubelet[2334]: I0711 05:23:16.775339 2334 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 05:23:17.191109 systemd[1]: Reload requested from client PID 2613 ('systemctl') (unit session-7.scope)... Jul 11 05:23:17.191126 systemd[1]: Reloading... Jul 11 05:23:17.251419 kubelet[2334]: I0711 05:23:17.251367 2334 apiserver.go:52] "Watching apiserver" Jul 11 05:23:17.256582 kubelet[2334]: E0711 05:23:17.256483 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:17.257016 kubelet[2334]: E0711 05:23:17.256988 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:17.257589 kubelet[2334]: E0711 05:23:17.257565 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:17.260892 kubelet[2334]: I0711 05:23:17.260813 2334 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 11 05:23:17.277812 zram_generator::config[2656]: No configuration found. 
Jul 11 05:23:17.371049 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 05:23:17.504944 systemd[1]: Reloading finished in 313 ms. Jul 11 05:23:17.535527 kubelet[2334]: I0711 05:23:17.535477 2334 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 05:23:17.535557 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 05:23:17.560087 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 05:23:17.560440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 05:23:17.560493 systemd[1]: kubelet.service: Consumed 988ms CPU time, 132M memory peak. Jul 11 05:23:17.562310 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 05:23:17.775943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 05:23:17.780167 (kubelet)[2701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 05:23:18.034148 kubelet[2701]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 05:23:18.034148 kubelet[2701]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 11 05:23:18.034148 kubelet[2701]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 11 05:23:18.034148 kubelet[2701]: I0711 05:23:18.034101 2701 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 05:23:18.042529 kubelet[2701]: I0711 05:23:18.042486 2701 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 11 05:23:18.042529 kubelet[2701]: I0711 05:23:18.042516 2701 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 05:23:18.042807 kubelet[2701]: I0711 05:23:18.042790 2701 server.go:954] "Client rotation is on, will bootstrap in background" Jul 11 05:23:18.043970 kubelet[2701]: I0711 05:23:18.043946 2701 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 11 05:23:18.046876 kubelet[2701]: I0711 05:23:18.046668 2701 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 05:23:18.050918 kubelet[2701]: I0711 05:23:18.050890 2701 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 11 05:23:18.056757 kubelet[2701]: I0711 05:23:18.056211 2701 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 05:23:18.056757 kubelet[2701]: I0711 05:23:18.056427 2701 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 05:23:18.056757 kubelet[2701]: I0711 05:23:18.056463 2701 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 05:23:18.056757 kubelet[2701]: I0711 05:23:18.056696 2701 topology_manager.go:138] "Creating topology manager with none policy" 
Jul 11 05:23:18.057079 kubelet[2701]: I0711 05:23:18.056704 2701 container_manager_linux.go:304] "Creating device plugin manager" Jul 11 05:23:18.057079 kubelet[2701]: I0711 05:23:18.056797 2701 state_mem.go:36] "Initialized new in-memory state store" Jul 11 05:23:18.057079 kubelet[2701]: I0711 05:23:18.056959 2701 kubelet.go:446] "Attempting to sync node with API server" Jul 11 05:23:18.057079 kubelet[2701]: I0711 05:23:18.056985 2701 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 05:23:18.057079 kubelet[2701]: I0711 05:23:18.057016 2701 kubelet.go:352] "Adding apiserver pod source" Jul 11 05:23:18.057079 kubelet[2701]: I0711 05:23:18.057029 2701 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 05:23:18.058548 kubelet[2701]: I0711 05:23:18.058517 2701 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 11 05:23:18.059579 kubelet[2701]: I0711 05:23:18.059539 2701 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 05:23:18.060125 kubelet[2701]: I0711 05:23:18.060098 2701 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 11 05:23:18.060177 kubelet[2701]: I0711 05:23:18.060140 2701 server.go:1287] "Started kubelet" Jul 11 05:23:18.061017 kubelet[2701]: I0711 05:23:18.060964 2701 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 05:23:18.061097 kubelet[2701]: I0711 05:23:18.061004 2701 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 05:23:18.062045 kubelet[2701]: I0711 05:23:18.062024 2701 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 05:23:18.063752 kubelet[2701]: I0711 05:23:18.063707 2701 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 05:23:18.069306 kubelet[2701]: I0711 05:23:18.068997 2701 
server.go:479] "Adding debug handlers to kubelet server" Jul 11 05:23:18.070471 kubelet[2701]: E0711 05:23:18.070448 2701 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 05:23:18.071182 kubelet[2701]: I0711 05:23:18.071160 2701 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 05:23:18.072647 kubelet[2701]: I0711 05:23:18.072618 2701 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 11 05:23:18.074232 kubelet[2701]: I0711 05:23:18.073857 2701 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 11 05:23:18.074358 kubelet[2701]: I0711 05:23:18.074340 2701 reconciler.go:26] "Reconciler: start to sync state" Jul 11 05:23:18.075675 kubelet[2701]: I0711 05:23:18.075370 2701 factory.go:221] Registration of the systemd container factory successfully Jul 11 05:23:18.075675 kubelet[2701]: I0711 05:23:18.075494 2701 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 05:23:18.076575 kubelet[2701]: I0711 05:23:18.076561 2701 factory.go:221] Registration of the containerd container factory successfully Jul 11 05:23:18.078361 kubelet[2701]: I0711 05:23:18.078286 2701 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 05:23:18.079610 kubelet[2701]: I0711 05:23:18.079578 2701 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 11 05:23:18.079610 kubelet[2701]: I0711 05:23:18.079610 2701 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 11 05:23:18.079802 kubelet[2701]: I0711 05:23:18.079635 2701 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 11 05:23:18.079802 kubelet[2701]: I0711 05:23:18.079646 2701 kubelet.go:2382] "Starting kubelet main sync loop" Jul 11 05:23:18.079802 kubelet[2701]: E0711 05:23:18.079704 2701 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 05:23:18.110570 kubelet[2701]: I0711 05:23:18.110534 2701 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 11 05:23:18.110808 kubelet[2701]: I0711 05:23:18.110769 2701 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 11 05:23:18.110808 kubelet[2701]: I0711 05:23:18.110797 2701 state_mem.go:36] "Initialized new in-memory state store" Jul 11 05:23:18.111015 kubelet[2701]: I0711 05:23:18.110982 2701 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 11 05:23:18.111015 kubelet[2701]: I0711 05:23:18.110999 2701 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 11 05:23:18.111092 kubelet[2701]: I0711 05:23:18.111021 2701 policy_none.go:49] "None policy: Start" Jul 11 05:23:18.111092 kubelet[2701]: I0711 05:23:18.111032 2701 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 11 05:23:18.111092 kubelet[2701]: I0711 05:23:18.111047 2701 state_mem.go:35] "Initializing new in-memory state store" Jul 11 05:23:18.111203 kubelet[2701]: I0711 05:23:18.111181 2701 state_mem.go:75] "Updated machine memory state" Jul 11 05:23:18.115954 kubelet[2701]: I0711 05:23:18.115920 2701 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 05:23:18.116188 kubelet[2701]: I0711 
05:23:18.116158 2701 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 05:23:18.116243 kubelet[2701]: I0711 05:23:18.116183 2701 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 05:23:18.117427 kubelet[2701]: I0711 05:23:18.117344 2701 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 05:23:18.118106 kubelet[2701]: E0711 05:23:18.118061 2701 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 11 05:23:18.181171 kubelet[2701]: I0711 05:23:18.180927 2701 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 11 05:23:18.181171 kubelet[2701]: I0711 05:23:18.180969 2701 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 05:23:18.181171 kubelet[2701]: I0711 05:23:18.181008 2701 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 05:23:18.186281 kubelet[2701]: E0711 05:23:18.186234 2701 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 11 05:23:18.186457 kubelet[2701]: E0711 05:23:18.186235 2701 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 05:23:18.186891 kubelet[2701]: E0711 05:23:18.186852 2701 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 11 05:23:18.192359 sudo[2737]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 11 05:23:18.192699 sudo[2737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) 
Jul 11 05:23:18.224333 kubelet[2701]: I0711 05:23:18.224294 2701 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 05:23:18.232108 kubelet[2701]: I0711 05:23:18.232073 2701 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 11 05:23:18.232198 kubelet[2701]: I0711 05:23:18.232159 2701 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 11 05:23:18.276296 kubelet[2701]: I0711 05:23:18.276247 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:23:18.276296 kubelet[2701]: I0711 05:23:18.276288 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:23:18.276520 kubelet[2701]: I0711 05:23:18.276315 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:23:18.276520 kubelet[2701]: I0711 05:23:18.276335 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d757856eb6e61046f746859d0a9fdff-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d757856eb6e61046f746859d0a9fdff\") " 
pod="kube-system/kube-apiserver-localhost" Jul 11 05:23:18.276520 kubelet[2701]: I0711 05:23:18.276351 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d757856eb6e61046f746859d0a9fdff-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d757856eb6e61046f746859d0a9fdff\") " pod="kube-system/kube-apiserver-localhost" Jul 11 05:23:18.276520 kubelet[2701]: I0711 05:23:18.276367 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d757856eb6e61046f746859d0a9fdff-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0d757856eb6e61046f746859d0a9fdff\") " pod="kube-system/kube-apiserver-localhost" Jul 11 05:23:18.276520 kubelet[2701]: I0711 05:23:18.276383 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:23:18.276684 kubelet[2701]: I0711 05:23:18.276415 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:23:18.276684 kubelet[2701]: I0711 05:23:18.276434 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " 
pod="kube-system/kube-scheduler-localhost" Jul 11 05:23:18.488523 kubelet[2701]: E0711 05:23:18.487625 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:18.488523 kubelet[2701]: E0711 05:23:18.487721 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:18.488523 kubelet[2701]: E0711 05:23:18.487877 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:18.497489 sudo[2737]: pam_unix(sudo:session): session closed for user root Jul 11 05:23:19.058568 kubelet[2701]: I0711 05:23:19.058509 2701 apiserver.go:52] "Watching apiserver" Jul 11 05:23:19.074891 kubelet[2701]: I0711 05:23:19.074841 2701 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 11 05:23:19.097059 kubelet[2701]: E0711 05:23:19.096996 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:19.097059 kubelet[2701]: E0711 05:23:19.097062 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:19.097877 kubelet[2701]: E0711 05:23:19.097205 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:19.166767 kubelet[2701]: I0711 05:23:19.166647 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" 
podStartSLOduration=3.166627513 podStartE2EDuration="3.166627513s" podCreationTimestamp="2025-07-11 05:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:23:19.165165371 +0000 UTC m=+1.381102311" watchObservedRunningTime="2025-07-11 05:23:19.166627513 +0000 UTC m=+1.382564433" Jul 11 05:23:19.179893 kubelet[2701]: I0711 05:23:19.179825 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.179803493 podStartE2EDuration="3.179803493s" podCreationTimestamp="2025-07-11 05:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:23:19.171870215 +0000 UTC m=+1.387807125" watchObservedRunningTime="2025-07-11 05:23:19.179803493 +0000 UTC m=+1.395740413" Jul 11 05:23:19.187279 kubelet[2701]: I0711 05:23:19.187204 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.187183172 podStartE2EDuration="3.187183172s" podCreationTimestamp="2025-07-11 05:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:23:19.180019908 +0000 UTC m=+1.395956829" watchObservedRunningTime="2025-07-11 05:23:19.187183172 +0000 UTC m=+1.403120092" Jul 11 05:23:19.803619 sudo[1778]: pam_unix(sudo:session): session closed for user root Jul 11 05:23:19.805364 sshd[1777]: Connection closed by 10.0.0.1 port 53644 Jul 11 05:23:19.805955 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Jul 11 05:23:19.810990 systemd[1]: sshd@6-10.0.0.87:22-10.0.0.1:53644.service: Deactivated successfully. Jul 11 05:23:19.813759 systemd[1]: session-7.scope: Deactivated successfully. 
Jul 11 05:23:19.814025 systemd[1]: session-7.scope: Consumed 4.873s CPU time, 264.2M memory peak. Jul 11 05:23:19.815641 systemd-logind[1554]: Session 7 logged out. Waiting for processes to exit. Jul 11 05:23:19.817804 systemd-logind[1554]: Removed session 7. Jul 11 05:23:20.098234 kubelet[2701]: E0711 05:23:20.098073 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:20.098722 kubelet[2701]: E0711 05:23:20.098417 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:21.099029 kubelet[2701]: E0711 05:23:21.098981 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:22.964311 kubelet[2701]: I0711 05:23:22.964254 2701 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 11 05:23:22.965127 kubelet[2701]: I0711 05:23:22.965045 2701 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 11 05:23:22.965171 containerd[1574]: time="2025-07-11T05:23:22.964821805Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 11 05:23:23.453238 kubelet[2701]: E0711 05:23:23.453156 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:23.652346 systemd[1]: Created slice kubepods-besteffort-pod97133135_b529_481a_b9c6_3bb0d504fcf9.slice - libcontainer container kubepods-besteffort-pod97133135_b529_481a_b9c6_3bb0d504fcf9.slice. 
Jul 11 05:23:23.664178 systemd[1]: Created slice kubepods-burstable-pod18f616e0_e8f7_4e47_b3fd_f2fd14382f5a.slice - libcontainer container kubepods-burstable-pod18f616e0_e8f7_4e47_b3fd_f2fd14382f5a.slice. Jul 11 05:23:23.692803 kubelet[2701]: E0711 05:23:23.692751 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:23.708444 kubelet[2701]: I0711 05:23:23.708331 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-clustermesh-secrets\") pod \"cilium-qwx76\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") " pod="kube-system/cilium-qwx76" Jul 11 05:23:23.708444 kubelet[2701]: I0711 05:23:23.708370 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97133135-b529-481a-b9c6-3bb0d504fcf9-lib-modules\") pod \"kube-proxy-4wgn7\" (UID: \"97133135-b529-481a-b9c6-3bb0d504fcf9\") " pod="kube-system/kube-proxy-4wgn7" Jul 11 05:23:23.708444 kubelet[2701]: I0711 05:23:23.708396 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-cilium-run\") pod \"cilium-qwx76\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") " pod="kube-system/cilium-qwx76" Jul 11 05:23:23.708444 kubelet[2701]: I0711 05:23:23.708414 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-etc-cni-netd\") pod \"cilium-qwx76\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") " pod="kube-system/cilium-qwx76" Jul 11 05:23:23.708444 kubelet[2701]: I0711 05:23:23.708436 
2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-bpf-maps\") pod \"cilium-qwx76\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") " pod="kube-system/cilium-qwx76" Jul 11 05:23:23.708597 kubelet[2701]: I0711 05:23:23.708464 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-hostproc\") pod \"cilium-qwx76\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") " pod="kube-system/cilium-qwx76" Jul 11 05:23:23.708597 kubelet[2701]: I0711 05:23:23.708504 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-cilium-cgroup\") pod \"cilium-qwx76\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") " pod="kube-system/cilium-qwx76" Jul 11 05:23:23.708597 kubelet[2701]: I0711 05:23:23.708520 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-cilium-config-path\") pod \"cilium-qwx76\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") " pod="kube-system/cilium-qwx76" Jul 11 05:23:23.708597 kubelet[2701]: I0711 05:23:23.708540 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-lib-modules\") pod \"cilium-qwx76\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") " pod="kube-system/cilium-qwx76" Jul 11 05:23:23.708597 kubelet[2701]: I0711 05:23:23.708568 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psbqp\" (UniqueName: 
\"kubernetes.io/projected/97133135-b529-481a-b9c6-3bb0d504fcf9-kube-api-access-psbqp\") pod \"kube-proxy-4wgn7\" (UID: \"97133135-b529-481a-b9c6-3bb0d504fcf9\") " pod="kube-system/kube-proxy-4wgn7" Jul 11 05:23:23.708597 kubelet[2701]: I0711 05:23:23.708584 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97133135-b529-481a-b9c6-3bb0d504fcf9-xtables-lock\") pod \"kube-proxy-4wgn7\" (UID: \"97133135-b529-481a-b9c6-3bb0d504fcf9\") " pod="kube-system/kube-proxy-4wgn7" Jul 11 05:23:23.708787 kubelet[2701]: I0711 05:23:23.708599 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-hubble-tls\") pod \"cilium-qwx76\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") " pod="kube-system/cilium-qwx76" Jul 11 05:23:23.708787 kubelet[2701]: I0711 05:23:23.708621 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-cni-path\") pod \"cilium-qwx76\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") " pod="kube-system/cilium-qwx76" Jul 11 05:23:23.708787 kubelet[2701]: I0711 05:23:23.708635 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-host-proc-sys-kernel\") pod \"cilium-qwx76\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") " pod="kube-system/cilium-qwx76" Jul 11 05:23:23.708787 kubelet[2701]: I0711 05:23:23.708650 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdc9b\" (UniqueName: \"kubernetes.io/projected/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-kube-api-access-fdc9b\") pod 
\"cilium-qwx76\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") " pod="kube-system/cilium-qwx76" Jul 11 05:23:23.708787 kubelet[2701]: I0711 05:23:23.708664 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-host-proc-sys-net\") pod \"cilium-qwx76\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") " pod="kube-system/cilium-qwx76" Jul 11 05:23:23.708787 kubelet[2701]: I0711 05:23:23.708684 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-xtables-lock\") pod \"cilium-qwx76\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") " pod="kube-system/cilium-qwx76" Jul 11 05:23:23.708996 kubelet[2701]: I0711 05:23:23.708701 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/97133135-b529-481a-b9c6-3bb0d504fcf9-kube-proxy\") pod \"kube-proxy-4wgn7\" (UID: \"97133135-b529-481a-b9c6-3bb0d504fcf9\") " pod="kube-system/kube-proxy-4wgn7" Jul 11 05:23:23.962631 kubelet[2701]: E0711 05:23:23.962490 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:23.963485 containerd[1574]: time="2025-07-11T05:23:23.963405808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4wgn7,Uid:97133135-b529-481a-b9c6-3bb0d504fcf9,Namespace:kube-system,Attempt:0,}" Jul 11 05:23:23.967972 kubelet[2701]: E0711 05:23:23.967929 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:23.968567 containerd[1574]: 
time="2025-07-11T05:23:23.968513341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qwx76,Uid:18f616e0-e8f7-4e47-b3fd-f2fd14382f5a,Namespace:kube-system,Attempt:0,}" Jul 11 05:23:24.132246 containerd[1574]: time="2025-07-11T05:23:24.132142852Z" level=info msg="connecting to shim e4243048a26643c1740f6deea9562351a15b6696d42ff4183fb9a6cdbe631fc5" address="unix:///run/containerd/s/c5906dfb320581ba917cd2f2a56f15034fa22c1a2cb65bf961aa0eba56260a12" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:23:24.132536 containerd[1574]: time="2025-07-11T05:23:24.132172097Z" level=info msg="connecting to shim 64020c2f02f2d297818af114b14c9ec4f7ae741cd65655c3616f5273f0f3f72f" address="unix:///run/containerd/s/fe90709298188405b822fda3f8d591b5a5b1c894c2b136aab3f4c2e8e1ec5a81" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:23:24.139611 systemd[1]: Created slice kubepods-besteffort-pod8e46fd3c_980e_4011_8b17_19a738d40c89.slice - libcontainer container kubepods-besteffort-pod8e46fd3c_980e_4011_8b17_19a738d40c89.slice. Jul 11 05:23:24.170948 systemd[1]: Started cri-containerd-64020c2f02f2d297818af114b14c9ec4f7ae741cd65655c3616f5273f0f3f72f.scope - libcontainer container 64020c2f02f2d297818af114b14c9ec4f7ae741cd65655c3616f5273f0f3f72f. Jul 11 05:23:24.173115 systemd[1]: Started cri-containerd-e4243048a26643c1740f6deea9562351a15b6696d42ff4183fb9a6cdbe631fc5.scope - libcontainer container e4243048a26643c1740f6deea9562351a15b6696d42ff4183fb9a6cdbe631fc5. 
Jul 11 05:23:24.207892 containerd[1574]: time="2025-07-11T05:23:24.207574491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qwx76,Uid:18f616e0-e8f7-4e47-b3fd-f2fd14382f5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"64020c2f02f2d297818af114b14c9ec4f7ae741cd65655c3616f5273f0f3f72f\"" Jul 11 05:23:24.209381 kubelet[2701]: E0711 05:23:24.209352 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:24.211055 kubelet[2701]: I0711 05:23:24.210934 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e46fd3c-980e-4011-8b17-19a738d40c89-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-8r4nt\" (UID: \"8e46fd3c-980e-4011-8b17-19a738d40c89\") " pod="kube-system/cilium-operator-6c4d7847fc-8r4nt" Jul 11 05:23:24.212266 containerd[1574]: time="2025-07-11T05:23:24.212224622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4wgn7,Uid:97133135-b529-481a-b9c6-3bb0d504fcf9,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4243048a26643c1740f6deea9562351a15b6696d42ff4183fb9a6cdbe631fc5\"" Jul 11 05:23:24.212487 kubelet[2701]: I0711 05:23:24.212457 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4zv9\" (UniqueName: \"kubernetes.io/projected/8e46fd3c-980e-4011-8b17-19a738d40c89-kube-api-access-g4zv9\") pod \"cilium-operator-6c4d7847fc-8r4nt\" (UID: \"8e46fd3c-980e-4011-8b17-19a738d40c89\") " pod="kube-system/cilium-operator-6c4d7847fc-8r4nt" Jul 11 05:23:24.214554 containerd[1574]: time="2025-07-11T05:23:24.213157481Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 11 05:23:24.216708 kubelet[2701]: E0711 05:23:24.216522 2701 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:24.221271 containerd[1574]: time="2025-07-11T05:23:24.221217185Z" level=info msg="CreateContainer within sandbox \"e4243048a26643c1740f6deea9562351a15b6696d42ff4183fb9a6cdbe631fc5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 11 05:23:24.238413 containerd[1574]: time="2025-07-11T05:23:24.238349536Z" level=info msg="Container 6f4318dbce3fc838a1ebc0f6ecad12b32c4485b8d09bd7b8bf93668ba4d61d28: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:23:24.247871 containerd[1574]: time="2025-07-11T05:23:24.247804942Z" level=info msg="CreateContainer within sandbox \"e4243048a26643c1740f6deea9562351a15b6696d42ff4183fb9a6cdbe631fc5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6f4318dbce3fc838a1ebc0f6ecad12b32c4485b8d09bd7b8bf93668ba4d61d28\"" Jul 11 05:23:24.249655 containerd[1574]: time="2025-07-11T05:23:24.249584938Z" level=info msg="StartContainer for \"6f4318dbce3fc838a1ebc0f6ecad12b32c4485b8d09bd7b8bf93668ba4d61d28\"" Jul 11 05:23:24.251296 containerd[1574]: time="2025-07-11T05:23:24.251271155Z" level=info msg="connecting to shim 6f4318dbce3fc838a1ebc0f6ecad12b32c4485b8d09bd7b8bf93668ba4d61d28" address="unix:///run/containerd/s/c5906dfb320581ba917cd2f2a56f15034fa22c1a2cb65bf961aa0eba56260a12" protocol=ttrpc version=3 Jul 11 05:23:24.283090 systemd[1]: Started cri-containerd-6f4318dbce3fc838a1ebc0f6ecad12b32c4485b8d09bd7b8bf93668ba4d61d28.scope - libcontainer container 6f4318dbce3fc838a1ebc0f6ecad12b32c4485b8d09bd7b8bf93668ba4d61d28. 
Jul 11 05:23:24.335329 containerd[1574]: time="2025-07-11T05:23:24.335221194Z" level=info msg="StartContainer for \"6f4318dbce3fc838a1ebc0f6ecad12b32c4485b8d09bd7b8bf93668ba4d61d28\" returns successfully" Jul 11 05:23:24.445362 kubelet[2701]: E0711 05:23:24.445303 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:24.446868 containerd[1574]: time="2025-07-11T05:23:24.446549177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8r4nt,Uid:8e46fd3c-980e-4011-8b17-19a738d40c89,Namespace:kube-system,Attempt:0,}" Jul 11 05:23:24.491876 containerd[1574]: time="2025-07-11T05:23:24.491819161Z" level=info msg="connecting to shim 9aa86efec40b4632238cfb9df6c83530974cc6f7b7f8c3ab9c526dc540ea0eb5" address="unix:///run/containerd/s/a239fda2913bd3ccab005131475e1e5e5d39f98e5428fb35ceff1ac62d148821" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:23:24.529021 systemd[1]: Started cri-containerd-9aa86efec40b4632238cfb9df6c83530974cc6f7b7f8c3ab9c526dc540ea0eb5.scope - libcontainer container 9aa86efec40b4632238cfb9df6c83530974cc6f7b7f8c3ab9c526dc540ea0eb5. 
Jul 11 05:23:24.575365 containerd[1574]: time="2025-07-11T05:23:24.575307189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8r4nt,Uid:8e46fd3c-980e-4011-8b17-19a738d40c89,Namespace:kube-system,Attempt:0,} returns sandbox id \"9aa86efec40b4632238cfb9df6c83530974cc6f7b7f8c3ab9c526dc540ea0eb5\"" Jul 11 05:23:24.576100 kubelet[2701]: E0711 05:23:24.576071 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:25.113826 kubelet[2701]: E0711 05:23:25.113710 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:25.124418 kubelet[2701]: I0711 05:23:25.124319 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4wgn7" podStartSLOduration=2.124283806 podStartE2EDuration="2.124283806s" podCreationTimestamp="2025-07-11 05:23:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:23:25.123582368 +0000 UTC m=+7.339519308" watchObservedRunningTime="2025-07-11 05:23:25.124283806 +0000 UTC m=+7.340220736" Jul 11 05:23:29.559891 kubelet[2701]: E0711 05:23:29.559843 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:30.121811 kubelet[2701]: E0711 05:23:30.121764 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:33.278375 update_engine[1559]: I20250711 05:23:33.278292 1559 update_attempter.cc:509] Updating boot flags... 
Jul 11 05:23:33.462757 kubelet[2701]: E0711 05:23:33.459939 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:33.698268 kubelet[2701]: E0711 05:23:33.698088 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:34.127120 kubelet[2701]: E0711 05:23:34.127079 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:34.478048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount196558114.mount: Deactivated successfully. Jul 11 05:23:37.047119 containerd[1574]: time="2025-07-11T05:23:37.047052353Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:37.048052 containerd[1574]: time="2025-07-11T05:23:37.047993822Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 11 05:23:37.049112 containerd[1574]: time="2025-07-11T05:23:37.049074483Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:37.050529 containerd[1574]: time="2025-07-11T05:23:37.050477294Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.837257804s" Jul 11 05:23:37.050529 containerd[1574]: time="2025-07-11T05:23:37.050512050Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 11 05:23:37.051581 containerd[1574]: time="2025-07-11T05:23:37.051520686Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 11 05:23:37.052640 containerd[1574]: time="2025-07-11T05:23:37.052611837Z" level=info msg="CreateContainer within sandbox \"64020c2f02f2d297818af114b14c9ec4f7ae741cd65655c3616f5273f0f3f72f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 11 05:23:37.063190 containerd[1574]: time="2025-07-11T05:23:37.063135749Z" level=info msg="Container 909d1e653c69ba0494bc46216e856e96793c023c4b7c91c51661ac2e14c2e791: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:23:37.066883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3187257516.mount: Deactivated successfully. 
Jul 11 05:23:37.069063 containerd[1574]: time="2025-07-11T05:23:37.069020518Z" level=info msg="CreateContainer within sandbox \"64020c2f02f2d297818af114b14c9ec4f7ae741cd65655c3616f5273f0f3f72f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"909d1e653c69ba0494bc46216e856e96793c023c4b7c91c51661ac2e14c2e791\"" Jul 11 05:23:37.069570 containerd[1574]: time="2025-07-11T05:23:37.069487329Z" level=info msg="StartContainer for \"909d1e653c69ba0494bc46216e856e96793c023c4b7c91c51661ac2e14c2e791\"" Jul 11 05:23:37.070415 containerd[1574]: time="2025-07-11T05:23:37.070347775Z" level=info msg="connecting to shim 909d1e653c69ba0494bc46216e856e96793c023c4b7c91c51661ac2e14c2e791" address="unix:///run/containerd/s/fe90709298188405b822fda3f8d591b5a5b1c894c2b136aab3f4c2e8e1ec5a81" protocol=ttrpc version=3 Jul 11 05:23:37.116880 systemd[1]: Started cri-containerd-909d1e653c69ba0494bc46216e856e96793c023c4b7c91c51661ac2e14c2e791.scope - libcontainer container 909d1e653c69ba0494bc46216e856e96793c023c4b7c91c51661ac2e14c2e791. Jul 11 05:23:37.150144 containerd[1574]: time="2025-07-11T05:23:37.150096662Z" level=info msg="StartContainer for \"909d1e653c69ba0494bc46216e856e96793c023c4b7c91c51661ac2e14c2e791\" returns successfully" Jul 11 05:23:37.162529 systemd[1]: cri-containerd-909d1e653c69ba0494bc46216e856e96793c023c4b7c91c51661ac2e14c2e791.scope: Deactivated successfully. 
Jul 11 05:23:37.165660 containerd[1574]: time="2025-07-11T05:23:37.165624899Z" level=info msg="received exit event container_id:\"909d1e653c69ba0494bc46216e856e96793c023c4b7c91c51661ac2e14c2e791\" id:\"909d1e653c69ba0494bc46216e856e96793c023c4b7c91c51661ac2e14c2e791\" pid:3139 exited_at:{seconds:1752211417 nanos:165141716}" Jul 11 05:23:37.165822 containerd[1574]: time="2025-07-11T05:23:37.165779220Z" level=info msg="TaskExit event in podsandbox handler container_id:\"909d1e653c69ba0494bc46216e856e96793c023c4b7c91c51661ac2e14c2e791\" id:\"909d1e653c69ba0494bc46216e856e96793c023c4b7c91c51661ac2e14c2e791\" pid:3139 exited_at:{seconds:1752211417 nanos:165141716}" Jul 11 05:23:37.188156 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-909d1e653c69ba0494bc46216e856e96793c023c4b7c91c51661ac2e14c2e791-rootfs.mount: Deactivated successfully. Jul 11 05:23:38.138291 kubelet[2701]: E0711 05:23:38.138219 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:38.141528 containerd[1574]: time="2025-07-11T05:23:38.141038758Z" level=info msg="CreateContainer within sandbox \"64020c2f02f2d297818af114b14c9ec4f7ae741cd65655c3616f5273f0f3f72f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 11 05:23:38.151452 containerd[1574]: time="2025-07-11T05:23:38.151406135Z" level=info msg="Container de9195c07f44b5661aa26e5327c971e6dc17b910535dac51c73133ad57e3f336: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:23:38.156206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3486275275.mount: Deactivated successfully. 
Jul 11 05:23:38.167132 containerd[1574]: time="2025-07-11T05:23:38.167087267Z" level=info msg="CreateContainer within sandbox \"64020c2f02f2d297818af114b14c9ec4f7ae741cd65655c3616f5273f0f3f72f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"de9195c07f44b5661aa26e5327c971e6dc17b910535dac51c73133ad57e3f336\"" Jul 11 05:23:38.167493 containerd[1574]: time="2025-07-11T05:23:38.167462857Z" level=info msg="StartContainer for \"de9195c07f44b5661aa26e5327c971e6dc17b910535dac51c73133ad57e3f336\"" Jul 11 05:23:38.168364 containerd[1574]: time="2025-07-11T05:23:38.168297152Z" level=info msg="connecting to shim de9195c07f44b5661aa26e5327c971e6dc17b910535dac51c73133ad57e3f336" address="unix:///run/containerd/s/fe90709298188405b822fda3f8d591b5a5b1c894c2b136aab3f4c2e8e1ec5a81" protocol=ttrpc version=3 Jul 11 05:23:38.188885 systemd[1]: Started cri-containerd-de9195c07f44b5661aa26e5327c971e6dc17b910535dac51c73133ad57e3f336.scope - libcontainer container de9195c07f44b5661aa26e5327c971e6dc17b910535dac51c73133ad57e3f336. Jul 11 05:23:38.220451 containerd[1574]: time="2025-07-11T05:23:38.220404782Z" level=info msg="StartContainer for \"de9195c07f44b5661aa26e5327c971e6dc17b910535dac51c73133ad57e3f336\" returns successfully" Jul 11 05:23:38.235123 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 05:23:38.235365 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 11 05:23:38.235641 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 11 05:23:38.237953 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 05:23:38.239997 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jul 11 05:23:38.240900 containerd[1574]: time="2025-07-11T05:23:38.240574210Z" level=info msg="received exit event container_id:\"de9195c07f44b5661aa26e5327c971e6dc17b910535dac51c73133ad57e3f336\" id:\"de9195c07f44b5661aa26e5327c971e6dc17b910535dac51c73133ad57e3f336\" pid:3186 exited_at:{seconds:1752211418 nanos:240289081}" Jul 11 05:23:38.240900 containerd[1574]: time="2025-07-11T05:23:38.240705207Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de9195c07f44b5661aa26e5327c971e6dc17b910535dac51c73133ad57e3f336\" id:\"de9195c07f44b5661aa26e5327c971e6dc17b910535dac51c73133ad57e3f336\" pid:3186 exited_at:{seconds:1752211418 nanos:240289081}" Jul 11 05:23:38.240721 systemd[1]: cri-containerd-de9195c07f44b5661aa26e5327c971e6dc17b910535dac51c73133ad57e3f336.scope: Deactivated successfully. Jul 11 05:23:38.262442 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 11 05:23:39.145147 kubelet[2701]: E0711 05:23:39.144998 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:23:39.148952 containerd[1574]: time="2025-07-11T05:23:39.148720773Z" level=info msg="CreateContainer within sandbox \"64020c2f02f2d297818af114b14c9ec4f7ae741cd65655c3616f5273f0f3f72f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 11 05:23:39.155251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de9195c07f44b5661aa26e5327c971e6dc17b910535dac51c73133ad57e3f336-rootfs.mount: Deactivated successfully. 
Jul 11 05:23:39.182236 containerd[1574]: time="2025-07-11T05:23:39.182072157Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:39.182728 containerd[1574]: time="2025-07-11T05:23:39.182698418Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 11 05:23:39.184486 containerd[1574]: time="2025-07-11T05:23:39.184444575Z" level=info msg="Container ba7d8750740a60871e24554c53b9615bb82da73b246ae9554af975f4d049bb03: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:23:39.188692 containerd[1574]: time="2025-07-11T05:23:39.188430088Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:39.198204 containerd[1574]: time="2025-07-11T05:23:39.198162368Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.146595926s" Jul 11 05:23:39.198204 containerd[1574]: time="2025-07-11T05:23:39.198197504Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 11 05:23:39.203342 containerd[1574]: time="2025-07-11T05:23:39.203290899Z" level=info msg="CreateContainer within sandbox 
\"9aa86efec40b4632238cfb9df6c83530974cc6f7b7f8c3ab9c526dc540ea0eb5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 11 05:23:39.212382 containerd[1574]: time="2025-07-11T05:23:39.212271170Z" level=info msg="CreateContainer within sandbox \"64020c2f02f2d297818af114b14c9ec4f7ae741cd65655c3616f5273f0f3f72f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ba7d8750740a60871e24554c53b9615bb82da73b246ae9554af975f4d049bb03\""
Jul 11 05:23:39.215171 containerd[1574]: time="2025-07-11T05:23:39.215114045Z" level=info msg="StartContainer for \"ba7d8750740a60871e24554c53b9615bb82da73b246ae9554af975f4d049bb03\""
Jul 11 05:23:39.219025 containerd[1574]: time="2025-07-11T05:23:39.218949986Z" level=info msg="connecting to shim ba7d8750740a60871e24554c53b9615bb82da73b246ae9554af975f4d049bb03" address="unix:///run/containerd/s/fe90709298188405b822fda3f8d591b5a5b1c894c2b136aab3f4c2e8e1ec5a81" protocol=ttrpc version=3
Jul 11 05:23:39.223341 containerd[1574]: time="2025-07-11T05:23:39.223255844Z" level=info msg="Container f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e: CDI devices from CRI Config.CDIDevices: []"
Jul 11 05:23:39.231765 containerd[1574]: time="2025-07-11T05:23:39.231698160Z" level=info msg="CreateContainer within sandbox \"9aa86efec40b4632238cfb9df6c83530974cc6f7b7f8c3ab9c526dc540ea0eb5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e\""
Jul 11 05:23:39.233085 containerd[1574]: time="2025-07-11T05:23:39.232420493Z" level=info msg="StartContainer for \"f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e\""
Jul 11 05:23:39.233953 containerd[1574]: time="2025-07-11T05:23:39.233892431Z" level=info msg="connecting to shim f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e" address="unix:///run/containerd/s/a239fda2913bd3ccab005131475e1e5e5d39f98e5428fb35ceff1ac62d148821" protocol=ttrpc version=3
Jul 11 05:23:39.259949 systemd[1]: Started cri-containerd-ba7d8750740a60871e24554c53b9615bb82da73b246ae9554af975f4d049bb03.scope - libcontainer container ba7d8750740a60871e24554c53b9615bb82da73b246ae9554af975f4d049bb03.
Jul 11 05:23:39.271888 systemd[1]: Started cri-containerd-f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e.scope - libcontainer container f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e.
Jul 11 05:23:39.365651 containerd[1574]: time="2025-07-11T05:23:39.365064239Z" level=info msg="StartContainer for \"f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e\" returns successfully"
Jul 11 05:23:39.381275 containerd[1574]: time="2025-07-11T05:23:39.381180861Z" level=info msg="StartContainer for \"ba7d8750740a60871e24554c53b9615bb82da73b246ae9554af975f4d049bb03\" returns successfully"
Jul 11 05:23:39.423238 systemd[1]: cri-containerd-ba7d8750740a60871e24554c53b9615bb82da73b246ae9554af975f4d049bb03.scope: Deactivated successfully.
Jul 11 05:23:39.436260 containerd[1574]: time="2025-07-11T05:23:39.436196698Z" level=info msg="received exit event container_id:\"ba7d8750740a60871e24554c53b9615bb82da73b246ae9554af975f4d049bb03\" id:\"ba7d8750740a60871e24554c53b9615bb82da73b246ae9554af975f4d049bb03\" pid:3263 exited_at:{seconds:1752211419 nanos:435812763}"
Jul 11 05:23:39.436593 containerd[1574]: time="2025-07-11T05:23:39.436534225Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba7d8750740a60871e24554c53b9615bb82da73b246ae9554af975f4d049bb03\" id:\"ba7d8750740a60871e24554c53b9615bb82da73b246ae9554af975f4d049bb03\" pid:3263 exited_at:{seconds:1752211419 nanos:435812763}"
Jul 11 05:23:40.148096 kubelet[2701]: E0711 05:23:40.148049 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:23:40.152018 kubelet[2701]: E0711 05:23:40.151970 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:23:40.153945 containerd[1574]: time="2025-07-11T05:23:40.153896078Z" level=info msg="CreateContainer within sandbox \"64020c2f02f2d297818af114b14c9ec4f7ae741cd65655c3616f5273f0f3f72f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 11 05:23:40.154103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba7d8750740a60871e24554c53b9615bb82da73b246ae9554af975f4d049bb03-rootfs.mount: Deactivated successfully.
Jul 11 05:23:40.165455 containerd[1574]: time="2025-07-11T05:23:40.165403772Z" level=info msg="Container 2742cdc2c0cabc1341fed07e3c1e4e50a8945c14d11479e8d9feebf6246fa5e3: CDI devices from CRI Config.CDIDevices: []"
Jul 11 05:23:40.173298 containerd[1574]: time="2025-07-11T05:23:40.173246320Z" level=info msg="CreateContainer within sandbox \"64020c2f02f2d297818af114b14c9ec4f7ae741cd65655c3616f5273f0f3f72f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2742cdc2c0cabc1341fed07e3c1e4e50a8945c14d11479e8d9feebf6246fa5e3\""
Jul 11 05:23:40.173827 containerd[1574]: time="2025-07-11T05:23:40.173784906Z" level=info msg="StartContainer for \"2742cdc2c0cabc1341fed07e3c1e4e50a8945c14d11479e8d9feebf6246fa5e3\""
Jul 11 05:23:40.174831 containerd[1574]: time="2025-07-11T05:23:40.174805291Z" level=info msg="connecting to shim 2742cdc2c0cabc1341fed07e3c1e4e50a8945c14d11479e8d9feebf6246fa5e3" address="unix:///run/containerd/s/fe90709298188405b822fda3f8d591b5a5b1c894c2b136aab3f4c2e8e1ec5a81" protocol=ttrpc version=3
Jul 11 05:23:40.200888 systemd[1]: Started cri-containerd-2742cdc2c0cabc1341fed07e3c1e4e50a8945c14d11479e8d9feebf6246fa5e3.scope - libcontainer container 2742cdc2c0cabc1341fed07e3c1e4e50a8945c14d11479e8d9feebf6246fa5e3.
Jul 11 05:23:40.253037 systemd[1]: cri-containerd-2742cdc2c0cabc1341fed07e3c1e4e50a8945c14d11479e8d9feebf6246fa5e3.scope: Deactivated successfully.
Jul 11 05:23:40.253873 containerd[1574]: time="2025-07-11T05:23:40.253824375Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2742cdc2c0cabc1341fed07e3c1e4e50a8945c14d11479e8d9feebf6246fa5e3\" id:\"2742cdc2c0cabc1341fed07e3c1e4e50a8945c14d11479e8d9feebf6246fa5e3\" pid:3326 exited_at:{seconds:1752211420 nanos:253352876}"
Jul 11 05:23:40.282762 containerd[1574]: time="2025-07-11T05:23:40.282651076Z" level=info msg="received exit event container_id:\"2742cdc2c0cabc1341fed07e3c1e4e50a8945c14d11479e8d9feebf6246fa5e3\" id:\"2742cdc2c0cabc1341fed07e3c1e4e50a8945c14d11479e8d9feebf6246fa5e3\" pid:3326 exited_at:{seconds:1752211420 nanos:253352876}"
Jul 11 05:23:40.293316 containerd[1574]: time="2025-07-11T05:23:40.293227693Z" level=info msg="StartContainer for \"2742cdc2c0cabc1341fed07e3c1e4e50a8945c14d11479e8d9feebf6246fa5e3\" returns successfully"
Jul 11 05:23:40.317131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2742cdc2c0cabc1341fed07e3c1e4e50a8945c14d11479e8d9feebf6246fa5e3-rootfs.mount: Deactivated successfully.
Jul 11 05:23:40.380953 kubelet[2701]: I0711 05:23:40.380726 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-8r4nt" podStartSLOduration=1.7564696990000002 podStartE2EDuration="16.380691831s" podCreationTimestamp="2025-07-11 05:23:24 +0000 UTC" firstStartedPulling="2025-07-11 05:23:24.576652806 +0000 UTC m=+6.792589726" lastFinishedPulling="2025-07-11 05:23:39.200874938 +0000 UTC m=+21.416811858" observedRunningTime="2025-07-11 05:23:40.340897447 +0000 UTC m=+22.556834367" watchObservedRunningTime="2025-07-11 05:23:40.380691831 +0000 UTC m=+22.596628751"
Jul 11 05:23:41.158271 kubelet[2701]: E0711 05:23:41.158078 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:23:41.158271 kubelet[2701]: E0711 05:23:41.158130 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:23:41.160095 containerd[1574]: time="2025-07-11T05:23:41.159992499Z" level=info msg="CreateContainer within sandbox \"64020c2f02f2d297818af114b14c9ec4f7ae741cd65655c3616f5273f0f3f72f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 11 05:23:41.185415 containerd[1574]: time="2025-07-11T05:23:41.184995348Z" level=info msg="Container c04ac98e41436ca2d1bf66e9af8652a36e518ef432a273fd46ef0f9047b61ac6: CDI devices from CRI Config.CDIDevices: []"
Jul 11 05:23:41.193985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2566795475.mount: Deactivated successfully.
Jul 11 05:23:41.198239 containerd[1574]: time="2025-07-11T05:23:41.198180045Z" level=info msg="CreateContainer within sandbox \"64020c2f02f2d297818af114b14c9ec4f7ae741cd65655c3616f5273f0f3f72f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c04ac98e41436ca2d1bf66e9af8652a36e518ef432a273fd46ef0f9047b61ac6\""
Jul 11 05:23:41.201922 containerd[1574]: time="2025-07-11T05:23:41.201879903Z" level=info msg="StartContainer for \"c04ac98e41436ca2d1bf66e9af8652a36e518ef432a273fd46ef0f9047b61ac6\""
Jul 11 05:23:41.204427 containerd[1574]: time="2025-07-11T05:23:41.204362465Z" level=info msg="connecting to shim c04ac98e41436ca2d1bf66e9af8652a36e518ef432a273fd46ef0f9047b61ac6" address="unix:///run/containerd/s/fe90709298188405b822fda3f8d591b5a5b1c894c2b136aab3f4c2e8e1ec5a81" protocol=ttrpc version=3
Jul 11 05:23:41.236880 systemd[1]: Started cri-containerd-c04ac98e41436ca2d1bf66e9af8652a36e518ef432a273fd46ef0f9047b61ac6.scope - libcontainer container c04ac98e41436ca2d1bf66e9af8652a36e518ef432a273fd46ef0f9047b61ac6.
Jul 11 05:23:41.436873 containerd[1574]: time="2025-07-11T05:23:41.436473601Z" level=info msg="StartContainer for \"c04ac98e41436ca2d1bf66e9af8652a36e518ef432a273fd46ef0f9047b61ac6\" returns successfully"
Jul 11 05:23:41.691131 containerd[1574]: time="2025-07-11T05:23:41.690959418Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c04ac98e41436ca2d1bf66e9af8652a36e518ef432a273fd46ef0f9047b61ac6\" id:\"eab3bcf2201684502a5189779af158441646856cf417f7d106f1b509a31c3833\" pid:3405 exited_at:{seconds:1752211421 nanos:690457070}"
Jul 11 05:23:41.756720 kubelet[2701]: I0711 05:23:41.756657 2701 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 11 05:23:41.795207 systemd[1]: Created slice kubepods-burstable-pod6b3faa07_a7ac_4380_9f0f_17cc34576686.slice - libcontainer container kubepods-burstable-pod6b3faa07_a7ac_4380_9f0f_17cc34576686.slice.
Jul 11 05:23:41.809470 systemd[1]: Created slice kubepods-burstable-podfc3eacaf_1a2e_42f8_aaf7_94fb17e26ba3.slice - libcontainer container kubepods-burstable-podfc3eacaf_1a2e_42f8_aaf7_94fb17e26ba3.slice.
Jul 11 05:23:41.835769 kubelet[2701]: I0711 05:23:41.835687 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ksnh\" (UniqueName: \"kubernetes.io/projected/6b3faa07-a7ac-4380-9f0f-17cc34576686-kube-api-access-6ksnh\") pod \"coredns-668d6bf9bc-4rnbp\" (UID: \"6b3faa07-a7ac-4380-9f0f-17cc34576686\") " pod="kube-system/coredns-668d6bf9bc-4rnbp"
Jul 11 05:23:41.835970 kubelet[2701]: I0711 05:23:41.835820 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc3eacaf-1a2e-42f8-aaf7-94fb17e26ba3-config-volume\") pod \"coredns-668d6bf9bc-dlns5\" (UID: \"fc3eacaf-1a2e-42f8-aaf7-94fb17e26ba3\") " pod="kube-system/coredns-668d6bf9bc-dlns5"
Jul 11 05:23:41.835970 kubelet[2701]: I0711 05:23:41.835844 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6kd2\" (UniqueName: \"kubernetes.io/projected/fc3eacaf-1a2e-42f8-aaf7-94fb17e26ba3-kube-api-access-l6kd2\") pod \"coredns-668d6bf9bc-dlns5\" (UID: \"fc3eacaf-1a2e-42f8-aaf7-94fb17e26ba3\") " pod="kube-system/coredns-668d6bf9bc-dlns5"
Jul 11 05:23:41.836060 kubelet[2701]: I0711 05:23:41.835989 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b3faa07-a7ac-4380-9f0f-17cc34576686-config-volume\") pod \"coredns-668d6bf9bc-4rnbp\" (UID: \"6b3faa07-a7ac-4380-9f0f-17cc34576686\") " pod="kube-system/coredns-668d6bf9bc-4rnbp"
Jul 11 05:23:42.100005 kubelet[2701]: E0711 05:23:42.099953 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:23:42.101091 containerd[1574]: time="2025-07-11T05:23:42.101037109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4rnbp,Uid:6b3faa07-a7ac-4380-9f0f-17cc34576686,Namespace:kube-system,Attempt:0,}"
Jul 11 05:23:42.116605 kubelet[2701]: E0711 05:23:42.116552 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:23:42.117111 containerd[1574]: time="2025-07-11T05:23:42.117031766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dlns5,Uid:fc3eacaf-1a2e-42f8-aaf7-94fb17e26ba3,Namespace:kube-system,Attempt:0,}"
Jul 11 05:23:42.190034 kubelet[2701]: E0711 05:23:42.187718 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:23:42.206088 kubelet[2701]: I0711 05:23:42.206010 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qwx76" podStartSLOduration=6.367236448 podStartE2EDuration="19.205977677s" podCreationTimestamp="2025-07-11 05:23:23 +0000 UTC" firstStartedPulling="2025-07-11 05:23:24.212553228 +0000 UTC m=+6.428490148" lastFinishedPulling="2025-07-11 05:23:37.051294457 +0000 UTC m=+19.267231377" observedRunningTime="2025-07-11 05:23:42.205853864 +0000 UTC m=+24.421790784" watchObservedRunningTime="2025-07-11 05:23:42.205977677 +0000 UTC m=+24.421914598"
Jul 11 05:23:43.185726 systemd[1]: Started sshd@7-10.0.0.87:22-10.0.0.1:54726.service - OpenSSH per-connection server daemon (10.0.0.1:54726).
Jul 11 05:23:43.191073 kubelet[2701]: E0711 05:23:43.190991 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:23:43.245236 sshd[3493]: Accepted publickey for core from 10.0.0.1 port 54726 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:23:43.246771 sshd-session[3493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:23:43.254911 systemd-logind[1554]: New session 8 of user core.
Jul 11 05:23:43.262895 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 11 05:23:43.437013 sshd[3496]: Connection closed by 10.0.0.1 port 54726
Jul 11 05:23:43.436566 sshd-session[3493]: pam_unix(sshd:session): session closed for user core
Jul 11 05:23:43.445199 systemd[1]: sshd@7-10.0.0.87:22-10.0.0.1:54726.service: Deactivated successfully.
Jul 11 05:23:43.447554 systemd[1]: session-8.scope: Deactivated successfully.
Jul 11 05:23:43.448395 systemd-logind[1554]: Session 8 logged out. Waiting for processes to exit.
Jul 11 05:23:43.449647 systemd-logind[1554]: Removed session 8.
Jul 11 05:23:43.685940 systemd-networkd[1491]: cilium_host: Link UP
Jul 11 05:23:43.686173 systemd-networkd[1491]: cilium_net: Link UP
Jul 11 05:23:43.686413 systemd-networkd[1491]: cilium_net: Gained carrier
Jul 11 05:23:43.686647 systemd-networkd[1491]: cilium_host: Gained carrier
Jul 11 05:23:43.797068 systemd-networkd[1491]: cilium_vxlan: Link UP
Jul 11 05:23:43.797081 systemd-networkd[1491]: cilium_vxlan: Gained carrier
Jul 11 05:23:44.027771 kernel: NET: Registered PF_ALG protocol family
Jul 11 05:23:44.193312 kubelet[2701]: E0711 05:23:44.193271 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:23:44.413034 systemd-networkd[1491]: cilium_net: Gained IPv6LL
Jul 11 05:23:44.668928 systemd-networkd[1491]: cilium_host: Gained IPv6LL
Jul 11 05:23:44.753587 systemd-networkd[1491]: lxc_health: Link UP
Jul 11 05:23:44.766108 systemd-networkd[1491]: lxc_health: Gained carrier
Jul 11 05:23:44.989060 systemd-networkd[1491]: cilium_vxlan: Gained IPv6LL
Jul 11 05:23:45.168744 systemd-networkd[1491]: lxce11e225267a8: Link UP
Jul 11 05:23:45.170760 kernel: eth0: renamed from tmpa7671
Jul 11 05:23:45.172244 systemd-networkd[1491]: lxce11e225267a8: Gained carrier
Jul 11 05:23:45.179050 systemd-networkd[1491]: lxc58d7d1ac4c80: Link UP
Jul 11 05:23:45.188762 kernel: eth0: renamed from tmp7b88f
Jul 11 05:23:45.194131 systemd-networkd[1491]: lxc58d7d1ac4c80: Gained carrier
Jul 11 05:23:45.970237 kubelet[2701]: E0711 05:23:45.970184 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:23:46.199266 kubelet[2701]: E0711 05:23:46.199219 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:23:46.845049 systemd-networkd[1491]: lxc_health: Gained IPv6LL
Jul 11 05:23:46.972965 systemd-networkd[1491]: lxc58d7d1ac4c80: Gained IPv6LL
Jul 11 05:23:47.101029 systemd-networkd[1491]: lxce11e225267a8: Gained IPv6LL
Jul 11 05:23:47.200469 kubelet[2701]: E0711 05:23:47.200427 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:23:48.450967 systemd[1]: Started sshd@8-10.0.0.87:22-10.0.0.1:54736.service - OpenSSH per-connection server daemon (10.0.0.1:54736).
Jul 11 05:23:48.513764 sshd[3896]: Accepted publickey for core from 10.0.0.1 port 54736 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:23:48.514986 sshd-session[3896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:23:48.519631 systemd-logind[1554]: New session 9 of user core.
Jul 11 05:23:48.528856 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 11 05:23:48.683345 containerd[1574]: time="2025-07-11T05:23:48.683072199Z" level=info msg="connecting to shim 7b88f997e07c5fb68fd8a83a7186f183c8bf7a73d8923105a28d6fedc68c6d6c" address="unix:///run/containerd/s/843be4a543644c795406c3f26be3be7dfff5bee64dc4a505c0e5c111e248ddbb" namespace=k8s.io protocol=ttrpc version=3
Jul 11 05:23:48.683969 containerd[1574]: time="2025-07-11T05:23:48.683694740Z" level=info msg="connecting to shim a76716a6c021e7e7c5611b722c6d4c808f65d4c4320f46d7250f46893aa00335" address="unix:///run/containerd/s/25644ecb12dbb962d84a4ee7fbee0002176fb3ff325e94a4748fe3eb473af7dc" namespace=k8s.io protocol=ttrpc version=3
Jul 11 05:23:48.689171 sshd[3902]: Connection closed by 10.0.0.1 port 54736
Jul 11 05:23:48.691369 sshd-session[3896]: pam_unix(sshd:session): session closed for user core
Jul 11 05:23:48.699054 systemd[1]: sshd@8-10.0.0.87:22-10.0.0.1:54736.service: Deactivated successfully.
Jul 11 05:23:48.702286 systemd[1]: session-9.scope: Deactivated successfully.
Jul 11 05:23:48.704651 systemd-logind[1554]: Session 9 logged out. Waiting for processes to exit.
Jul 11 05:23:48.706914 systemd-logind[1554]: Removed session 9.
Jul 11 05:23:48.714906 systemd[1]: Started cri-containerd-a76716a6c021e7e7c5611b722c6d4c808f65d4c4320f46d7250f46893aa00335.scope - libcontainer container a76716a6c021e7e7c5611b722c6d4c808f65d4c4320f46d7250f46893aa00335.
Jul 11 05:23:48.718158 systemd[1]: Started cri-containerd-7b88f997e07c5fb68fd8a83a7186f183c8bf7a73d8923105a28d6fedc68c6d6c.scope - libcontainer container 7b88f997e07c5fb68fd8a83a7186f183c8bf7a73d8923105a28d6fedc68c6d6c.
Jul 11 05:23:48.731222 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 11 05:23:48.732576 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 11 05:23:48.764223 containerd[1574]: time="2025-07-11T05:23:48.764166554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4rnbp,Uid:6b3faa07-a7ac-4380-9f0f-17cc34576686,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b88f997e07c5fb68fd8a83a7186f183c8bf7a73d8923105a28d6fedc68c6d6c\""
Jul 11 05:23:48.764936 kubelet[2701]: E0711 05:23:48.764912 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:23:48.767399 containerd[1574]: time="2025-07-11T05:23:48.767362690Z" level=info msg="CreateContainer within sandbox \"7b88f997e07c5fb68fd8a83a7186f183c8bf7a73d8923105a28d6fedc68c6d6c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 11 05:23:48.767555 containerd[1574]: time="2025-07-11T05:23:48.767375394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dlns5,Uid:fc3eacaf-1a2e-42f8-aaf7-94fb17e26ba3,Namespace:kube-system,Attempt:0,} returns sandbox id \"a76716a6c021e7e7c5611b722c6d4c808f65d4c4320f46d7250f46893aa00335\""
Jul 11 05:23:48.768395 kubelet[2701]: E0711 05:23:48.768338 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:23:48.781157 containerd[1574]: time="2025-07-11T05:23:48.781125274Z" level=info msg="CreateContainer within sandbox \"a76716a6c021e7e7c5611b722c6d4c808f65d4c4320f46d7250f46893aa00335\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 11 05:23:48.788905 containerd[1574]: time="2025-07-11T05:23:48.788845415Z" level=info msg="Container 195d1edb277ed782d41816ed6110a96cb98668e68139dab2217b9434e6aadf78: CDI devices from CRI Config.CDIDevices: []"
Jul 11 05:23:48.792609 containerd[1574]: time="2025-07-11T05:23:48.792567751Z" level=info msg="Container 9a48b916bba1176738f5fb5bf245dd8992918968cf3ed24b532fea1e7ef95203: CDI devices from CRI Config.CDIDevices: []"
Jul 11 05:23:49.198348 containerd[1574]: time="2025-07-11T05:23:49.198281703Z" level=info msg="CreateContainer within sandbox \"a76716a6c021e7e7c5611b722c6d4c808f65d4c4320f46d7250f46893aa00335\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9a48b916bba1176738f5fb5bf245dd8992918968cf3ed24b532fea1e7ef95203\""
Jul 11 05:23:49.199687 containerd[1574]: time="2025-07-11T05:23:49.198958697Z" level=info msg="StartContainer for \"9a48b916bba1176738f5fb5bf245dd8992918968cf3ed24b532fea1e7ef95203\""
Jul 11 05:23:49.202855 containerd[1574]: time="2025-07-11T05:23:49.200695816Z" level=info msg="connecting to shim 9a48b916bba1176738f5fb5bf245dd8992918968cf3ed24b532fea1e7ef95203" address="unix:///run/containerd/s/25644ecb12dbb962d84a4ee7fbee0002176fb3ff325e94a4748fe3eb473af7dc" protocol=ttrpc version=3
Jul 11 05:23:49.236887 systemd[1]: Started cri-containerd-9a48b916bba1176738f5fb5bf245dd8992918968cf3ed24b532fea1e7ef95203.scope - libcontainer container 9a48b916bba1176738f5fb5bf245dd8992918968cf3ed24b532fea1e7ef95203.
Jul 11 05:23:49.459424 containerd[1574]: time="2025-07-11T05:23:49.459276986Z" level=info msg="CreateContainer within sandbox \"7b88f997e07c5fb68fd8a83a7186f183c8bf7a73d8923105a28d6fedc68c6d6c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"195d1edb277ed782d41816ed6110a96cb98668e68139dab2217b9434e6aadf78\""
Jul 11 05:23:49.459641 containerd[1574]: time="2025-07-11T05:23:49.459425596Z" level=info msg="StartContainer for \"9a48b916bba1176738f5fb5bf245dd8992918968cf3ed24b532fea1e7ef95203\" returns successfully"
Jul 11 05:23:49.461530 containerd[1574]: time="2025-07-11T05:23:49.461281258Z" level=info msg="StartContainer for \"195d1edb277ed782d41816ed6110a96cb98668e68139dab2217b9434e6aadf78\""
Jul 11 05:23:49.462404 containerd[1574]: time="2025-07-11T05:23:49.462373593Z" level=info msg="connecting to shim 195d1edb277ed782d41816ed6110a96cb98668e68139dab2217b9434e6aadf78" address="unix:///run/containerd/s/843be4a543644c795406c3f26be3be7dfff5bee64dc4a505c0e5c111e248ddbb" protocol=ttrpc version=3
Jul 11 05:23:49.497990 systemd[1]: Started cri-containerd-195d1edb277ed782d41816ed6110a96cb98668e68139dab2217b9434e6aadf78.scope - libcontainer container 195d1edb277ed782d41816ed6110a96cb98668e68139dab2217b9434e6aadf78.
Jul 11 05:23:49.536572 containerd[1574]: time="2025-07-11T05:23:49.536502527Z" level=info msg="StartContainer for \"195d1edb277ed782d41816ed6110a96cb98668e68139dab2217b9434e6aadf78\" returns successfully"
Jul 11 05:23:49.675803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2419898564.mount: Deactivated successfully.
Jul 11 05:23:50.211548 kubelet[2701]: E0711 05:23:50.211475 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:23:50.214938 kubelet[2701]: E0711 05:23:50.214355 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:23:50.222898 kubelet[2701]: I0711 05:23:50.222550 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4rnbp" podStartSLOduration=26.222530154 podStartE2EDuration="26.222530154s" podCreationTimestamp="2025-07-11 05:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:23:50.222034271 +0000 UTC m=+32.437971211" watchObservedRunningTime="2025-07-11 05:23:50.222530154 +0000 UTC m=+32.438467084"
Jul 11 05:23:51.216555 kubelet[2701]: E0711 05:23:51.216507 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:23:51.217000 kubelet[2701]: E0711 05:23:51.216507 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:23:52.217575 kubelet[2701]: E0711 05:23:52.217540 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:23:52.218051 kubelet[2701]: E0711 05:23:52.217627 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:23:53.704795 systemd[1]: Started sshd@9-10.0.0.87:22-10.0.0.1:39096.service - OpenSSH per-connection server daemon (10.0.0.1:39096).
Jul 11 05:23:53.754129 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 39096 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:23:53.755587 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:23:53.760149 systemd-logind[1554]: New session 10 of user core.
Jul 11 05:23:53.769911 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 11 05:23:53.901038 sshd[4087]: Connection closed by 10.0.0.1 port 39096
Jul 11 05:23:53.901386 sshd-session[4084]: pam_unix(sshd:session): session closed for user core
Jul 11 05:23:53.906309 systemd[1]: sshd@9-10.0.0.87:22-10.0.0.1:39096.service: Deactivated successfully.
Jul 11 05:23:53.909697 systemd[1]: session-10.scope: Deactivated successfully.
Jul 11 05:23:53.911167 systemd-logind[1554]: Session 10 logged out. Waiting for processes to exit.
Jul 11 05:23:53.912522 systemd-logind[1554]: Removed session 10.
Jul 11 05:23:58.915449 systemd[1]: Started sshd@10-10.0.0.87:22-10.0.0.1:39106.service - OpenSSH per-connection server daemon (10.0.0.1:39106).
Jul 11 05:23:58.971869 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 39106 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:23:58.973294 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:23:58.977353 systemd-logind[1554]: New session 11 of user core.
Jul 11 05:23:58.986849 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 11 05:23:59.112207 sshd[4108]: Connection closed by 10.0.0.1 port 39106
Jul 11 05:23:59.112937 sshd-session[4105]: pam_unix(sshd:session): session closed for user core
Jul 11 05:23:59.129418 systemd[1]: sshd@10-10.0.0.87:22-10.0.0.1:39106.service: Deactivated successfully.
Jul 11 05:23:59.131495 systemd[1]: session-11.scope: Deactivated successfully.
Jul 11 05:23:59.132317 systemd-logind[1554]: Session 11 logged out. Waiting for processes to exit.
Jul 11 05:23:59.136009 systemd[1]: Started sshd@11-10.0.0.87:22-10.0.0.1:39110.service - OpenSSH per-connection server daemon (10.0.0.1:39110).
Jul 11 05:23:59.137023 systemd-logind[1554]: Removed session 11.
Jul 11 05:23:59.197897 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 39110 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:23:59.199366 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:23:59.203687 systemd-logind[1554]: New session 12 of user core.
Jul 11 05:23:59.210862 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 11 05:23:59.361041 sshd[4126]: Connection closed by 10.0.0.1 port 39110
Jul 11 05:23:59.362922 sshd-session[4123]: pam_unix(sshd:session): session closed for user core
Jul 11 05:23:59.374417 systemd[1]: sshd@11-10.0.0.87:22-10.0.0.1:39110.service: Deactivated successfully.
Jul 11 05:23:59.377345 systemd[1]: session-12.scope: Deactivated successfully.
Jul 11 05:23:59.379942 systemd-logind[1554]: Session 12 logged out. Waiting for processes to exit.
Jul 11 05:23:59.383903 systemd[1]: Started sshd@12-10.0.0.87:22-10.0.0.1:39112.service - OpenSSH per-connection server daemon (10.0.0.1:39112).
Jul 11 05:23:59.385109 systemd-logind[1554]: Removed session 12.
Jul 11 05:23:59.443650 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 39112 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:23:59.445493 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:23:59.450491 systemd-logind[1554]: New session 13 of user core.
Jul 11 05:23:59.460921 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 11 05:23:59.578716 sshd[4141]: Connection closed by 10.0.0.1 port 39112
Jul 11 05:23:59.579118 sshd-session[4138]: pam_unix(sshd:session): session closed for user core
Jul 11 05:23:59.583908 systemd[1]: sshd@12-10.0.0.87:22-10.0.0.1:39112.service: Deactivated successfully.
Jul 11 05:23:59.586216 systemd[1]: session-13.scope: Deactivated successfully.
Jul 11 05:23:59.587037 systemd-logind[1554]: Session 13 logged out. Waiting for processes to exit.
Jul 11 05:23:59.588487 systemd-logind[1554]: Removed session 13.
Jul 11 05:24:04.595794 systemd[1]: Started sshd@13-10.0.0.87:22-10.0.0.1:50632.service - OpenSSH per-connection server daemon (10.0.0.1:50632).
Jul 11 05:24:04.641319 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 50632 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:24:04.642885 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:24:04.647315 systemd-logind[1554]: New session 14 of user core.
Jul 11 05:24:04.656875 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 11 05:24:04.775720 sshd[4157]: Connection closed by 10.0.0.1 port 50632
Jul 11 05:24:04.776105 sshd-session[4154]: pam_unix(sshd:session): session closed for user core
Jul 11 05:24:04.781209 systemd[1]: sshd@13-10.0.0.87:22-10.0.0.1:50632.service: Deactivated successfully.
Jul 11 05:24:04.783357 systemd[1]: session-14.scope: Deactivated successfully.
Jul 11 05:24:04.784240 systemd-logind[1554]: Session 14 logged out. Waiting for processes to exit.
Jul 11 05:24:04.785332 systemd-logind[1554]: Removed session 14.
Jul 11 05:24:09.791622 systemd[1]: Started sshd@14-10.0.0.87:22-10.0.0.1:60532.service - OpenSSH per-connection server daemon (10.0.0.1:60532).
Jul 11 05:24:09.855447 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 60532 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:24:09.857233 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:24:09.861917 systemd-logind[1554]: New session 15 of user core.
Jul 11 05:24:09.870943 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 11 05:24:09.988290 sshd[4174]: Connection closed by 10.0.0.1 port 60532
Jul 11 05:24:09.988692 sshd-session[4171]: pam_unix(sshd:session): session closed for user core
Jul 11 05:24:10.002346 systemd[1]: sshd@14-10.0.0.87:22-10.0.0.1:60532.service: Deactivated successfully.
Jul 11 05:24:10.004170 systemd[1]: session-15.scope: Deactivated successfully.
Jul 11 05:24:10.005188 systemd-logind[1554]: Session 15 logged out. Waiting for processes to exit.
Jul 11 05:24:10.008329 systemd[1]: Started sshd@15-10.0.0.87:22-10.0.0.1:60538.service - OpenSSH per-connection server daemon (10.0.0.1:60538).
Jul 11 05:24:10.009285 systemd-logind[1554]: Removed session 15.
Jul 11 05:24:10.059302 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 60538 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:24:10.060901 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:24:10.065503 systemd-logind[1554]: New session 16 of user core.
Jul 11 05:24:10.071866 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 11 05:24:10.256372 sshd[4191]: Connection closed by 10.0.0.1 port 60538
Jul 11 05:24:10.256914 sshd-session[4188]: pam_unix(sshd:session): session closed for user core
Jul 11 05:24:10.267427 systemd[1]: sshd@15-10.0.0.87:22-10.0.0.1:60538.service: Deactivated successfully.
Jul 11 05:24:10.269505 systemd[1]: session-16.scope: Deactivated successfully.
Jul 11 05:24:10.270401 systemd-logind[1554]: Session 16 logged out. Waiting for processes to exit.
Jul 11 05:24:10.273382 systemd[1]: Started sshd@16-10.0.0.87:22-10.0.0.1:60546.service - OpenSSH per-connection server daemon (10.0.0.1:60546).
Jul 11 05:24:10.274394 systemd-logind[1554]: Removed session 16.
Jul 11 05:24:10.341808 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 60546 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:24:10.343340 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:24:10.348272 systemd-logind[1554]: New session 17 of user core.
Jul 11 05:24:10.358893 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 11 05:24:11.088534 sshd[4205]: Connection closed by 10.0.0.1 port 60546
Jul 11 05:24:11.090981 sshd-session[4202]: pam_unix(sshd:session): session closed for user core
Jul 11 05:24:11.099954 systemd[1]: sshd@16-10.0.0.87:22-10.0.0.1:60546.service: Deactivated successfully.
Jul 11 05:24:11.102483 systemd[1]: session-17.scope: Deactivated successfully.
Jul 11 05:24:11.103412 systemd-logind[1554]: Session 17 logged out. Waiting for processes to exit.
Jul 11 05:24:11.107143 systemd[1]: Started sshd@17-10.0.0.87:22-10.0.0.1:60554.service - OpenSSH per-connection server daemon (10.0.0.1:60554).
Jul 11 05:24:11.107987 systemd-logind[1554]: Removed session 17.
Jul 11 05:24:11.168063 sshd[4224]: Accepted publickey for core from 10.0.0.1 port 60554 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:24:11.169862 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:24:11.174432 systemd-logind[1554]: New session 18 of user core.
Jul 11 05:24:11.186855 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 11 05:24:11.402327 sshd[4227]: Connection closed by 10.0.0.1 port 60554
Jul 11 05:24:11.402839 sshd-session[4224]: pam_unix(sshd:session): session closed for user core
Jul 11 05:24:11.412692 systemd[1]: sshd@17-10.0.0.87:22-10.0.0.1:60554.service: Deactivated successfully.
Jul 11 05:24:11.415120 systemd[1]: session-18.scope: Deactivated successfully.
Jul 11 05:24:11.416947 systemd-logind[1554]: Session 18 logged out. Waiting for processes to exit.
Jul 11 05:24:11.419975 systemd[1]: Started sshd@18-10.0.0.87:22-10.0.0.1:60558.service - OpenSSH per-connection server daemon (10.0.0.1:60558).
Jul 11 05:24:11.420948 systemd-logind[1554]: Removed session 18.
Jul 11 05:24:11.479164 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 60558 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:24:11.480825 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:24:11.485721 systemd-logind[1554]: New session 19 of user core.
Jul 11 05:24:11.498916 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 11 05:24:11.609391 sshd[4241]: Connection closed by 10.0.0.1 port 60558
Jul 11 05:24:11.609830 sshd-session[4238]: pam_unix(sshd:session): session closed for user core
Jul 11 05:24:11.614508 systemd[1]: sshd@18-10.0.0.87:22-10.0.0.1:60558.service: Deactivated successfully.
Jul 11 05:24:11.616777 systemd[1]: session-19.scope: Deactivated successfully.
Jul 11 05:24:11.617666 systemd-logind[1554]: Session 19 logged out. Waiting for processes to exit.
Jul 11 05:24:11.619259 systemd-logind[1554]: Removed session 19.
Jul 11 05:24:16.631203 systemd[1]: Started sshd@19-10.0.0.87:22-10.0.0.1:60564.service - OpenSSH per-connection server daemon (10.0.0.1:60564).
Jul 11 05:24:16.697160 sshd[4254]: Accepted publickey for core from 10.0.0.1 port 60564 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:24:16.698798 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:24:16.703389 systemd-logind[1554]: New session 20 of user core.
Jul 11 05:24:16.712923 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 11 05:24:16.831975 sshd[4257]: Connection closed by 10.0.0.1 port 60564
Jul 11 05:24:16.832342 sshd-session[4254]: pam_unix(sshd:session): session closed for user core
Jul 11 05:24:16.837417 systemd[1]: sshd@19-10.0.0.87:22-10.0.0.1:60564.service: Deactivated successfully.
Jul 11 05:24:16.839684 systemd[1]: session-20.scope: Deactivated successfully.
Jul 11 05:24:16.840597 systemd-logind[1554]: Session 20 logged out. Waiting for processes to exit.
Jul 11 05:24:16.842311 systemd-logind[1554]: Removed session 20.
Jul 11 05:24:21.844766 systemd[1]: Started sshd@20-10.0.0.87:22-10.0.0.1:55932.service - OpenSSH per-connection server daemon (10.0.0.1:55932).
Jul 11 05:24:21.901467 sshd[4274]: Accepted publickey for core from 10.0.0.1 port 55932 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:24:21.903204 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:24:21.908286 systemd-logind[1554]: New session 21 of user core.
Jul 11 05:24:21.922967 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 11 05:24:22.035874 sshd[4277]: Connection closed by 10.0.0.1 port 55932
Jul 11 05:24:22.036262 sshd-session[4274]: pam_unix(sshd:session): session closed for user core
Jul 11 05:24:22.041477 systemd[1]: sshd@20-10.0.0.87:22-10.0.0.1:55932.service: Deactivated successfully.
Jul 11 05:24:22.043922 systemd[1]: session-21.scope: Deactivated successfully.
Jul 11 05:24:22.045631 systemd-logind[1554]: Session 21 logged out. Waiting for processes to exit.
Jul 11 05:24:22.047280 systemd-logind[1554]: Removed session 21.
Jul 11 05:24:27.049529 systemd[1]: Started sshd@21-10.0.0.87:22-10.0.0.1:55934.service - OpenSSH per-connection server daemon (10.0.0.1:55934).
Jul 11 05:24:27.104252 sshd[4293]: Accepted publickey for core from 10.0.0.1 port 55934 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:24:27.105705 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:24:27.110407 systemd-logind[1554]: New session 22 of user core.
Jul 11 05:24:27.124000 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 11 05:24:27.234926 sshd[4296]: Connection closed by 10.0.0.1 port 55934
Jul 11 05:24:27.235150 sshd-session[4293]: pam_unix(sshd:session): session closed for user core
Jul 11 05:24:27.239322 systemd[1]: sshd@21-10.0.0.87:22-10.0.0.1:55934.service: Deactivated successfully.
Jul 11 05:24:27.241604 systemd[1]: session-22.scope: Deactivated successfully.
Jul 11 05:24:27.242788 systemd-logind[1554]: Session 22 logged out. Waiting for processes to exit.
Jul 11 05:24:27.244271 systemd-logind[1554]: Removed session 22.
Jul 11 05:24:32.254572 systemd[1]: Started sshd@22-10.0.0.87:22-10.0.0.1:55610.service - OpenSSH per-connection server daemon (10.0.0.1:55610).
Jul 11 05:24:32.316517 sshd[4309]: Accepted publickey for core from 10.0.0.1 port 55610 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:24:32.318626 sshd-session[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:24:32.324184 systemd-logind[1554]: New session 23 of user core.
Jul 11 05:24:32.336113 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 11 05:24:32.454573 sshd[4312]: Connection closed by 10.0.0.1 port 55610
Jul 11 05:24:32.454955 sshd-session[4309]: pam_unix(sshd:session): session closed for user core
Jul 11 05:24:32.464594 systemd[1]: sshd@22-10.0.0.87:22-10.0.0.1:55610.service: Deactivated successfully.
Jul 11 05:24:32.467155 systemd[1]: session-23.scope: Deactivated successfully.
Jul 11 05:24:32.468073 systemd-logind[1554]: Session 23 logged out. Waiting for processes to exit.
Jul 11 05:24:32.472296 systemd[1]: Started sshd@23-10.0.0.87:22-10.0.0.1:55620.service - OpenSSH per-connection server daemon (10.0.0.1:55620).
Jul 11 05:24:32.473536 systemd-logind[1554]: Removed session 23.
Jul 11 05:24:32.528341 sshd[4326]: Accepted publickey for core from 10.0.0.1 port 55620 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:24:32.530161 sshd-session[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:24:32.535162 systemd-logind[1554]: New session 24 of user core.
Jul 11 05:24:32.546956 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 11 05:24:33.900205 kubelet[2701]: I0711 05:24:33.900127 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dlns5" podStartSLOduration=69.900107122 podStartE2EDuration="1m9.900107122s" podCreationTimestamp="2025-07-11 05:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:23:50.247076265 +0000 UTC m=+32.463013215" watchObservedRunningTime="2025-07-11 05:24:33.900107122 +0000 UTC m=+76.116044042"
Jul 11 05:24:33.907487 containerd[1574]: time="2025-07-11T05:24:33.907428091Z" level=info msg="StopContainer for \"f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e\" with timeout 30 (s)"
Jul 11 05:24:33.917035 containerd[1574]: time="2025-07-11T05:24:33.916965105Z" level=info msg="Stop container \"f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e\" with signal terminated"
Jul 11 05:24:33.931838 systemd[1]: cri-containerd-f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e.scope: Deactivated successfully.
Jul 11 05:24:33.934102 containerd[1574]: time="2025-07-11T05:24:33.934032350Z" level=info msg="received exit event container_id:\"f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e\" id:\"f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e\" pid:3271 exited_at:{seconds:1752211473 nanos:933472805}"
Jul 11 05:24:33.934664 containerd[1574]: time="2025-07-11T05:24:33.934619206Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e\" id:\"f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e\" pid:3271 exited_at:{seconds:1752211473 nanos:933472805}"
Jul 11 05:24:33.945521 containerd[1574]: time="2025-07-11T05:24:33.945453321Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 11 05:24:33.946544 containerd[1574]: time="2025-07-11T05:24:33.946397886Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c04ac98e41436ca2d1bf66e9af8652a36e518ef432a273fd46ef0f9047b61ac6\" id:\"dd77f6de8a14fec081cd468829f48e5d6d8db55d4f71cf05e2cc6c8a49d84895\" pid:4350 exited_at:{seconds:1752211473 nanos:945979833}"
Jul 11 05:24:33.949101 containerd[1574]: time="2025-07-11T05:24:33.949071018Z" level=info msg="StopContainer for \"c04ac98e41436ca2d1bf66e9af8652a36e518ef432a273fd46ef0f9047b61ac6\" with timeout 2 (s)"
Jul 11 05:24:33.949450 containerd[1574]: time="2025-07-11T05:24:33.949402595Z" level=info msg="Stop container \"c04ac98e41436ca2d1bf66e9af8652a36e518ef432a273fd46ef0f9047b61ac6\" with signal terminated"
Jul 11 05:24:33.958698 systemd-networkd[1491]: lxc_health: Link DOWN
Jul 11 05:24:33.959073 systemd-networkd[1491]: lxc_health: Lost carrier
Jul 11 05:24:33.965173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e-rootfs.mount: Deactivated successfully.
Jul 11 05:24:33.979045 systemd[1]: cri-containerd-c04ac98e41436ca2d1bf66e9af8652a36e518ef432a273fd46ef0f9047b61ac6.scope: Deactivated successfully.
Jul 11 05:24:33.979455 systemd[1]: cri-containerd-c04ac98e41436ca2d1bf66e9af8652a36e518ef432a273fd46ef0f9047b61ac6.scope: Consumed 6.862s CPU time, 125.5M memory peak, 156K read from disk, 13.3M written to disk.
Jul 11 05:24:33.980262 containerd[1574]: time="2025-07-11T05:24:33.980080528Z" level=info msg="received exit event container_id:\"c04ac98e41436ca2d1bf66e9af8652a36e518ef432a273fd46ef0f9047b61ac6\" id:\"c04ac98e41436ca2d1bf66e9af8652a36e518ef432a273fd46ef0f9047b61ac6\" pid:3365 exited_at:{seconds:1752211473 nanos:979758328}"
Jul 11 05:24:33.980262 containerd[1574]: time="2025-07-11T05:24:33.980089695Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c04ac98e41436ca2d1bf66e9af8652a36e518ef432a273fd46ef0f9047b61ac6\" id:\"c04ac98e41436ca2d1bf66e9af8652a36e518ef432a273fd46ef0f9047b61ac6\" pid:3365 exited_at:{seconds:1752211473 nanos:979758328}"
Jul 11 05:24:33.995054 containerd[1574]: time="2025-07-11T05:24:33.994859869Z" level=info msg="StopContainer for \"f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e\" returns successfully"
Jul 11 05:24:33.997577 containerd[1574]: time="2025-07-11T05:24:33.997524756Z" level=info msg="StopPodSandbox for \"9aa86efec40b4632238cfb9df6c83530974cc6f7b7f8c3ab9c526dc540ea0eb5\""
Jul 11 05:24:33.999502 containerd[1574]: time="2025-07-11T05:24:33.999453310Z" level=info msg="Container to stop \"f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 05:24:34.008516 systemd[1]: cri-containerd-9aa86efec40b4632238cfb9df6c83530974cc6f7b7f8c3ab9c526dc540ea0eb5.scope: Deactivated successfully.
Jul 11 05:24:34.011097 containerd[1574]: time="2025-07-11T05:24:34.010894719Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9aa86efec40b4632238cfb9df6c83530974cc6f7b7f8c3ab9c526dc540ea0eb5\" id:\"9aa86efec40b4632238cfb9df6c83530974cc6f7b7f8c3ab9c526dc540ea0eb5\" pid:2979 exit_status:137 exited_at:{seconds:1752211474 nanos:10250553}"
Jul 11 05:24:34.013176 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c04ac98e41436ca2d1bf66e9af8652a36e518ef432a273fd46ef0f9047b61ac6-rootfs.mount: Deactivated successfully.
Jul 11 05:24:34.041839 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9aa86efec40b4632238cfb9df6c83530974cc6f7b7f8c3ab9c526dc540ea0eb5-rootfs.mount: Deactivated successfully.
Jul 11 05:24:34.074452 containerd[1574]: time="2025-07-11T05:24:34.074390285Z" level=info msg="shim disconnected" id=9aa86efec40b4632238cfb9df6c83530974cc6f7b7f8c3ab9c526dc540ea0eb5 namespace=k8s.io
Jul 11 05:24:34.074452 containerd[1574]: time="2025-07-11T05:24:34.074431804Z" level=warning msg="cleaning up after shim disconnected" id=9aa86efec40b4632238cfb9df6c83530974cc6f7b7f8c3ab9c526dc540ea0eb5 namespace=k8s.io
Jul 11 05:24:34.081199 kubelet[2701]: E0711 05:24:34.081167 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:34.095953 containerd[1574]: time="2025-07-11T05:24:34.074442165Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 05:24:34.124254 containerd[1574]: time="2025-07-11T05:24:34.124193915Z" level=info msg="received exit event sandbox_id:\"9aa86efec40b4632238cfb9df6c83530974cc6f7b7f8c3ab9c526dc540ea0eb5\" exit_status:137 exited_at:{seconds:1752211474 nanos:10250553}"
Jul 11 05:24:34.126704 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9aa86efec40b4632238cfb9df6c83530974cc6f7b7f8c3ab9c526dc540ea0eb5-shm.mount: Deactivated successfully.
Jul 11 05:24:34.128542 containerd[1574]: time="2025-07-11T05:24:34.128496491Z" level=info msg="TearDown network for sandbox \"9aa86efec40b4632238cfb9df6c83530974cc6f7b7f8c3ab9c526dc540ea0eb5\" successfully"
Jul 11 05:24:34.128542 containerd[1574]: time="2025-07-11T05:24:34.128526438Z" level=info msg="StopPodSandbox for \"9aa86efec40b4632238cfb9df6c83530974cc6f7b7f8c3ab9c526dc540ea0eb5\" returns successfully"
Jul 11 05:24:34.151958 containerd[1574]: time="2025-07-11T05:24:34.151839380Z" level=info msg="StopContainer for \"c04ac98e41436ca2d1bf66e9af8652a36e518ef432a273fd46ef0f9047b61ac6\" returns successfully"
Jul 11 05:24:34.153234 containerd[1574]: time="2025-07-11T05:24:34.153183920Z" level=info msg="StopPodSandbox for \"64020c2f02f2d297818af114b14c9ec4f7ae741cd65655c3616f5273f0f3f72f\""
Jul 11 05:24:34.153382 containerd[1574]: time="2025-07-11T05:24:34.153357883Z" level=info msg="Container to stop \"ba7d8750740a60871e24554c53b9615bb82da73b246ae9554af975f4d049bb03\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 05:24:34.153446 containerd[1574]: time="2025-07-11T05:24:34.153395356Z" level=info msg="Container to stop \"2742cdc2c0cabc1341fed07e3c1e4e50a8945c14d11479e8d9feebf6246fa5e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 05:24:34.153446 containerd[1574]: time="2025-07-11T05:24:34.153406967Z" level=info msg="Container to stop \"c04ac98e41436ca2d1bf66e9af8652a36e518ef432a273fd46ef0f9047b61ac6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 05:24:34.153446 containerd[1574]: time="2025-07-11T05:24:34.153418189Z" level=info msg="Container to stop \"909d1e653c69ba0494bc46216e856e96793c023c4b7c91c51661ac2e14c2e791\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 05:24:34.153446 containerd[1574]: time="2025-07-11T05:24:34.153428469Z" level=info msg="Container to stop \"de9195c07f44b5661aa26e5327c971e6dc17b910535dac51c73133ad57e3f336\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 05:24:34.160610 systemd[1]: cri-containerd-64020c2f02f2d297818af114b14c9ec4f7ae741cd65655c3616f5273f0f3f72f.scope: Deactivated successfully.
Jul 11 05:24:34.162579 containerd[1574]: time="2025-07-11T05:24:34.162540296Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64020c2f02f2d297818af114b14c9ec4f7ae741cd65655c3616f5273f0f3f72f\" id:\"64020c2f02f2d297818af114b14c9ec4f7ae741cd65655c3616f5273f0f3f72f\" pid:2845 exit_status:137 exited_at:{seconds:1752211474 nanos:162242945}"
Jul 11 05:24:34.188446 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64020c2f02f2d297818af114b14c9ec4f7ae741cd65655c3616f5273f0f3f72f-rootfs.mount: Deactivated successfully.
Jul 11 05:24:34.253709 kubelet[2701]: I0711 05:24:34.253656 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4zv9\" (UniqueName: \"kubernetes.io/projected/8e46fd3c-980e-4011-8b17-19a738d40c89-kube-api-access-g4zv9\") pod \"8e46fd3c-980e-4011-8b17-19a738d40c89\" (UID: \"8e46fd3c-980e-4011-8b17-19a738d40c89\") "
Jul 11 05:24:34.253709 kubelet[2701]: I0711 05:24:34.253718 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e46fd3c-980e-4011-8b17-19a738d40c89-cilium-config-path\") pod \"8e46fd3c-980e-4011-8b17-19a738d40c89\" (UID: \"8e46fd3c-980e-4011-8b17-19a738d40c89\") "
Jul 11 05:24:34.258875 kubelet[2701]: I0711 05:24:34.258829 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e46fd3c-980e-4011-8b17-19a738d40c89-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8e46fd3c-980e-4011-8b17-19a738d40c89" (UID: "8e46fd3c-980e-4011-8b17-19a738d40c89"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 11 05:24:34.259188 kubelet[2701]: I0711 05:24:34.259147 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e46fd3c-980e-4011-8b17-19a738d40c89-kube-api-access-g4zv9" (OuterVolumeSpecName: "kube-api-access-g4zv9") pod "8e46fd3c-980e-4011-8b17-19a738d40c89" (UID: "8e46fd3c-980e-4011-8b17-19a738d40c89"). InnerVolumeSpecName "kube-api-access-g4zv9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 11 05:24:34.299007 kubelet[2701]: I0711 05:24:34.298968 2701 scope.go:117] "RemoveContainer" containerID="f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e"
Jul 11 05:24:34.300995 containerd[1574]: time="2025-07-11T05:24:34.300541667Z" level=info msg="RemoveContainer for \"f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e\""
Jul 11 05:24:34.352410 systemd[1]: Removed slice kubepods-besteffort-pod8e46fd3c_980e_4011_8b17_19a738d40c89.slice - libcontainer container kubepods-besteffort-pod8e46fd3c_980e_4011_8b17_19a738d40c89.slice.
Jul 11 05:24:34.354679 kubelet[2701]: I0711 05:24:34.354641 2701 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g4zv9\" (UniqueName: \"kubernetes.io/projected/8e46fd3c-980e-4011-8b17-19a738d40c89-kube-api-access-g4zv9\") on node \"localhost\" DevicePath \"\""
Jul 11 05:24:34.354679 kubelet[2701]: I0711 05:24:34.354671 2701 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e46fd3c-980e-4011-8b17-19a738d40c89-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 11 05:24:34.377559 containerd[1574]: time="2025-07-11T05:24:34.377408405Z" level=info msg="shim disconnected" id=64020c2f02f2d297818af114b14c9ec4f7ae741cd65655c3616f5273f0f3f72f namespace=k8s.io
Jul 11 05:24:34.377559 containerd[1574]: time="2025-07-11T05:24:34.377539907Z" level=warning msg="cleaning up after shim disconnected" id=64020c2f02f2d297818af114b14c9ec4f7ae741cd65655c3616f5273f0f3f72f namespace=k8s.io
Jul 11 05:24:34.377728 containerd[1574]: time="2025-07-11T05:24:34.377551940Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 05:24:34.391253 containerd[1574]: time="2025-07-11T05:24:34.391186516Z" level=info msg="received exit event sandbox_id:\"64020c2f02f2d297818af114b14c9ec4f7ae741cd65655c3616f5273f0f3f72f\" exit_status:137 exited_at:{seconds:1752211474 nanos:162242945}"
Jul 11 05:24:34.391407 containerd[1574]: time="2025-07-11T05:24:34.391341754Z" level=info msg="TearDown network for sandbox \"64020c2f02f2d297818af114b14c9ec4f7ae741cd65655c3616f5273f0f3f72f\" successfully"
Jul 11 05:24:34.391407 containerd[1574]: time="2025-07-11T05:24:34.391365430Z" level=info msg="StopPodSandbox for \"64020c2f02f2d297818af114b14c9ec4f7ae741cd65655c3616f5273f0f3f72f\" returns successfully"
Jul 11 05:24:34.422378 containerd[1574]: time="2025-07-11T05:24:34.422239992Z" level=info msg="RemoveContainer for \"f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e\" returns successfully"
Jul 11 05:24:34.432811 kubelet[2701]: I0711 05:24:34.432763 2701 scope.go:117] "RemoveContainer" containerID="f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e"
Jul 11 05:24:34.433206 containerd[1574]: time="2025-07-11T05:24:34.433135090Z" level=error msg="ContainerStatus for \"f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e\": not found"
Jul 11 05:24:34.434744 kubelet[2701]: E0711 05:24:34.434655 2701 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e\": not found" containerID="f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e"
Jul 11 05:24:34.434841 kubelet[2701]: I0711 05:24:34.434747 2701 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e"} err="failed to get container status \"f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e\": rpc error: code = NotFound desc = an error occurred when try to find container \"f39d407a9d6b85bffd984b4ddf06351357682f9907d46c8b2e36b1f5e8f8b28e\": not found"
Jul 11 05:24:34.454838 kubelet[2701]: I0711 05:24:34.454803 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-clustermesh-secrets\") pod \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") "
Jul 11 05:24:34.454838 kubelet[2701]: I0711 05:24:34.454831 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-etc-cni-netd\") pod \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") "
Jul 11 05:24:34.454928 kubelet[2701]: I0711 05:24:34.454844 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-bpf-maps\") pod \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") "
Jul 11 05:24:34.454928 kubelet[2701]: I0711 05:24:34.454859 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-xtables-lock\") pod \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") "
Jul 11 05:24:34.454928 kubelet[2701]: I0711 05:24:34.454872 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-cilium-cgroup\") pod \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") "
Jul 11 05:24:34.454928 kubelet[2701]: I0711 05:24:34.454887 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdc9b\" (UniqueName: \"kubernetes.io/projected/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-kube-api-access-fdc9b\") pod \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") "
Jul 11 05:24:34.454928 kubelet[2701]: I0711 05:24:34.454900 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-cni-path\") pod \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") "
Jul 11 05:24:34.454928 kubelet[2701]: I0711 05:24:34.454916 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-cilium-config-path\") pod \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") "
Jul 11 05:24:34.455122 kubelet[2701]: I0711 05:24:34.454930 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-cilium-run\") pod \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") "
Jul 11 05:24:34.455122 kubelet[2701]: I0711 05:24:34.454929 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a" (UID: "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 11 05:24:34.455122 kubelet[2701]: I0711 05:24:34.454942 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-hostproc\") pod \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") "
Jul 11 05:24:34.455122 kubelet[2701]: I0711 05:24:34.454956 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-hubble-tls\") pod \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") "
Jul 11 05:24:34.455122 kubelet[2701]: I0711 05:24:34.454978 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-lib-modules\") pod \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") "
Jul 11 05:24:34.455122 kubelet[2701]: I0711 05:24:34.454999 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-host-proc-sys-net\") pod \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") "
Jul 11 05:24:34.455312 kubelet[2701]: I0711 05:24:34.455012 2701 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-host-proc-sys-kernel\") pod \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\" (UID: \"18f616e0-e8f7-4e47-b3fd-f2fd14382f5a\") "
Jul 11 05:24:34.455312 kubelet[2701]: I0711 05:24:34.455052 2701 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 11 05:24:34.455312 kubelet[2701]: I0711 05:24:34.455075 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a" (UID: "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 11 05:24:34.455312 kubelet[2701]: I0711 05:24:34.455095 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-cni-path" (OuterVolumeSpecName: "cni-path") pod "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a" (UID: "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 11 05:24:34.458418 kubelet[2701]: I0711 05:24:34.458248 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a" (UID: "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 11 05:24:34.458418 kubelet[2701]: I0711 05:24:34.458283 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a" (UID: "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 11 05:24:34.458418 kubelet[2701]: I0711 05:24:34.458298 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a" (UID: "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 11 05:24:34.458418 kubelet[2701]: I0711 05:24:34.458313 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a" (UID: "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 11 05:24:34.458418 kubelet[2701]: I0711 05:24:34.458325 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-hostproc" (OuterVolumeSpecName: "hostproc") pod "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a" (UID: "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 11 05:24:34.458582 kubelet[2701]: I0711 05:24:34.458338 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a" (UID: "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 11 05:24:34.458582 kubelet[2701]: I0711 05:24:34.458350 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a" (UID: "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 11 05:24:34.458582 kubelet[2701]: I0711 05:24:34.458362 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a" (UID: "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 11 05:24:34.458582 kubelet[2701]: I0711 05:24:34.458465 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-kube-api-access-fdc9b" (OuterVolumeSpecName: "kube-api-access-fdc9b") pod "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a" (UID: "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a"). InnerVolumeSpecName "kube-api-access-fdc9b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 11 05:24:34.458676 kubelet[2701]: I0711 05:24:34.458586 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a" (UID: "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 11 05:24:34.458676 kubelet[2701]: I0711 05:24:34.458589 2701 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a" (UID: "18f616e0-e8f7-4e47-b3fd-f2fd14382f5a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 11 05:24:34.556145 kubelet[2701]: I0711 05:24:34.556096 2701 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 11 05:24:34.556145 kubelet[2701]: I0711 05:24:34.556136 2701 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fdc9b\" (UniqueName: \"kubernetes.io/projected/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-kube-api-access-fdc9b\") on node \"localhost\" DevicePath \"\""
Jul 11 05:24:34.556145 kubelet[2701]: I0711 05:24:34.556152 2701 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 11 05:24:34.556369 kubelet[2701]: I0711 05:24:34.556163 2701 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 11 05:24:34.556369 kubelet[2701]: I0711 05:24:34.556175 2701 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 11 05:24:34.556369 kubelet[2701]: I0711 05:24:34.556185 2701 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName:
\"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 11 05:24:34.556369 kubelet[2701]: I0711 05:24:34.556195 2701 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 11 05:24:34.556369 kubelet[2701]: I0711 05:24:34.556206 2701 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 11 05:24:34.556369 kubelet[2701]: I0711 05:24:34.556216 2701 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 11 05:24:34.556369 kubelet[2701]: I0711 05:24:34.556226 2701 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 11 05:24:34.556369 kubelet[2701]: I0711 05:24:34.556237 2701 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 11 05:24:34.556568 kubelet[2701]: I0711 05:24:34.556246 2701 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 11 05:24:34.556568 kubelet[2701]: I0711 05:24:34.556256 2701 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a-bpf-maps\") on node \"localhost\" DevicePath 
\"\"" Jul 11 05:24:34.964585 systemd[1]: var-lib-kubelet-pods-8e46fd3c\x2d980e\x2d4011\x2d8b17\x2d19a738d40c89-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg4zv9.mount: Deactivated successfully. Jul 11 05:24:34.964752 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-64020c2f02f2d297818af114b14c9ec4f7ae741cd65655c3616f5273f0f3f72f-shm.mount: Deactivated successfully. Jul 11 05:24:34.964841 systemd[1]: var-lib-kubelet-pods-18f616e0\x2de8f7\x2d4e47\x2db3fd\x2df2fd14382f5a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfdc9b.mount: Deactivated successfully. Jul 11 05:24:34.964924 systemd[1]: var-lib-kubelet-pods-18f616e0\x2de8f7\x2d4e47\x2db3fd\x2df2fd14382f5a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 11 05:24:34.965021 systemd[1]: var-lib-kubelet-pods-18f616e0\x2de8f7\x2d4e47\x2db3fd\x2df2fd14382f5a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 11 05:24:35.081072 kubelet[2701]: E0711 05:24:35.080998 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:35.310207 kubelet[2701]: I0711 05:24:35.309941 2701 scope.go:117] "RemoveContainer" containerID="c04ac98e41436ca2d1bf66e9af8652a36e518ef432a273fd46ef0f9047b61ac6" Jul 11 05:24:35.312221 containerd[1574]: time="2025-07-11T05:24:35.312182611Z" level=info msg="RemoveContainer for \"c04ac98e41436ca2d1bf66e9af8652a36e518ef432a273fd46ef0f9047b61ac6\"" Jul 11 05:24:35.316182 systemd[1]: Removed slice kubepods-burstable-pod18f616e0_e8f7_4e47_b3fd_f2fd14382f5a.slice - libcontainer container kubepods-burstable-pod18f616e0_e8f7_4e47_b3fd_f2fd14382f5a.slice. Jul 11 05:24:35.316297 systemd[1]: kubepods-burstable-pod18f616e0_e8f7_4e47_b3fd_f2fd14382f5a.slice: Consumed 7.059s CPU time, 125.8M memory peak, 160K read from disk, 13.3M written to disk. 
Jul 11 05:24:35.479384 containerd[1574]: time="2025-07-11T05:24:35.479322035Z" level=info msg="RemoveContainer for \"c04ac98e41436ca2d1bf66e9af8652a36e518ef432a273fd46ef0f9047b61ac6\" returns successfully" Jul 11 05:24:35.479764 kubelet[2701]: I0711 05:24:35.479599 2701 scope.go:117] "RemoveContainer" containerID="2742cdc2c0cabc1341fed07e3c1e4e50a8945c14d11479e8d9feebf6246fa5e3" Jul 11 05:24:35.481342 containerd[1574]: time="2025-07-11T05:24:35.481308405Z" level=info msg="RemoveContainer for \"2742cdc2c0cabc1341fed07e3c1e4e50a8945c14d11479e8d9feebf6246fa5e3\"" Jul 11 05:24:35.567370 containerd[1574]: time="2025-07-11T05:24:35.567195754Z" level=info msg="RemoveContainer for \"2742cdc2c0cabc1341fed07e3c1e4e50a8945c14d11479e8d9feebf6246fa5e3\" returns successfully" Jul 11 05:24:35.567522 kubelet[2701]: I0711 05:24:35.567466 2701 scope.go:117] "RemoveContainer" containerID="ba7d8750740a60871e24554c53b9615bb82da73b246ae9554af975f4d049bb03" Jul 11 05:24:35.570248 containerd[1574]: time="2025-07-11T05:24:35.570211498Z" level=info msg="RemoveContainer for \"ba7d8750740a60871e24554c53b9615bb82da73b246ae9554af975f4d049bb03\"" Jul 11 05:24:35.632088 containerd[1574]: time="2025-07-11T05:24:35.631993444Z" level=info msg="RemoveContainer for \"ba7d8750740a60871e24554c53b9615bb82da73b246ae9554af975f4d049bb03\" returns successfully" Jul 11 05:24:35.632384 kubelet[2701]: I0711 05:24:35.632343 2701 scope.go:117] "RemoveContainer" containerID="de9195c07f44b5661aa26e5327c971e6dc17b910535dac51c73133ad57e3f336" Jul 11 05:24:35.634139 containerd[1574]: time="2025-07-11T05:24:35.634107548Z" level=info msg="RemoveContainer for \"de9195c07f44b5661aa26e5327c971e6dc17b910535dac51c73133ad57e3f336\"" Jul 11 05:24:35.639021 containerd[1574]: time="2025-07-11T05:24:35.638960916Z" level=info msg="RemoveContainer for \"de9195c07f44b5661aa26e5327c971e6dc17b910535dac51c73133ad57e3f336\" returns successfully" Jul 11 05:24:35.639312 kubelet[2701]: I0711 05:24:35.639262 2701 scope.go:117] 
"RemoveContainer" containerID="909d1e653c69ba0494bc46216e856e96793c023c4b7c91c51661ac2e14c2e791" Jul 11 05:24:35.641803 containerd[1574]: time="2025-07-11T05:24:35.641261048Z" level=info msg="RemoveContainer for \"909d1e653c69ba0494bc46216e856e96793c023c4b7c91c51661ac2e14c2e791\"" Jul 11 05:24:35.647043 containerd[1574]: time="2025-07-11T05:24:35.646982883Z" level=info msg="RemoveContainer for \"909d1e653c69ba0494bc46216e856e96793c023c4b7c91c51661ac2e14c2e791\" returns successfully" Jul 11 05:24:35.868365 sshd[4329]: Connection closed by 10.0.0.1 port 55620 Jul 11 05:24:35.868902 sshd-session[4326]: pam_unix(sshd:session): session closed for user core Jul 11 05:24:35.883079 systemd[1]: sshd@23-10.0.0.87:22-10.0.0.1:55620.service: Deactivated successfully. Jul 11 05:24:35.885454 systemd[1]: session-24.scope: Deactivated successfully. Jul 11 05:24:35.886320 systemd-logind[1554]: Session 24 logged out. Waiting for processes to exit. Jul 11 05:24:35.889539 systemd[1]: Started sshd@24-10.0.0.87:22-10.0.0.1:55628.service - OpenSSH per-connection server daemon (10.0.0.1:55628). Jul 11 05:24:35.890824 systemd-logind[1554]: Removed session 24. Jul 11 05:24:35.970967 sshd[4484]: Accepted publickey for core from 10.0.0.1 port 55628 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk Jul 11 05:24:35.972852 sshd-session[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:24:35.977671 systemd-logind[1554]: New session 25 of user core. Jul 11 05:24:35.983013 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jul 11 05:24:36.083374 kubelet[2701]: I0711 05:24:36.083306 2701 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f616e0-e8f7-4e47-b3fd-f2fd14382f5a" path="/var/lib/kubelet/pods/18f616e0-e8f7-4e47-b3fd-f2fd14382f5a/volumes" Jul 11 05:24:36.084411 kubelet[2701]: I0711 05:24:36.084380 2701 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e46fd3c-980e-4011-8b17-19a738d40c89" path="/var/lib/kubelet/pods/8e46fd3c-980e-4011-8b17-19a738d40c89/volumes" Jul 11 05:24:36.832299 sshd[4488]: Connection closed by 10.0.0.1 port 55628 Jul 11 05:24:36.832995 sshd-session[4484]: pam_unix(sshd:session): session closed for user core Jul 11 05:24:36.844182 systemd[1]: sshd@24-10.0.0.87:22-10.0.0.1:55628.service: Deactivated successfully. Jul 11 05:24:36.848987 systemd[1]: session-25.scope: Deactivated successfully. Jul 11 05:24:36.851406 systemd-logind[1554]: Session 25 logged out. Waiting for processes to exit. Jul 11 05:24:36.858009 systemd[1]: Started sshd@25-10.0.0.87:22-10.0.0.1:55640.service - OpenSSH per-connection server daemon (10.0.0.1:55640). Jul 11 05:24:36.861678 systemd-logind[1554]: Removed session 25. Jul 11 05:24:36.864356 kubelet[2701]: I0711 05:24:36.864317 2701 memory_manager.go:355] "RemoveStaleState removing state" podUID="18f616e0-e8f7-4e47-b3fd-f2fd14382f5a" containerName="cilium-agent" Jul 11 05:24:36.864356 kubelet[2701]: I0711 05:24:36.864348 2701 memory_manager.go:355] "RemoveStaleState removing state" podUID="8e46fd3c-980e-4011-8b17-19a738d40c89" containerName="cilium-operator" Jul 11 05:24:36.889778 systemd[1]: Created slice kubepods-burstable-podc1841a64_0575_4603_a733_a92b7ab26702.slice - libcontainer container kubepods-burstable-podc1841a64_0575_4603_a733_a92b7ab26702.slice. 
Jul 11 05:24:36.923175 sshd[4500]: Accepted publickey for core from 10.0.0.1 port 55640 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk Jul 11 05:24:36.927774 sshd-session[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:24:36.937065 systemd-logind[1554]: New session 26 of user core. Jul 11 05:24:36.942958 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 11 05:24:36.974602 kubelet[2701]: I0711 05:24:36.974528 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1841a64-0575-4603-a733-a92b7ab26702-cni-path\") pod \"cilium-gxjmj\" (UID: \"c1841a64-0575-4603-a733-a92b7ab26702\") " pod="kube-system/cilium-gxjmj" Jul 11 05:24:36.974602 kubelet[2701]: I0711 05:24:36.974579 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1841a64-0575-4603-a733-a92b7ab26702-xtables-lock\") pod \"cilium-gxjmj\" (UID: \"c1841a64-0575-4603-a733-a92b7ab26702\") " pod="kube-system/cilium-gxjmj" Jul 11 05:24:36.974834 kubelet[2701]: I0711 05:24:36.974643 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1841a64-0575-4603-a733-a92b7ab26702-cilium-run\") pod \"cilium-gxjmj\" (UID: \"c1841a64-0575-4603-a733-a92b7ab26702\") " pod="kube-system/cilium-gxjmj" Jul 11 05:24:36.974834 kubelet[2701]: I0711 05:24:36.974669 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c1841a64-0575-4603-a733-a92b7ab26702-cilium-ipsec-secrets\") pod \"cilium-gxjmj\" (UID: \"c1841a64-0575-4603-a733-a92b7ab26702\") " pod="kube-system/cilium-gxjmj" Jul 11 05:24:36.974834 kubelet[2701]: I0711 05:24:36.974690 2701 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1841a64-0575-4603-a733-a92b7ab26702-hubble-tls\") pod \"cilium-gxjmj\" (UID: \"c1841a64-0575-4603-a733-a92b7ab26702\") " pod="kube-system/cilium-gxjmj" Jul 11 05:24:36.974834 kubelet[2701]: I0711 05:24:36.974706 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1841a64-0575-4603-a733-a92b7ab26702-cilium-cgroup\") pod \"cilium-gxjmj\" (UID: \"c1841a64-0575-4603-a733-a92b7ab26702\") " pod="kube-system/cilium-gxjmj" Jul 11 05:24:36.974834 kubelet[2701]: I0711 05:24:36.974721 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1841a64-0575-4603-a733-a92b7ab26702-lib-modules\") pod \"cilium-gxjmj\" (UID: \"c1841a64-0575-4603-a733-a92b7ab26702\") " pod="kube-system/cilium-gxjmj" Jul 11 05:24:36.974834 kubelet[2701]: I0711 05:24:36.974759 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1841a64-0575-4603-a733-a92b7ab26702-host-proc-sys-net\") pod \"cilium-gxjmj\" (UID: \"c1841a64-0575-4603-a733-a92b7ab26702\") " pod="kube-system/cilium-gxjmj" Jul 11 05:24:36.975062 kubelet[2701]: I0711 05:24:36.974775 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1841a64-0575-4603-a733-a92b7ab26702-cilium-config-path\") pod \"cilium-gxjmj\" (UID: \"c1841a64-0575-4603-a733-a92b7ab26702\") " pod="kube-system/cilium-gxjmj" Jul 11 05:24:36.975062 kubelet[2701]: I0711 05:24:36.974789 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x9jm\" 
(UniqueName: \"kubernetes.io/projected/c1841a64-0575-4603-a733-a92b7ab26702-kube-api-access-4x9jm\") pod \"cilium-gxjmj\" (UID: \"c1841a64-0575-4603-a733-a92b7ab26702\") " pod="kube-system/cilium-gxjmj" Jul 11 05:24:36.975062 kubelet[2701]: I0711 05:24:36.974805 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c1841a64-0575-4603-a733-a92b7ab26702-bpf-maps\") pod \"cilium-gxjmj\" (UID: \"c1841a64-0575-4603-a733-a92b7ab26702\") " pod="kube-system/cilium-gxjmj" Jul 11 05:24:36.975062 kubelet[2701]: I0711 05:24:36.974818 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1841a64-0575-4603-a733-a92b7ab26702-hostproc\") pod \"cilium-gxjmj\" (UID: \"c1841a64-0575-4603-a733-a92b7ab26702\") " pod="kube-system/cilium-gxjmj" Jul 11 05:24:36.975062 kubelet[2701]: I0711 05:24:36.974832 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1841a64-0575-4603-a733-a92b7ab26702-clustermesh-secrets\") pod \"cilium-gxjmj\" (UID: \"c1841a64-0575-4603-a733-a92b7ab26702\") " pod="kube-system/cilium-gxjmj" Jul 11 05:24:36.975062 kubelet[2701]: I0711 05:24:36.974857 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1841a64-0575-4603-a733-a92b7ab26702-etc-cni-netd\") pod \"cilium-gxjmj\" (UID: \"c1841a64-0575-4603-a733-a92b7ab26702\") " pod="kube-system/cilium-gxjmj" Jul 11 05:24:36.975213 kubelet[2701]: I0711 05:24:36.974887 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1841a64-0575-4603-a733-a92b7ab26702-host-proc-sys-kernel\") pod \"cilium-gxjmj\" (UID: 
\"c1841a64-0575-4603-a733-a92b7ab26702\") " pod="kube-system/cilium-gxjmj" Jul 11 05:24:37.000647 sshd[4503]: Connection closed by 10.0.0.1 port 55640 Jul 11 05:24:37.001221 sshd-session[4500]: pam_unix(sshd:session): session closed for user core Jul 11 05:24:37.017633 systemd[1]: sshd@25-10.0.0.87:22-10.0.0.1:55640.service: Deactivated successfully. Jul 11 05:24:37.019979 systemd[1]: session-26.scope: Deactivated successfully. Jul 11 05:24:37.020985 systemd-logind[1554]: Session 26 logged out. Waiting for processes to exit. Jul 11 05:24:37.024299 systemd[1]: Started sshd@26-10.0.0.87:22-10.0.0.1:55652.service - OpenSSH per-connection server daemon (10.0.0.1:55652). Jul 11 05:24:37.025162 systemd-logind[1554]: Removed session 26. Jul 11 05:24:37.118073 sshd[4510]: Accepted publickey for core from 10.0.0.1 port 55652 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk Jul 11 05:24:37.120277 sshd-session[4510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:24:37.125367 systemd-logind[1554]: New session 27 of user core. Jul 11 05:24:37.135048 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jul 11 05:24:37.204976 kubelet[2701]: E0711 05:24:37.204922 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:37.207353 containerd[1574]: time="2025-07-11T05:24:37.206956058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gxjmj,Uid:c1841a64-0575-4603-a733-a92b7ab26702,Namespace:kube-system,Attempt:0,}" Jul 11 05:24:37.228472 containerd[1574]: time="2025-07-11T05:24:37.228395389Z" level=info msg="connecting to shim 2c63af745f8fe2d10e06989619a24c7879902b8abf1ba21ab793848bd1901fd3" address="unix:///run/containerd/s/38ac34eabd5397d7834b12de6473623c4bcc72ac1b7c467a15bed32799c02c22" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:24:37.260067 systemd[1]: Started cri-containerd-2c63af745f8fe2d10e06989619a24c7879902b8abf1ba21ab793848bd1901fd3.scope - libcontainer container 2c63af745f8fe2d10e06989619a24c7879902b8abf1ba21ab793848bd1901fd3. 
Jul 11 05:24:37.289437 containerd[1574]: time="2025-07-11T05:24:37.289390349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gxjmj,Uid:c1841a64-0575-4603-a733-a92b7ab26702,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c63af745f8fe2d10e06989619a24c7879902b8abf1ba21ab793848bd1901fd3\"" Jul 11 05:24:37.290207 kubelet[2701]: E0711 05:24:37.290176 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:37.292422 containerd[1574]: time="2025-07-11T05:24:37.292364546Z" level=info msg="CreateContainer within sandbox \"2c63af745f8fe2d10e06989619a24c7879902b8abf1ba21ab793848bd1901fd3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 11 05:24:37.300308 containerd[1574]: time="2025-07-11T05:24:37.300263008Z" level=info msg="Container c5699904734bdaca412b6c8c8dc84d290b059407106e5205682b679d70127641: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:24:37.308141 containerd[1574]: time="2025-07-11T05:24:37.308101835Z" level=info msg="CreateContainer within sandbox \"2c63af745f8fe2d10e06989619a24c7879902b8abf1ba21ab793848bd1901fd3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c5699904734bdaca412b6c8c8dc84d290b059407106e5205682b679d70127641\"" Jul 11 05:24:37.308618 containerd[1574]: time="2025-07-11T05:24:37.308595591Z" level=info msg="StartContainer for \"c5699904734bdaca412b6c8c8dc84d290b059407106e5205682b679d70127641\"" Jul 11 05:24:37.309709 containerd[1574]: time="2025-07-11T05:24:37.309677875Z" level=info msg="connecting to shim c5699904734bdaca412b6c8c8dc84d290b059407106e5205682b679d70127641" address="unix:///run/containerd/s/38ac34eabd5397d7834b12de6473623c4bcc72ac1b7c467a15bed32799c02c22" protocol=ttrpc version=3 Jul 11 05:24:37.336905 systemd[1]: Started cri-containerd-c5699904734bdaca412b6c8c8dc84d290b059407106e5205682b679d70127641.scope - libcontainer 
container c5699904734bdaca412b6c8c8dc84d290b059407106e5205682b679d70127641. Jul 11 05:24:37.400406 systemd[1]: cri-containerd-c5699904734bdaca412b6c8c8dc84d290b059407106e5205682b679d70127641.scope: Deactivated successfully. Jul 11 05:24:37.402650 containerd[1574]: time="2025-07-11T05:24:37.402604857Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c5699904734bdaca412b6c8c8dc84d290b059407106e5205682b679d70127641\" id:\"c5699904734bdaca412b6c8c8dc84d290b059407106e5205682b679d70127641\" pid:4583 exited_at:{seconds:1752211477 nanos:402225390}" Jul 11 05:24:37.496167 containerd[1574]: time="2025-07-11T05:24:37.496111450Z" level=info msg="received exit event container_id:\"c5699904734bdaca412b6c8c8dc84d290b059407106e5205682b679d70127641\" id:\"c5699904734bdaca412b6c8c8dc84d290b059407106e5205682b679d70127641\" pid:4583 exited_at:{seconds:1752211477 nanos:402225390}" Jul 11 05:24:37.497519 containerd[1574]: time="2025-07-11T05:24:37.497493077Z" level=info msg="StartContainer for \"c5699904734bdaca412b6c8c8dc84d290b059407106e5205682b679d70127641\" returns successfully" Jul 11 05:24:38.139138 kubelet[2701]: E0711 05:24:38.139064 2701 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 11 05:24:38.322900 kubelet[2701]: E0711 05:24:38.322862 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:38.325914 containerd[1574]: time="2025-07-11T05:24:38.325861544Z" level=info msg="CreateContainer within sandbox \"2c63af745f8fe2d10e06989619a24c7879902b8abf1ba21ab793848bd1901fd3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 11 05:24:38.371933 containerd[1574]: time="2025-07-11T05:24:38.371878050Z" level=info msg="Container 
c3f5724f31a761aeb80fab593eb336008f6ab0f1256c7cc5717141243999a36e: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:24:38.374477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2666965876.mount: Deactivated successfully. Jul 11 05:24:38.379449 containerd[1574]: time="2025-07-11T05:24:38.379399091Z" level=info msg="CreateContainer within sandbox \"2c63af745f8fe2d10e06989619a24c7879902b8abf1ba21ab793848bd1901fd3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c3f5724f31a761aeb80fab593eb336008f6ab0f1256c7cc5717141243999a36e\"" Jul 11 05:24:38.380042 containerd[1574]: time="2025-07-11T05:24:38.380010271Z" level=info msg="StartContainer for \"c3f5724f31a761aeb80fab593eb336008f6ab0f1256c7cc5717141243999a36e\"" Jul 11 05:24:38.381396 containerd[1574]: time="2025-07-11T05:24:38.381046585Z" level=info msg="connecting to shim c3f5724f31a761aeb80fab593eb336008f6ab0f1256c7cc5717141243999a36e" address="unix:///run/containerd/s/38ac34eabd5397d7834b12de6473623c4bcc72ac1b7c467a15bed32799c02c22" protocol=ttrpc version=3 Jul 11 05:24:38.408026 systemd[1]: Started cri-containerd-c3f5724f31a761aeb80fab593eb336008f6ab0f1256c7cc5717141243999a36e.scope - libcontainer container c3f5724f31a761aeb80fab593eb336008f6ab0f1256c7cc5717141243999a36e. Jul 11 05:24:38.440662 containerd[1574]: time="2025-07-11T05:24:38.440603397Z" level=info msg="StartContainer for \"c3f5724f31a761aeb80fab593eb336008f6ab0f1256c7cc5717141243999a36e\" returns successfully" Jul 11 05:24:38.447242 systemd[1]: cri-containerd-c3f5724f31a761aeb80fab593eb336008f6ab0f1256c7cc5717141243999a36e.scope: Deactivated successfully. 
Jul 11 05:24:38.447893 containerd[1574]: time="2025-07-11T05:24:38.447848248Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3f5724f31a761aeb80fab593eb336008f6ab0f1256c7cc5717141243999a36e\" id:\"c3f5724f31a761aeb80fab593eb336008f6ab0f1256c7cc5717141243999a36e\" pid:4628 exited_at:{seconds:1752211478 nanos:447527074}" Jul 11 05:24:38.448047 containerd[1574]: time="2025-07-11T05:24:38.447946496Z" level=info msg="received exit event container_id:\"c3f5724f31a761aeb80fab593eb336008f6ab0f1256c7cc5717141243999a36e\" id:\"c3f5724f31a761aeb80fab593eb336008f6ab0f1256c7cc5717141243999a36e\" pid:4628 exited_at:{seconds:1752211478 nanos:447527074}" Jul 11 05:24:39.082632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3f5724f31a761aeb80fab593eb336008f6ab0f1256c7cc5717141243999a36e-rootfs.mount: Deactivated successfully. Jul 11 05:24:39.326791 kubelet[2701]: E0711 05:24:39.326723 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:39.328697 containerd[1574]: time="2025-07-11T05:24:39.328650731Z" level=info msg="CreateContainer within sandbox \"2c63af745f8fe2d10e06989619a24c7879902b8abf1ba21ab793848bd1901fd3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 11 05:24:39.415949 containerd[1574]: time="2025-07-11T05:24:39.415803360Z" level=info msg="Container 6321ba5cb7d8b089f870363189d9d4c843927a211619bd7bcd489023ca6d2714: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:24:39.569210 containerd[1574]: time="2025-07-11T05:24:39.569129291Z" level=info msg="CreateContainer within sandbox \"2c63af745f8fe2d10e06989619a24c7879902b8abf1ba21ab793848bd1901fd3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6321ba5cb7d8b089f870363189d9d4c843927a211619bd7bcd489023ca6d2714\"" Jul 11 05:24:39.569831 containerd[1574]: time="2025-07-11T05:24:39.569727055Z" 
level=info msg="StartContainer for \"6321ba5cb7d8b089f870363189d9d4c843927a211619bd7bcd489023ca6d2714\"" Jul 11 05:24:39.571393 containerd[1574]: time="2025-07-11T05:24:39.571360961Z" level=info msg="connecting to shim 6321ba5cb7d8b089f870363189d9d4c843927a211619bd7bcd489023ca6d2714" address="unix:///run/containerd/s/38ac34eabd5397d7834b12de6473623c4bcc72ac1b7c467a15bed32799c02c22" protocol=ttrpc version=3 Jul 11 05:24:39.596041 systemd[1]: Started cri-containerd-6321ba5cb7d8b089f870363189d9d4c843927a211619bd7bcd489023ca6d2714.scope - libcontainer container 6321ba5cb7d8b089f870363189d9d4c843927a211619bd7bcd489023ca6d2714. Jul 11 05:24:39.641646 systemd[1]: cri-containerd-6321ba5cb7d8b089f870363189d9d4c843927a211619bd7bcd489023ca6d2714.scope: Deactivated successfully. Jul 11 05:24:39.643593 containerd[1574]: time="2025-07-11T05:24:39.642583766Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6321ba5cb7d8b089f870363189d9d4c843927a211619bd7bcd489023ca6d2714\" id:\"6321ba5cb7d8b089f870363189d9d4c843927a211619bd7bcd489023ca6d2714\" pid:4674 exited_at:{seconds:1752211479 nanos:642168441}" Jul 11 05:24:39.658899 containerd[1574]: time="2025-07-11T05:24:39.658830035Z" level=info msg="received exit event container_id:\"6321ba5cb7d8b089f870363189d9d4c843927a211619bd7bcd489023ca6d2714\" id:\"6321ba5cb7d8b089f870363189d9d4c843927a211619bd7bcd489023ca6d2714\" pid:4674 exited_at:{seconds:1752211479 nanos:642168441}" Jul 11 05:24:39.668982 containerd[1574]: time="2025-07-11T05:24:39.668655287Z" level=info msg="StartContainer for \"6321ba5cb7d8b089f870363189d9d4c843927a211619bd7bcd489023ca6d2714\" returns successfully" Jul 11 05:24:39.682629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6321ba5cb7d8b089f870363189d9d4c843927a211619bd7bcd489023ca6d2714-rootfs.mount: Deactivated successfully. 
Jul 11 05:24:40.016109 kubelet[2701]: I0711 05:24:40.016044 2701 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-11T05:24:40Z","lastTransitionTime":"2025-07-11T05:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 11 05:24:40.080432 kubelet[2701]: E0711 05:24:40.080348 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:40.331839 kubelet[2701]: E0711 05:24:40.331684 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:40.333499 containerd[1574]: time="2025-07-11T05:24:40.333457268Z" level=info msg="CreateContainer within sandbox \"2c63af745f8fe2d10e06989619a24c7879902b8abf1ba21ab793848bd1901fd3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 11 05:24:40.351785 containerd[1574]: time="2025-07-11T05:24:40.351704879Z" level=info msg="Container 1ec5437957121399ae7599b7e4f48acb6b2c2fc04bf1471d2e820d09e51da06f: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:24:40.352165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2203694945.mount: Deactivated successfully. 
Jul 11 05:24:40.359937 containerd[1574]: time="2025-07-11T05:24:40.359892249Z" level=info msg="CreateContainer within sandbox \"2c63af745f8fe2d10e06989619a24c7879902b8abf1ba21ab793848bd1901fd3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1ec5437957121399ae7599b7e4f48acb6b2c2fc04bf1471d2e820d09e51da06f\""
Jul 11 05:24:40.360562 containerd[1574]: time="2025-07-11T05:24:40.360424527Z" level=info msg="StartContainer for \"1ec5437957121399ae7599b7e4f48acb6b2c2fc04bf1471d2e820d09e51da06f\""
Jul 11 05:24:40.361235 containerd[1574]: time="2025-07-11T05:24:40.361206252Z" level=info msg="connecting to shim 1ec5437957121399ae7599b7e4f48acb6b2c2fc04bf1471d2e820d09e51da06f" address="unix:///run/containerd/s/38ac34eabd5397d7834b12de6473623c4bcc72ac1b7c467a15bed32799c02c22" protocol=ttrpc version=3
Jul 11 05:24:40.385900 systemd[1]: Started cri-containerd-1ec5437957121399ae7599b7e4f48acb6b2c2fc04bf1471d2e820d09e51da06f.scope - libcontainer container 1ec5437957121399ae7599b7e4f48acb6b2c2fc04bf1471d2e820d09e51da06f.
Jul 11 05:24:40.414550 systemd[1]: cri-containerd-1ec5437957121399ae7599b7e4f48acb6b2c2fc04bf1471d2e820d09e51da06f.scope: Deactivated successfully.
Jul 11 05:24:40.415698 containerd[1574]: time="2025-07-11T05:24:40.415125340Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1ec5437957121399ae7599b7e4f48acb6b2c2fc04bf1471d2e820d09e51da06f\" id:\"1ec5437957121399ae7599b7e4f48acb6b2c2fc04bf1471d2e820d09e51da06f\" pid:4713 exited_at:{seconds:1752211480 nanos:414900060}"
Jul 11 05:24:40.417427 containerd[1574]: time="2025-07-11T05:24:40.417376045Z" level=info msg="received exit event container_id:\"1ec5437957121399ae7599b7e4f48acb6b2c2fc04bf1471d2e820d09e51da06f\" id:\"1ec5437957121399ae7599b7e4f48acb6b2c2fc04bf1471d2e820d09e51da06f\" pid:4713 exited_at:{seconds:1752211480 nanos:414900060}"
Jul 11 05:24:40.419099 containerd[1574]: time="2025-07-11T05:24:40.419077489Z" level=info msg="StartContainer for \"1ec5437957121399ae7599b7e4f48acb6b2c2fc04bf1471d2e820d09e51da06f\" returns successfully"
Jul 11 05:24:40.440090 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ec5437957121399ae7599b7e4f48acb6b2c2fc04bf1471d2e820d09e51da06f-rootfs.mount: Deactivated successfully.
Jul 11 05:24:41.080948 kubelet[2701]: E0711 05:24:41.080901 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:41.339345 kubelet[2701]: E0711 05:24:41.338644 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:41.344639 containerd[1574]: time="2025-07-11T05:24:41.344572462Z" level=info msg="CreateContainer within sandbox \"2c63af745f8fe2d10e06989619a24c7879902b8abf1ba21ab793848bd1901fd3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 11 05:24:41.355336 containerd[1574]: time="2025-07-11T05:24:41.355271424Z" level=info msg="Container 37d0fd3cb8656bc8f54b54232261e4b183d082bdd3907f1c6bbb38d84f08026b: CDI devices from CRI Config.CDIDevices: []"
Jul 11 05:24:41.364208 containerd[1574]: time="2025-07-11T05:24:41.364130841Z" level=info msg="CreateContainer within sandbox \"2c63af745f8fe2d10e06989619a24c7879902b8abf1ba21ab793848bd1901fd3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"37d0fd3cb8656bc8f54b54232261e4b183d082bdd3907f1c6bbb38d84f08026b\""
Jul 11 05:24:41.364766 containerd[1574]: time="2025-07-11T05:24:41.364720137Z" level=info msg="StartContainer for \"37d0fd3cb8656bc8f54b54232261e4b183d082bdd3907f1c6bbb38d84f08026b\""
Jul 11 05:24:41.365824 containerd[1574]: time="2025-07-11T05:24:41.365794221Z" level=info msg="connecting to shim 37d0fd3cb8656bc8f54b54232261e4b183d082bdd3907f1c6bbb38d84f08026b" address="unix:///run/containerd/s/38ac34eabd5397d7834b12de6473623c4bcc72ac1b7c467a15bed32799c02c22" protocol=ttrpc version=3
Jul 11 05:24:41.387902 systemd[1]: Started cri-containerd-37d0fd3cb8656bc8f54b54232261e4b183d082bdd3907f1c6bbb38d84f08026b.scope - libcontainer container 37d0fd3cb8656bc8f54b54232261e4b183d082bdd3907f1c6bbb38d84f08026b.
Jul 11 05:24:41.431785 containerd[1574]: time="2025-07-11T05:24:41.431721562Z" level=info msg="StartContainer for \"37d0fd3cb8656bc8f54b54232261e4b183d082bdd3907f1c6bbb38d84f08026b\" returns successfully"
Jul 11 05:24:41.513663 containerd[1574]: time="2025-07-11T05:24:41.513613012Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37d0fd3cb8656bc8f54b54232261e4b183d082bdd3907f1c6bbb38d84f08026b\" id:\"787c65a46922d7a30c771fc47701be8a6ad45e4ead287770acc327dff8b220b5\" pid:4781 exited_at:{seconds:1752211481 nanos:513289395}"
Jul 11 05:24:41.901764 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jul 11 05:24:42.344887 kubelet[2701]: E0711 05:24:42.344840 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:42.359163 kubelet[2701]: I0711 05:24:42.359075 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gxjmj" podStartSLOduration=6.359054722 podStartE2EDuration="6.359054722s" podCreationTimestamp="2025-07-11 05:24:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:24:42.35883346 +0000 UTC m=+84.574770411" watchObservedRunningTime="2025-07-11 05:24:42.359054722 +0000 UTC m=+84.574991642"
Jul 11 05:24:43.347081 kubelet[2701]: E0711 05:24:43.347039 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:43.479366 containerd[1574]: time="2025-07-11T05:24:43.479257444Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37d0fd3cb8656bc8f54b54232261e4b183d082bdd3907f1c6bbb38d84f08026b\" id:\"18a20c984ae374fd68e9c134730f54e9a5ba77f69d187794643ec372adc414c9\" pid:4901 exit_status:1 exited_at:{seconds:1752211483 nanos:478519604}"
Jul 11 05:24:44.349623 kubelet[2701]: E0711 05:24:44.349574 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:45.131417 systemd-networkd[1491]: lxc_health: Link UP
Jul 11 05:24:45.134254 systemd-networkd[1491]: lxc_health: Gained carrier
Jul 11 05:24:45.353770 kubelet[2701]: E0711 05:24:45.353591 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:45.617966 containerd[1574]: time="2025-07-11T05:24:45.617910165Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37d0fd3cb8656bc8f54b54232261e4b183d082bdd3907f1c6bbb38d84f08026b\" id:\"9090340020bee9ed54db6657edd304bdad095a4c42546b430320510345680f16\" pid:5307 exited_at:{seconds:1752211485 nanos:617379503}"
Jul 11 05:24:46.354979 kubelet[2701]: E0711 05:24:46.354923 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:46.365002 systemd-networkd[1491]: lxc_health: Gained IPv6LL
Jul 11 05:24:47.357672 kubelet[2701]: E0711 05:24:47.357553 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:47.756335 kubelet[2701]: E0711 05:24:47.756243 2701 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46168->127.0.0.1:40305: write tcp 127.0.0.1:46168->127.0.0.1:40305: write: broken pipe
Jul 11 05:24:47.757148 containerd[1574]: time="2025-07-11T05:24:47.757046571Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37d0fd3cb8656bc8f54b54232261e4b183d082bdd3907f1c6bbb38d84f08026b\" id:\"94b962426fa95447bf76b3401fff013f9181b2531f05065511562b8496e01e2b\" pid:5343 exited_at:{seconds:1752211487 nanos:746127223}"
Jul 11 05:24:49.875330 containerd[1574]: time="2025-07-11T05:24:49.875194572Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37d0fd3cb8656bc8f54b54232261e4b183d082bdd3907f1c6bbb38d84f08026b\" id:\"83dad9b410b68a6bbc2c75aa418d48a668a7ef1b452107d83cbeda7f56aad352\" pid:5373 exited_at:{seconds:1752211489 nanos:874699499}"
Jul 11 05:24:51.970399 containerd[1574]: time="2025-07-11T05:24:51.970354684Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37d0fd3cb8656bc8f54b54232261e4b183d082bdd3907f1c6bbb38d84f08026b\" id:\"2548d44fa5ce09e58c1482cc8525e22872a8d484a58a9daaacfbbfa0aa39fea4\" pid:5397 exited_at:{seconds:1752211491 nanos:969828042}"
Jul 11 05:24:51.972330 kubelet[2701]: E0711 05:24:51.972295 2701 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46194->127.0.0.1:40305: write tcp 127.0.0.1:46194->127.0.0.1:40305: write: broken pipe
Jul 11 05:24:51.985424 sshd[4518]: Connection closed by 10.0.0.1 port 55652
Jul 11 05:24:51.985897 sshd-session[4510]: pam_unix(sshd:session): session closed for user core
Jul 11 05:24:51.990763 systemd[1]: sshd@26-10.0.0.87:22-10.0.0.1:55652.service: Deactivated successfully.
Jul 11 05:24:51.992689 systemd[1]: session-27.scope: Deactivated successfully.
Jul 11 05:24:51.993580 systemd-logind[1554]: Session 27 logged out. Waiting for processes to exit.
Jul 11 05:24:51.994952 systemd-logind[1554]: Removed session 27.