Oct 13 05:37:24.392373 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Oct 13 03:31:29 -00 2025 Oct 13 05:37:24.392398 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d Oct 13 05:37:24.392407 kernel: BIOS-provided physical RAM map: Oct 13 05:37:24.392414 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 13 05:37:24.392421 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 13 05:37:24.392428 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 13 05:37:24.392438 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Oct 13 05:37:24.392445 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Oct 13 05:37:24.392456 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 13 05:37:24.392463 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Oct 13 05:37:24.392470 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 13 05:37:24.392476 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 13 05:37:24.392483 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 13 05:37:24.392490 kernel: NX (Execute Disable) protection: active Oct 13 05:37:24.392501 kernel: APIC: Static calls initialized Oct 13 05:37:24.392508 kernel: SMBIOS 2.8 present. 
Oct 13 05:37:24.392518 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Oct 13 05:37:24.392526 kernel: DMI: Memory slots populated: 1/1 Oct 13 05:37:24.392533 kernel: Hypervisor detected: KVM Oct 13 05:37:24.392541 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 13 05:37:24.392548 kernel: kvm-clock: using sched offset of 3854873362 cycles Oct 13 05:37:24.392559 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 13 05:37:24.392567 kernel: tsc: Detected 2794.750 MHz processor Oct 13 05:37:24.392575 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 13 05:37:24.392584 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 13 05:37:24.392591 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Oct 13 05:37:24.392599 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 13 05:37:24.392607 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 13 05:37:24.392617 kernel: Using GB pages for direct mapping Oct 13 05:37:24.392638 kernel: ACPI: Early table checksum verification disabled Oct 13 05:37:24.392646 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Oct 13 05:37:24.392654 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:37:24.392662 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:37:24.392670 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:37:24.392677 kernel: ACPI: FACS 0x000000009CFE0000 000040 Oct 13 05:37:24.392688 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:37:24.392696 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:37:24.392704 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:37:24.392711 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:37:24.392729 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Oct 13 05:37:24.392740 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Oct 13 05:37:24.392750 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Oct 13 05:37:24.392758 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Oct 13 05:37:24.392767 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Oct 13 05:37:24.392774 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Oct 13 05:37:24.392782 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Oct 13 05:37:24.392790 kernel: No NUMA configuration found Oct 13 05:37:24.392801 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Oct 13 05:37:24.392809 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Oct 13 05:37:24.392817 kernel: Zone ranges: Oct 13 05:37:24.392825 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 13 05:37:24.392832 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Oct 13 05:37:24.392840 kernel: Normal empty Oct 13 05:37:24.392848 kernel: Device empty Oct 13 05:37:24.392858 kernel: Movable zone start for each node Oct 13 05:37:24.392866 kernel: Early memory node ranges Oct 13 05:37:24.392874 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 13 05:37:24.392882 kernel: node 0: [mem 
0x0000000000100000-0x000000009cfdbfff] Oct 13 05:37:24.392890 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Oct 13 05:37:24.392898 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 13 05:37:24.392906 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 13 05:37:24.392914 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Oct 13 05:37:24.392924 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 13 05:37:24.392936 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 13 05:37:24.392944 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 13 05:37:24.392952 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 13 05:37:24.392960 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 13 05:37:24.392970 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 13 05:37:24.392978 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 13 05:37:24.392995 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 13 05:37:24.393004 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 13 05:37:24.393012 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 13 05:37:24.393020 kernel: TSC deadline timer available Oct 13 05:37:24.393028 kernel: CPU topo: Max. logical packages: 1 Oct 13 05:37:24.393036 kernel: CPU topo: Max. logical dies: 1 Oct 13 05:37:24.393044 kernel: CPU topo: Max. dies per package: 1 Oct 13 05:37:24.393054 kernel: CPU topo: Max. threads per core: 1 Oct 13 05:37:24.393062 kernel: CPU topo: Num. cores per package: 4 Oct 13 05:37:24.393070 kernel: CPU topo: Num. threads per package: 4 Oct 13 05:37:24.393078 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Oct 13 05:37:24.393086 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 13 05:37:24.393093 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 13 05:37:24.393101 kernel: kvm-guest: setup PV sched yield Oct 13 05:37:24.393109 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Oct 13 05:37:24.393120 kernel: Booting paravirtualized kernel on KVM Oct 13 05:37:24.393128 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 13 05:37:24.393136 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 13 05:37:24.393144 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Oct 13 05:37:24.393152 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Oct 13 05:37:24.393160 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 13 05:37:24.393168 kernel: kvm-guest: PV spinlocks enabled Oct 13 05:37:24.393178 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 13 05:37:24.393187 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d Oct 13 05:37:24.393196 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Oct 13 05:37:24.393204 kernel: random: crng init done Oct 13 05:37:24.393212 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 13 05:37:24.393220 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 13 05:37:24.393230 kernel: Fallback order for Node 0: 0 Oct 13 05:37:24.393238 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 Oct 13 05:37:24.393246 kernel: Policy zone: DMA32 Oct 13 05:37:24.393254 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 13 05:37:24.393262 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 13 05:37:24.393270 kernel: ftrace: allocating 40210 entries in 158 pages Oct 13 05:37:24.393280 kernel: ftrace: allocated 158 pages with 5 groups Oct 13 05:37:24.393289 kernel: Dynamic Preempt: voluntary Oct 13 05:37:24.393301 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 13 05:37:24.393310 kernel: rcu: RCU event tracing is enabled. Oct 13 05:37:24.393318 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 13 05:37:24.393326 kernel: Trampoline variant of Tasks RCU enabled. Oct 13 05:37:24.393337 kernel: Rude variant of Tasks RCU enabled. Oct 13 05:37:24.393345 kernel: Tracing variant of Tasks RCU enabled. Oct 13 05:37:24.393353 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 13 05:37:24.393360 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 13 05:37:24.393371 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 13 05:37:24.393379 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 13 05:37:24.393387 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 13 05:37:24.393395 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 13 05:37:24.393403 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 13 05:37:24.393420 kernel: Console: colour VGA+ 80x25 Oct 13 05:37:24.393428 kernel: printk: legacy console [ttyS0] enabled Oct 13 05:37:24.393436 kernel: ACPI: Core revision 20240827 Oct 13 05:37:24.393445 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 13 05:37:24.393455 kernel: APIC: Switch to symmetric I/O mode setup Oct 13 05:37:24.393464 kernel: x2apic enabled Oct 13 05:37:24.393474 kernel: APIC: Switched APIC routing to: physical x2apic Oct 13 05:37:24.393483 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 13 05:37:24.393492 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 13 05:37:24.393502 kernel: kvm-guest: setup PV IPIs Oct 13 05:37:24.393510 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 13 05:37:24.393519 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Oct 13 05:37:24.393528 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Oct 13 05:37:24.393536 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 13 05:37:24.393544 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 13 05:37:24.393555 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 13 05:37:24.393563 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 13 05:37:24.393572 kernel: Spectre V2 : Mitigation: Retpolines Oct 13 05:37:24.393580 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 13 05:37:24.393588 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 13 05:37:24.393597 kernel: active return thunk: retbleed_return_thunk Oct 13 05:37:24.393605 kernel: RETBleed: Mitigation: untrained return thunk Oct 13 05:37:24.393615 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 13 05:37:24.393637 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 13 05:37:24.393646 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 13 05:37:24.393655 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 13 05:37:24.393674 kernel: active return thunk: srso_return_thunk Oct 13 05:37:24.393683 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 13 05:37:24.393695 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 13 05:37:24.393703 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 13 05:37:24.393712 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 13 05:37:24.393720 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 13 05:37:24.393728 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 13 05:37:24.393737 kernel: Freeing SMP alternatives memory: 32K Oct 13 05:37:24.393745 kernel: pid_max: default: 32768 minimum: 301 Oct 13 05:37:24.393756 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 13 05:37:24.393764 kernel: landlock: Up and running. Oct 13 05:37:24.393772 kernel: SELinux: Initializing. Oct 13 05:37:24.393783 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 13 05:37:24.393792 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 13 05:37:24.393800 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 13 05:37:24.393809 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 13 05:37:24.393819 kernel: ... version: 0 Oct 13 05:37:24.393827 kernel: ... bit width: 48 Oct 13 05:37:24.393836 kernel: ... generic registers: 6 Oct 13 05:37:24.393844 kernel: ... value mask: 0000ffffffffffff Oct 13 05:37:24.393852 kernel: ... max period: 00007fffffffffff Oct 13 05:37:24.393860 kernel: ... fixed-purpose events: 0 Oct 13 05:37:24.393869 kernel: ... event mask: 000000000000003f Oct 13 05:37:24.393877 kernel: signal: max sigframe size: 1776 Oct 13 05:37:24.393888 kernel: rcu: Hierarchical SRCU implementation. Oct 13 05:37:24.393896 kernel: rcu: Max phase no-delay instances is 400. Oct 13 05:37:24.393905 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 13 05:37:24.393914 kernel: smp: Bringing up secondary CPUs ... 
Oct 13 05:37:24.393922 kernel: smpboot: x86: Booting SMP configuration: Oct 13 05:37:24.393930 kernel: .... node #0, CPUs: #1 #2 #3 Oct 13 05:37:24.393938 kernel: smp: Brought up 1 node, 4 CPUs Oct 13 05:37:24.393949 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Oct 13 05:37:24.393958 kernel: Memory: 2459628K/2571752K available (14336K kernel code, 2450K rwdata, 10012K rodata, 24532K init, 1684K bss, 106184K reserved, 0K cma-reserved) Oct 13 05:37:24.393966 kernel: devtmpfs: initialized Oct 13 05:37:24.393974 kernel: x86/mm: Memory block size: 128MB Oct 13 05:37:24.393983 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 13 05:37:24.393998 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 13 05:37:24.394006 kernel: pinctrl core: initialized pinctrl subsystem Oct 13 05:37:24.394017 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 13 05:37:24.394025 kernel: audit: initializing netlink subsys (disabled) Oct 13 05:37:24.394034 kernel: audit: type=2000 audit(1760333841.843:1): state=initialized audit_enabled=0 res=1 Oct 13 05:37:24.394042 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 13 05:37:24.394051 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 13 05:37:24.394059 kernel: cpuidle: using governor menu Oct 13 05:37:24.394067 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 13 05:37:24.394078 kernel: dca service started, version 1.12.1 Oct 13 05:37:24.394086 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Oct 13 05:37:24.394094 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Oct 13 05:37:24.394103 kernel: PCI: Using configuration type 1 for base access Oct 13 05:37:24.394111 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 13 05:37:24.394120 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 13 05:37:24.394128 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 13 05:37:24.394139 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 13 05:37:24.394147 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 13 05:37:24.394155 kernel: ACPI: Added _OSI(Module Device) Oct 13 05:37:24.394163 kernel: ACPI: Added _OSI(Processor Device) Oct 13 05:37:24.394172 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 13 05:37:24.394180 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 13 05:37:24.394191 kernel: ACPI: Interpreter enabled Oct 13 05:37:24.394209 kernel: ACPI: PM: (supports S0 S3 S5) Oct 13 05:37:24.394226 kernel: ACPI: Using IOAPIC for interrupt routing Oct 13 05:37:24.394238 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 13 05:37:24.394250 kernel: PCI: Using E820 reservations for host bridge windows Oct 13 05:37:24.394262 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 13 05:37:24.394274 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 13 05:37:24.394554 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 13 05:37:24.394804 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 13 05:37:24.395034 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 13 05:37:24.395051 kernel: PCI host bridge to bus 0000:00 Oct 13 05:37:24.395266 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 13 05:37:24.395463 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 13 05:37:24.395691 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 13 05:37:24.395883 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Oct 13 05:37:24.396054 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 13 05:37:24.396213 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Oct 13 05:37:24.396392 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 13 05:37:24.396586 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Oct 13 05:37:24.396810 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Oct 13 05:37:24.396985 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Oct 13 05:37:24.397175 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Oct 13 05:37:24.397345 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Oct 13 05:37:24.397514 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 13 05:37:24.397739 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Oct 13 05:37:24.397981 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Oct 13 05:37:24.398227 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Oct 13 05:37:24.398463 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Oct 13 05:37:24.398724 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Oct 13 05:37:24.398953 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Oct 13 05:37:24.399198 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Oct 13 05:37:24.399418 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Oct 13 05:37:24.399671 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Oct 13 05:37:24.399891 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Oct 13 05:37:24.400121 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Oct 13 05:37:24.400339 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Oct 13 05:37:24.400593 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Oct 13 05:37:24.400873 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Oct 13 05:37:24.401103 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 13 05:37:24.401323 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Oct 13 05:37:24.401532 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Oct 13 05:37:24.401772 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Oct 13 05:37:24.402024 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Oct 13 05:37:24.402257 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Oct 13 05:37:24.402277 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 13 05:37:24.402290 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 13 05:37:24.402301 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 13 05:37:24.402314 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 13 05:37:24.402331 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 13 05:37:24.402343 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 13 05:37:24.402355 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 13 05:37:24.402367 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 13 05:37:24.402379 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 13 05:37:24.402391 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 13 05:37:24.402401 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 13 05:37:24.402413 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 13 05:37:24.402421 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 13 05:37:24.402430 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 13 05:37:24.402438 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 13 05:37:24.402446 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 13 05:37:24.402455 kernel: iommu: Default domain type: Translated Oct 13 05:37:24.402463 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 13 05:37:24.402474 kernel: PCI: Using ACPI for IRQ routing Oct 13 05:37:24.402482 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 13 05:37:24.402490 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 13 05:37:24.402499 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Oct 13 05:37:24.402731 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 13 05:37:24.402963 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 13 05:37:24.403202 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 13 05:37:24.403225 kernel: vgaarb: loaded Oct 13 05:37:24.403238 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 13 05:37:24.403250 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 13 05:37:24.403262 kernel: clocksource: Switched to clocksource kvm-clock Oct 13 05:37:24.403274 kernel: VFS: Disk quotas dquot_6.6.0 Oct 13 
05:37:24.403286 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 13 05:37:24.403298 kernel: pnp: PnP ACPI init Oct 13 05:37:24.403527 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 13 05:37:24.403545 kernel: pnp: PnP ACPI: found 6 devices Oct 13 05:37:24.403557 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 13 05:37:24.403569 kernel: NET: Registered PF_INET protocol family Oct 13 05:37:24.403581 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 13 05:37:24.403593 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 13 05:37:24.403609 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 13 05:37:24.403621 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 13 05:37:24.403653 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 13 05:37:24.403665 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 13 05:37:24.403677 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 13 05:37:24.403689 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 13 05:37:24.403701 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 13 05:37:24.403717 kernel: NET: Registered PF_XDP protocol family Oct 13 05:37:24.403926 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 13 05:37:24.404148 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 13 05:37:24.404358 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 13 05:37:24.404544 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Oct 13 05:37:24.404739 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 13 05:37:24.404900 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Oct 13 05:37:24.404917 kernel: PCI: CLS 0 bytes, default 64 Oct 13 05:37:24.404926 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Oct 13 05:37:24.404935 kernel: Initialise system trusted keyrings Oct 13 05:37:24.404943 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 13 05:37:24.404952 kernel: Key type asymmetric registered Oct 13 05:37:24.404960 kernel: Asymmetric key parser 'x509' registered Oct 13 05:37:24.404969 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 13 05:37:24.404980 kernel: io scheduler mq-deadline registered Oct 13 05:37:24.404996 kernel: io scheduler kyber registered Oct 13 05:37:24.405005 kernel: io scheduler bfq registered Oct 13 05:37:24.405014 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 13 05:37:24.405023 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 13 05:37:24.405032 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 13 05:37:24.405040 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 13 05:37:24.405052 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 13 05:37:24.405061 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 13 05:37:24.405070 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 13 05:37:24.405078 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 13 05:37:24.405087 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 13 05:37:24.405095 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input0 Oct 13 05:37:24.405273 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 13 05:37:24.405492 kernel: rtc_cmos 00:04: registered as rtc0 Oct 13 05:37:24.405771 kernel: rtc_cmos 00:04: setting system clock to 2025-10-13T05:37:22 UTC (1760333842) Oct 13 05:37:24.406268 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 13 05:37:24.406311 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 13 05:37:24.406340 kernel: NET: Registered PF_INET6 protocol family Oct 13 05:37:24.406365 kernel: Segment Routing with IPv6 Oct 13 05:37:24.406403 kernel: In-situ OAM (IOAM) with IPv6 Oct 13 05:37:24.406425 kernel: NET: Registered PF_PACKET protocol family Oct 13 05:37:24.406447 kernel: Key type dns_resolver registered Oct 13 05:37:24.406465 kernel: IPI shorthand broadcast: enabled Oct 13 05:37:24.406477 kernel: sched_clock: Marking stable (1293003722, 211202504)->(1635846161, -131639935) Oct 13 05:37:24.406489 kernel: registered taskstats version 1 Oct 13 05:37:24.406501 kernel: Loading compiled-in X.509 certificates Oct 13 05:37:24.406513 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: 9f1258ccc510afd4f2a37f4774c4b2e958d823b7' Oct 13 05:37:24.406528 kernel: Demotion targets for Node 0: null Oct 13 05:37:24.406540 kernel: Key type .fscrypt registered Oct 13 05:37:24.406551 kernel: Key type fscrypt-provisioning registered Oct 13 05:37:24.406563 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 13 05:37:24.406576 kernel: ima: Allocated hash algorithm: sha1 Oct 13 05:37:24.406587 kernel: ima: No architecture policies found Oct 13 05:37:24.406603 kernel: clk: Disabling unused clocks Oct 13 05:37:24.406614 kernel: Freeing unused kernel image (initmem) memory: 24532K Oct 13 05:37:24.406644 kernel: Write protecting the kernel read-only data: 24576k Oct 13 05:37:24.406658 kernel: Freeing unused kernel image (rodata/data gap) memory: 228K Oct 13 05:37:24.406670 kernel: Run /init as init process Oct 13 05:37:24.406682 kernel: with arguments: Oct 13 05:37:24.406693 kernel: /init Oct 13 05:37:24.406705 kernel: with environment: Oct 13 05:37:24.406720 kernel: HOME=/ Oct 13 05:37:24.406732 kernel: TERM=linux Oct 13 05:37:24.406744 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 13 05:37:24.406755 kernel: SCSI subsystem initialized Oct 13 05:37:24.406768 kernel: libata version 3.00 loaded. 
Oct 13 05:37:24.407027 kernel: ahci 0000:00:1f.2: version 3.0 Oct 13 05:37:24.407046 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 13 05:37:24.407273 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 13 05:37:24.407495 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 13 05:37:24.407743 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 13 05:37:24.408010 kernel: scsi host0: ahci Oct 13 05:37:24.408256 kernel: scsi host1: ahci Oct 13 05:37:24.408492 kernel: scsi host2: ahci Oct 13 05:37:24.408733 kernel: scsi host3: ahci Oct 13 05:37:24.408958 kernel: scsi host4: ahci Oct 13 05:37:24.409225 kernel: scsi host5: ahci Oct 13 05:37:24.409243 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1 Oct 13 05:37:24.409253 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1 Oct 13 05:37:24.409267 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1 Oct 13 05:37:24.409276 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1 Oct 13 05:37:24.409285 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1 Oct 13 05:37:24.409293 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1 Oct 13 05:37:24.409302 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 13 05:37:24.409311 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 13 05:37:24.409319 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 13 05:37:24.409331 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 13 05:37:24.409340 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 13 05:37:24.409348 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 13 05:37:24.409357 kernel: ata3.00: LPM support broken, forcing max_power Oct 13 05:37:24.409366 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 13 05:37:24.409374 kernel: ata3.00: applying bridge limits Oct 13 05:37:24.409383 kernel: ata3.00: LPM support broken, forcing max_power Oct 13 05:37:24.409394 kernel: ata3.00: configured for UDMA/100 Oct 13 05:37:24.409600 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 13 05:37:24.409810 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 13 05:37:24.409985 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 13 05:37:24.410010 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 13 05:37:24.410019 kernel: GPT:16515071 != 27000831 Oct 13 05:37:24.410032 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 13 05:37:24.410041 kernel: GPT:16515071 != 27000831 Oct 13 05:37:24.410050 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 13 05:37:24.410058 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 13 05:37:24.410068 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:37:24.410261 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 13 05:37:24.410273 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 13 05:37:24.410465 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 13 05:37:24.410478 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 13 05:37:24.410486 kernel: device-mapper: uevent: version 1.0.3 Oct 13 05:37:24.410495 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 13 05:37:24.410504 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 13 05:37:24.410514 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:37:24.410525 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:37:24.410534 kernel: raid6: avx2x4 gen() 30455 MB/s Oct 13 05:37:24.410545 kernel: raid6: avx2x2 gen() 31004 MB/s Oct 13 05:37:24.410553 kernel: raid6: avx2x1 gen() 25942 MB/s Oct 13 05:37:24.410562 kernel: raid6: using algorithm avx2x2 gen() 31004 MB/s Oct 13 05:37:24.410573 kernel: raid6: .... xor() 19955 MB/s, rmw enabled Oct 13 05:37:24.410582 kernel: raid6: using avx2x2 recovery algorithm Oct 13 05:37:24.410591 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:37:24.410599 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:37:24.410607 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:37:24.410616 kernel: xor: automatically using best checksumming function avx Oct 13 05:37:24.410625 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:37:24.410647 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 13 05:37:24.410659 kernel: BTRFS: device fsid e87b15e9-127c-40e2-bae7-d0ea05b4f2e3 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (194) Oct 13 05:37:24.410668 kernel: BTRFS info (device dm-0): first mount of filesystem e87b15e9-127c-40e2-bae7-d0ea05b4f2e3 Oct 13 05:37:24.410677 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:37:24.410686 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 13 05:37:24.410695 kernel: BTRFS info (device dm-0): enabling free space tree Oct 13 05:37:24.410704 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:37:24.410712 kernel: loop: module loaded Oct 13 05:37:24.410723 kernel: loop0: detected capacity change from 0 to 100048 Oct 13 05:37:24.410732 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 13 05:37:24.410742 systemd[1]: Successfully made /usr/ read-only. Oct 13 05:37:24.410754 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 05:37:24.410764 systemd[1]: Detected virtualization kvm. Oct 13 05:37:24.410773 systemd[1]: Detected architecture x86-64. Oct 13 05:37:24.410784 systemd[1]: Running in initrd. Oct 13 05:37:24.410793 systemd[1]: No hostname configured, using default hostname. Oct 13 05:37:24.410803 systemd[1]: Hostname set to . Oct 13 05:37:24.410812 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 13 05:37:24.410821 systemd[1]: Queued start job for default target initrd.target. Oct 13 05:37:24.410830 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 13 05:37:24.410839 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:37:24.410851 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:37:24.410861 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Oct 13 05:37:24.410870 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 05:37:24.410880 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 13 05:37:24.410889 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 13 05:37:24.410901 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:37:24.410910 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:37:24.410919 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 13 05:37:24.410929 systemd[1]: Reached target paths.target - Path Units. Oct 13 05:37:24.410938 systemd[1]: Reached target slices.target - Slice Units. Oct 13 05:37:24.410947 systemd[1]: Reached target swap.target - Swaps. Oct 13 05:37:24.410956 systemd[1]: Reached target timers.target - Timer Units. Oct 13 05:37:24.410967 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 05:37:24.410976 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 05:37:24.410986 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 13 05:37:24.411006 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 13 05:37:24.411015 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:37:24.411024 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 05:37:24.411034 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:37:24.411045 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 05:37:24.411055 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 13 05:37:24.411064 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 13 05:37:24.411074 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 05:37:24.411083 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 13 05:37:24.411093 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 13 05:37:24.411104 systemd[1]: Starting systemd-fsck-usr.service... Oct 13 05:37:24.411113 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 05:37:24.411122 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 05:37:24.411132 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:37:24.411141 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 13 05:37:24.411153 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:37:24.411162 systemd[1]: Finished systemd-fsck-usr.service. Oct 13 05:37:24.411171 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 05:37:24.411203 systemd-journald[330]: Collecting audit messages is disabled. Oct 13 05:37:24.411227 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Oct 13 05:37:24.411236 systemd-journald[330]: Journal started Oct 13 05:37:24.411255 systemd-journald[330]: Runtime Journal (/run/log/journal/b97031c532454d9bae01b892123c3642) is 6M, max 48.6M, 42.5M free. Oct 13 05:37:24.424653 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 05:37:24.429648 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 05:37:24.433702 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 05:37:24.435748 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 05:37:24.456763 kernel: Bridge firewalling registered Oct 13 05:37:24.459540 systemd-modules-load[331]: Inserted module 'br_netfilter' Oct 13 05:37:24.465066 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 05:37:24.467153 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 05:37:24.473527 systemd-tmpfiles[349]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 13 05:37:24.537374 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:37:24.542058 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:37:24.546137 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:37:24.574430 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 13 05:37:24.588247 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:37:24.589994 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 05:37:24.610809 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 05:37:24.615701 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 13 05:37:24.644334 dracut-cmdline[373]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d Oct 13 05:37:24.666912 systemd-resolved[364]: Positive Trust Anchors: Oct 13 05:37:24.666927 systemd-resolved[364]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 05:37:24.666933 systemd-resolved[364]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 13 05:37:24.666972 systemd-resolved[364]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 05:37:24.687645 systemd-resolved[364]: Defaulting to hostname 'linux'. Oct 13 05:37:24.690370 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Oct 13 05:37:24.690527 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:37:24.777676 kernel: Loading iSCSI transport class v2.0-870. Oct 13 05:37:24.791662 kernel: iscsi: registered transport (tcp) Oct 13 05:37:24.817685 kernel: iscsi: registered transport (qla4xxx) Oct 13 05:37:24.817749 kernel: QLogic iSCSI HBA Driver Oct 13 05:37:24.846807 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 05:37:24.867740 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:37:24.868731 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 05:37:24.943396 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 13 05:37:24.945150 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 13 05:37:24.950032 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 13 05:37:24.988467 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 13 05:37:24.990348 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:37:25.024617 systemd-udevd[609]: Using default interface naming scheme 'v257'. Oct 13 05:37:25.040246 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:37:25.043147 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 13 05:37:25.070719 dracut-pre-trigger[664]: rd.md=0: removing MD RAID activation Oct 13 05:37:25.096488 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 05:37:25.102620 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 05:37:25.106401 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 05:37:25.117577 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 05:37:25.160412 systemd-networkd[744]: lo: Link UP Oct 13 05:37:25.160422 systemd-networkd[744]: lo: Gained carrier Oct 13 05:37:25.161852 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 05:37:25.164827 systemd[1]: Reached target network.target - Network. Oct 13 05:37:25.219132 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:37:25.226128 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 13 05:37:25.263301 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 13 05:37:25.289707 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 13 05:37:25.309401 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 13 05:37:25.321671 kernel: cryptd: max_cpu_qlen set to 1000 Oct 13 05:37:25.337668 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Oct 13 05:37:25.346649 kernel: AES CTR mode by8 optimization enabled Oct 13 05:37:25.357894 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 13 05:37:25.365825 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 13 05:37:25.379833 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:37:25.379907 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Oct 13 05:37:25.383403 systemd-networkd[744]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:37:25.383408 systemd-networkd[744]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 05:37:25.395962 disk-uuid[846]: Primary Header is updated. Oct 13 05:37:25.395962 disk-uuid[846]: Secondary Entries is updated. Oct 13 05:37:25.395962 disk-uuid[846]: Secondary Header is updated. Oct 13 05:37:25.383841 systemd-networkd[744]: eth0: Link UP Oct 13 05:37:25.385459 systemd-networkd[744]: eth0: Gained carrier Oct 13 05:37:25.385469 systemd-networkd[744]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:37:25.389001 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:37:25.393782 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:37:25.398476 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 13 05:37:25.406678 systemd-networkd[744]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 13 05:37:25.412502 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 05:37:25.420184 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:37:25.422187 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 05:37:25.444684 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 13 05:37:25.529921 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:37:25.551050 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 13 05:37:26.442771 disk-uuid[855]: Warning: The kernel is still using the old partition table. Oct 13 05:37:26.442771 disk-uuid[855]: The new table will be used at the next reboot or after you Oct 13 05:37:26.442771 disk-uuid[855]: run partprobe(8) or kpartx(8) Oct 13 05:37:26.442771 disk-uuid[855]: The operation has completed successfully. Oct 13 05:37:26.458831 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 13 05:37:26.459172 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 13 05:37:26.463558 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 13 05:37:26.836181 systemd-networkd[744]: eth0: Gained IPv6LL Oct 13 05:37:26.842662 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (885) Oct 13 05:37:26.846351 kernel: BTRFS info (device vda6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:37:26.846382 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:37:26.850297 kernel: BTRFS info (device vda6): turning on async discard Oct 13 05:37:26.850319 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 05:37:26.859654 kernel: BTRFS info (device vda6): last unmount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:37:26.861047 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 13 05:37:26.864811 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Oct 13 05:37:27.116708 ignition[904]: Ignition 2.22.0 Oct 13 05:37:27.118086 ignition[904]: Stage: fetch-offline Oct 13 05:37:27.118217 ignition[904]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:37:27.118235 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:37:27.118401 ignition[904]: parsed url from cmdline: "" Oct 13 05:37:27.118405 ignition[904]: no config URL provided Oct 13 05:37:27.118410 ignition[904]: reading system config file "/usr/lib/ignition/user.ign" Oct 13 05:37:27.118426 ignition[904]: no config at "/usr/lib/ignition/user.ign" Oct 13 05:37:27.118497 ignition[904]: op(1): [started] loading QEMU firmware config module Oct 13 05:37:27.118506 ignition[904]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 13 05:37:27.130711 ignition[904]: op(1): [finished] loading QEMU firmware config module Oct 13 05:37:27.209405 ignition[904]: parsing config with SHA512: 470bcfe7df2c00ebe6a20f5e22e49ec96d126ab0eaa0ad8502d2721a593b29e45bf7751cb22cc5d9153dea9024b5def153a369c1ce0c201d8a0e5115e7f4c885 Oct 13 05:37:27.214621 unknown[904]: fetched base config from "system" Oct 13 05:37:27.214653 unknown[904]: fetched user config from "qemu" Oct 13 05:37:27.215336 ignition[904]: fetch-offline: fetch-offline passed Oct 13 05:37:27.215399 ignition[904]: Ignition finished successfully Oct 13 05:37:27.218724 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 05:37:27.220991 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 13 05:37:27.222048 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 13 05:37:27.331190 ignition[915]: Ignition 2.22.0 Oct 13 05:37:27.331204 ignition[915]: Stage: kargs Oct 13 05:37:27.331404 ignition[915]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:37:27.331416 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:37:27.332688 ignition[915]: kargs: kargs passed Oct 13 05:37:27.332737 ignition[915]: Ignition finished successfully Oct 13 05:37:27.337652 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 13 05:37:27.340557 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 13 05:37:27.387283 ignition[923]: Ignition 2.22.0 Oct 13 05:37:27.387300 ignition[923]: Stage: disks Oct 13 05:37:27.387475 ignition[923]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:37:27.387486 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:37:27.391513 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 13 05:37:27.388492 ignition[923]: disks: disks passed Oct 13 05:37:27.394885 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 13 05:37:27.388551 ignition[923]: Ignition finished successfully Oct 13 05:37:27.398035 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 13 05:37:27.401073 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 05:37:27.401158 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:37:27.401423 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:37:27.403205 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Oct 13 05:37:27.440401 systemd-fsck[933]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 13 05:37:27.448151 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 13 05:37:27.449895 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 13 05:37:27.649671 kernel: EXT4-fs (vda9): mounted filesystem c7d6ef00-6dd1-40b4-91f2-c4c5965e3cac r/w with ordered data mode. Quota mode: none. Oct 13 05:37:27.650426 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 13 05:37:27.652431 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 13 05:37:27.656826 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 05:37:27.657848 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 13 05:37:27.659802 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 13 05:37:27.659836 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 13 05:37:27.659862 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 05:37:27.690079 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 13 05:37:27.698485 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (942) Oct 13 05:37:27.698517 kernel: BTRFS info (device vda6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:37:27.698533 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:37:27.693461 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 13 05:37:27.704756 kernel: BTRFS info (device vda6): turning on async discard Oct 13 05:37:27.704788 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 05:37:27.706660 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 13 05:37:27.761505 initrd-setup-root[966]: cut: /sysroot/etc/passwd: No such file or directory Oct 13 05:37:27.767142 initrd-setup-root[973]: cut: /sysroot/etc/group: No such file or directory Oct 13 05:37:27.773412 initrd-setup-root[980]: cut: /sysroot/etc/shadow: No such file or directory Oct 13 05:37:27.778443 initrd-setup-root[987]: cut: /sysroot/etc/gshadow: No such file or directory Oct 13 05:37:27.892243 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 13 05:37:27.896048 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 13 05:37:27.898383 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 13 05:37:27.926229 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 13 05:37:27.928867 kernel: BTRFS info (device vda6): last unmount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:37:27.947812 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 13 05:37:27.995297 ignition[1056]: INFO : Ignition 2.22.0 Oct 13 05:37:27.996971 ignition[1056]: INFO : Stage: mount Oct 13 05:37:27.998211 ignition[1056]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:37:27.998211 ignition[1056]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:37:28.005301 ignition[1056]: INFO : mount: mount passed Oct 13 05:37:28.006642 ignition[1056]: INFO : Ignition finished successfully Oct 13 05:37:28.010028 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 13 05:37:28.014699 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Oct 13 05:37:28.652799 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 05:37:28.685696 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1068) Oct 13 05:37:28.685779 kernel: BTRFS info (device vda6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:37:28.685792 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:37:28.690922 kernel: BTRFS info (device vda6): turning on async discard Oct 13 05:37:28.690975 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 05:37:28.693072 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 13 05:37:28.739702 ignition[1085]: INFO : Ignition 2.22.0 Oct 13 05:37:28.739702 ignition[1085]: INFO : Stage: files Oct 13 05:37:28.742458 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:37:28.742458 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:37:28.742458 ignition[1085]: DEBUG : files: compiled without relabeling support, skipping Oct 13 05:37:28.742458 ignition[1085]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 13 05:37:28.742458 ignition[1085]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 13 05:37:28.752600 ignition[1085]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 13 05:37:28.752600 ignition[1085]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 13 05:37:28.752600 ignition[1085]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 13 05:37:28.752600 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 13 05:37:28.752600 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 13 05:37:28.746219 unknown[1085]: wrote ssh authorized keys file for user: core Oct 13 05:37:28.795041 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 13 05:37:28.883704 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 13 05:37:28.883704 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 13 05:37:28.889983 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Oct 13 05:37:29.097457 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 13 05:37:29.285511 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 13 05:37:29.285511 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 13 05:37:29.291492 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 13 05:37:29.291492 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 13 05:37:29.291492 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Oct 13 05:37:29.291492 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 05:37:29.291492 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 05:37:29.291492 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 05:37:29.291492 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 05:37:29.291492 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 05:37:29.291492 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 05:37:29.291492 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 13 05:37:29.320710 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 13 05:37:29.320710 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 13 05:37:29.320710 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Oct 13 05:37:29.523753 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 13 05:37:30.172795 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 13 05:37:30.172795 ignition[1085]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Oct 13 05:37:30.178853 ignition[1085]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 05:37:30.183977 ignition[1085]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 05:37:30.183977 ignition[1085]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Oct 13 05:37:30.183977 ignition[1085]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Oct 13 05:37:30.191472 ignition[1085]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 13 05:37:30.191472 ignition[1085]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 13 05:37:30.191472 ignition[1085]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Oct 13 05:37:30.191472 ignition[1085]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Oct 13 05:37:30.214867 ignition[1085]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 13 05:37:30.225494 ignition[1085]: INFO : files: op(10): op(11): [finished] 
removing enablement symlink(s) for "coreos-metadata.service" Oct 13 05:37:30.228215 ignition[1085]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Oct 13 05:37:30.228215 ignition[1085]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Oct 13 05:37:30.228215 ignition[1085]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Oct 13 05:37:30.228215 ignition[1085]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 13 05:37:30.228215 ignition[1085]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 13 05:37:30.228215 ignition[1085]: INFO : files: files passed Oct 13 05:37:30.228215 ignition[1085]: INFO : Ignition finished successfully Oct 13 05:37:30.239163 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 13 05:37:30.248931 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 13 05:37:30.253807 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 13 05:37:30.275727 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 13 05:37:30.275881 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 13 05:37:30.282387 initrd-setup-root-after-ignition[1114]: grep: /sysroot/oem/oem-release: No such file or directory Oct 13 05:37:30.287343 initrd-setup-root-after-ignition[1116]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:37:30.287343 initrd-setup-root-after-ignition[1116]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:37:30.292675 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:37:30.294256 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 05:37:30.297873 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 13 05:37:30.303110 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 13 05:37:30.404224 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 13 05:37:30.404418 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 13 05:37:30.411198 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 13 05:37:30.414688 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 13 05:37:30.418969 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 13 05:37:30.422313 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 13 05:37:30.472324 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 05:37:30.474215 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 13 05:37:30.501743 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 13 05:37:30.501998 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:37:30.508266 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:37:30.512184 systemd[1]: Stopped target timers.target - Timer Units. 
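Each Ignition op in the files stage above is bracketed by a "[started]"/"[finished]" pair (op(3) fetches the helm tarball, op(b) fetches the kubernetes sysext image, and so on). A rough illustrative sketch that pairs those markers in saved journal text, so an op that never reports "[finished]" stands out; it assumes the log text is piped in on stdin and that op identifier chains are unique within the portion of the log being checked:

    import re
    import sys

    # Track Ignition ops by their "op(...)" identifier chain, e.g. "op(c): op(d)".
    STARTED = re.compile(r"((?:op\([0-9a-f]+\)(?:: )?)+): \[started\] (.+)")
    FINISHED = re.compile(r"((?:op\([0-9a-f]+\)(?:: )?)+): \[finished\]")

    open_ops = {}
    for line in sys.stdin:
        if m := STARTED.search(line):
            open_ops[m.group(1)] = m.group(2).strip()
        elif m := FINISHED.search(line):
            open_ops.pop(m.group(1), None)

    for op, description in open_ops.items():
        print(f"never finished: {op} {description}")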
Oct 13 05:37:30.512905 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 13 05:37:30.513036 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 05:37:30.520951 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 13 05:37:30.521154 systemd[1]: Stopped target basic.target - Basic System. Oct 13 05:37:30.524303 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 13 05:37:30.527169 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 05:37:30.530667 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 13 05:37:30.534394 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 13 05:37:30.538008 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 13 05:37:30.541690 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 05:37:30.543217 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 13 05:37:30.549062 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 13 05:37:30.552225 systemd[1]: Stopped target swap.target - Swaps. Oct 13 05:37:30.553069 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 13 05:37:30.553210 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 13 05:37:30.560798 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:37:30.564358 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:37:30.566122 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 13 05:37:30.569883 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:37:30.571600 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 13 05:37:30.571764 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 13 05:37:30.578920 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 13 05:37:30.579043 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 05:37:30.580727 systemd[1]: Stopped target paths.target - Path Units. Oct 13 05:37:30.584091 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 13 05:37:30.587696 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:37:30.588661 systemd[1]: Stopped target slices.target - Slice Units. Oct 13 05:37:30.592200 systemd[1]: Stopped target sockets.target - Socket Units. Oct 13 05:37:30.595927 systemd[1]: iscsid.socket: Deactivated successfully. Oct 13 05:37:30.596020 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 05:37:30.598795 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 13 05:37:30.598898 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 05:37:30.601757 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 13 05:37:30.601887 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 05:37:30.604671 systemd[1]: ignition-files.service: Deactivated successfully. Oct 13 05:37:30.604782 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 13 05:37:30.612860 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 13 05:37:30.615239 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Oct 13 05:37:30.617834 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 13 05:37:30.617971 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:37:30.622271 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 13 05:37:30.622392 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:37:30.623103 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 13 05:37:30.623207 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 05:37:30.642692 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 13 05:37:30.642819 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 13 05:37:30.667955 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 13 05:37:30.670081 ignition[1140]: INFO : Ignition 2.22.0 Oct 13 05:37:30.670081 ignition[1140]: INFO : Stage: umount Oct 13 05:37:30.673076 ignition[1140]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:37:30.673076 ignition[1140]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:37:30.673076 ignition[1140]: INFO : umount: umount passed Oct 13 05:37:30.673076 ignition[1140]: INFO : Ignition finished successfully Oct 13 05:37:30.677984 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 13 05:37:30.678178 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 13 05:37:30.680540 systemd[1]: Stopped target network.target - Network. Oct 13 05:37:30.683488 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 13 05:37:30.683557 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 13 05:37:30.688741 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 13 05:37:30.688832 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 13 05:37:30.693563 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 13 05:37:30.693674 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 13 05:37:30.698815 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 13 05:37:30.698887 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 13 05:37:30.700910 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 13 05:37:30.704585 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 13 05:37:30.720218 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 13 05:37:30.720415 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 13 05:37:30.727407 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 13 05:37:30.727541 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 13 05:37:30.735371 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 13 05:37:30.735538 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 13 05:37:30.735589 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:37:30.741919 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 13 05:37:30.742841 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 13 05:37:30.742916 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 05:37:30.747033 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Oct 13 05:37:30.747092 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:37:30.751522 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 13 05:37:30.751583 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 13 05:37:30.753583 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:37:30.772644 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 13 05:37:30.772841 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 13 05:37:30.777767 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 13 05:37:30.777877 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 13 05:37:30.795676 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 13 05:37:30.805800 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:37:30.808529 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 13 05:37:30.808577 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 13 05:37:30.812459 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 13 05:37:30.812502 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:37:30.814234 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 13 05:37:30.814292 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 13 05:37:30.820762 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 13 05:37:30.820818 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 13 05:37:30.825841 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 13 05:37:30.825908 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 05:37:30.832040 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 13 05:37:30.834819 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 13 05:37:30.834887 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:37:30.838540 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 13 05:37:30.838591 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:37:30.840787 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 13 05:37:30.840840 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 05:37:30.846646 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 13 05:37:30.846701 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:37:30.850427 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:37:30.850486 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:37:30.853330 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 13 05:37:30.862749 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 13 05:37:30.870379 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 13 05:37:30.870521 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 13 05:37:30.874376 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Oct 13 05:37:30.880474 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 13 05:37:30.907508 systemd[1]: Switching root. Oct 13 05:37:30.943748 systemd-journald[330]: Journal stopped Oct 13 05:37:32.389893 systemd-journald[330]: Received SIGTERM from PID 1 (systemd). Oct 13 05:37:32.389962 kernel: SELinux: policy capability network_peer_controls=1 Oct 13 05:37:32.389980 kernel: SELinux: policy capability open_perms=1 Oct 13 05:37:32.389996 kernel: SELinux: policy capability extended_socket_class=1 Oct 13 05:37:32.390009 kernel: SELinux: policy capability always_check_network=0 Oct 13 05:37:32.390021 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 13 05:37:32.390036 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 13 05:37:32.390048 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 13 05:37:32.390060 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 13 05:37:32.390072 kernel: SELinux: policy capability userspace_initial_context=0 Oct 13 05:37:32.390084 kernel: audit: type=1403 audit(1760333851.489:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 13 05:37:32.390097 systemd[1]: Successfully loaded SELinux policy in 71.878ms. Oct 13 05:37:32.390120 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.024ms. Oct 13 05:37:32.390136 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 05:37:32.390150 systemd[1]: Detected virtualization kvm. Oct 13 05:37:32.390163 systemd[1]: Detected architecture x86-64. Oct 13 05:37:32.390176 systemd[1]: Detected first boot. Oct 13 05:37:32.390188 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 13 05:37:32.390201 zram_generator::config[1185]: No configuration found. Oct 13 05:37:32.390218 kernel: Guest personality initialized and is inactive Oct 13 05:37:32.390231 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 13 05:37:32.390243 kernel: Initialized host personality Oct 13 05:37:32.390256 kernel: NET: Registered PF_VSOCK protocol family Oct 13 05:37:32.390269 systemd[1]: Populated /etc with preset unit settings. Oct 13 05:37:32.390282 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 13 05:37:32.390294 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 13 05:37:32.390310 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 13 05:37:32.390327 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 13 05:37:32.390341 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 13 05:37:32.390353 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 13 05:37:32.390366 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 13 05:37:32.390379 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 13 05:37:32.390392 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 13 05:37:32.390407 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 13 05:37:32.390420 systemd[1]: Created slice user.slice - User and Session Slice. 
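The systemd banner above encodes the build-time feature set as +FLAG/-FLAG tokens. Purely for readability, a small Python snippet that splits that string (copied verbatim from the line above) into enabled and disabled build options:

    # Feature string printed by systemd 257.7 at boot, copied from the journal.
    features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP "
                "-GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 "
                "+IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS "
                "+LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 "
                "+LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP "
                "-SYSVINIT +LIBARCHIVE")

    enabled  = sorted(f[1:] for f in features.split() if f.startswith("+"))
    disabled = sorted(f[1:] for f in features.split() if f.startswith("-"))
    print("enabled: ", ", ".join(enabled))
    print("disabled:", ", ".join(disabled))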
Oct 13 05:37:32.390433 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:37:32.390446 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:37:32.390464 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 13 05:37:32.390477 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 13 05:37:32.390491 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 13 05:37:32.390507 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 05:37:32.390520 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 13 05:37:32.390533 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:37:32.390546 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:37:32.390565 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 13 05:37:32.390577 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 13 05:37:32.390593 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 13 05:37:32.390606 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 13 05:37:32.390619 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:37:32.390647 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 05:37:32.390660 systemd[1]: Reached target slices.target - Slice Units. Oct 13 05:37:32.390673 systemd[1]: Reached target swap.target - Swaps. Oct 13 05:37:32.390685 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 13 05:37:32.390699 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 13 05:37:32.390715 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 13 05:37:32.390727 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:37:32.390740 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 05:37:32.390752 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:37:32.390765 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 13 05:37:32.390778 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 13 05:37:32.390791 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 13 05:37:32.390814 systemd[1]: Mounting media.mount - External Media Directory... Oct 13 05:37:32.390828 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:37:32.390840 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 13 05:37:32.390853 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 13 05:37:32.390867 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 13 05:37:32.390880 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 13 05:37:32.390896 systemd[1]: Reached target machines.target - Containers. 
Oct 13 05:37:32.390909 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 13 05:37:32.390922 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:37:32.390935 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 05:37:32.390948 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 13 05:37:32.390961 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:37:32.390973 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 05:37:32.390988 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:37:32.391001 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 13 05:37:32.391014 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:37:32.391027 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 13 05:37:32.391040 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 13 05:37:32.391053 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 13 05:37:32.391065 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 13 05:37:32.391080 systemd[1]: Stopped systemd-fsck-usr.service. Oct 13 05:37:32.391094 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:37:32.391106 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 05:37:32.391119 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 05:37:32.391132 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 05:37:32.391146 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 13 05:37:32.391159 kernel: ACPI: bus type drm_connector registered Oct 13 05:37:32.391173 kernel: fuse: init (API version 7.41) Oct 13 05:37:32.391186 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 13 05:37:32.391199 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 05:37:32.391212 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:37:32.391227 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 13 05:37:32.391258 systemd-journald[1263]: Collecting audit messages is disabled. Oct 13 05:37:32.391285 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 13 05:37:32.391298 systemd-journald[1263]: Journal started Oct 13 05:37:32.391320 systemd-journald[1263]: Runtime Journal (/run/log/journal/b97031c532454d9bae01b892123c3642) is 6M, max 48.6M, 42.5M free. Oct 13 05:37:32.393042 systemd[1]: Mounted media.mount - External Media Directory. Oct 13 05:37:32.076509 systemd[1]: Queued start job for default target multi-user.target. Oct 13 05:37:32.098218 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 13 05:37:32.098890 systemd[1]: systemd-journald.service: Deactivated successfully. 
Oct 13 05:37:32.397503 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 05:37:32.400305 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 13 05:37:32.402225 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 13 05:37:32.404181 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 13 05:37:32.406377 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 13 05:37:32.408715 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:37:32.410990 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 13 05:37:32.411213 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 13 05:37:32.413372 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:37:32.413594 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:37:32.416015 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 05:37:32.416235 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 05:37:32.418211 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:37:32.418427 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:37:32.420685 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 13 05:37:32.420914 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 13 05:37:32.422942 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:37:32.423153 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:37:32.425228 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 05:37:32.427424 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:37:32.430641 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 13 05:37:32.433102 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 13 05:37:32.449349 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 05:37:32.451768 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 13 05:37:32.454970 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 13 05:37:32.457733 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 13 05:37:32.459507 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 13 05:37:32.459535 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 05:37:32.462088 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 13 05:37:32.464230 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:37:32.467927 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 13 05:37:32.471422 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 13 05:37:32.473414 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 05:37:32.475505 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Oct 13 05:37:32.477326 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 05:37:32.478773 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 05:37:32.484518 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 13 05:37:32.485802 systemd-journald[1263]: Time spent on flushing to /var/log/journal/b97031c532454d9bae01b892123c3642 is 13.297ms for 978 entries. Oct 13 05:37:32.485802 systemd-journald[1263]: System Journal (/var/log/journal/b97031c532454d9bae01b892123c3642) is 8M, max 163.5M, 155.5M free. Oct 13 05:37:32.518968 systemd-journald[1263]: Received client request to flush runtime journal. Oct 13 05:37:32.490936 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 05:37:32.494718 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 13 05:37:32.506943 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:37:32.509645 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 13 05:37:32.512374 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 13 05:37:32.516922 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:37:32.520655 kernel: loop1: detected capacity change from 0 to 110984 Oct 13 05:37:32.520956 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 13 05:37:32.526406 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 13 05:37:32.530042 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 13 05:37:32.541022 systemd-tmpfiles[1305]: ACLs are not supported, ignoring. Oct 13 05:37:32.541041 systemd-tmpfiles[1305]: ACLs are not supported, ignoring. Oct 13 05:37:32.546150 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 05:37:32.550132 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 13 05:37:32.557666 kernel: loop2: detected capacity change from 0 to 128048 Oct 13 05:37:32.567940 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 13 05:37:32.586884 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 13 05:37:32.590649 kernel: loop3: detected capacity change from 0 to 219144 Oct 13 05:37:32.592698 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 05:37:32.597782 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 05:37:32.606684 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 13 05:37:32.612648 kernel: loop4: detected capacity change from 0 to 110984 Oct 13 05:37:32.622076 systemd-tmpfiles[1325]: ACLs are not supported, ignoring. Oct 13 05:37:32.622485 systemd-tmpfiles[1325]: ACLs are not supported, ignoring. Oct 13 05:37:32.624660 kernel: loop5: detected capacity change from 0 to 128048 Oct 13 05:37:32.628332 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:37:32.638651 kernel: loop6: detected capacity change from 0 to 219144 Oct 13 05:37:32.645652 (sd-merge)[1329]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. 
Oct 13 05:37:32.650525 (sd-merge)[1329]: Merged extensions into '/usr'. Oct 13 05:37:32.655993 systemd[1]: Reload requested from client PID 1304 ('systemd-sysext') (unit systemd-sysext.service)... Oct 13 05:37:32.656014 systemd[1]: Reloading... Oct 13 05:37:32.720664 zram_generator::config[1363]: No configuration found. Oct 13 05:37:32.770746 systemd-resolved[1324]: Positive Trust Anchors: Oct 13 05:37:32.770763 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 05:37:32.770768 systemd-resolved[1324]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 13 05:37:32.770807 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 05:37:32.775428 systemd-resolved[1324]: Defaulting to hostname 'linux'. Oct 13 05:37:32.904025 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 13 05:37:32.904337 systemd[1]: Reloading finished in 247 ms. Oct 13 05:37:32.941293 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 13 05:37:32.943485 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 05:37:32.945715 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 13 05:37:32.950402 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:37:32.973121 systemd[1]: Starting ensure-sysext.service... Oct 13 05:37:32.975999 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 05:37:32.994486 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 13 05:37:32.994538 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 13 05:37:32.995119 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 13 05:37:32.995488 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 13 05:37:32.996833 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 13 05:37:32.997225 systemd-tmpfiles[1400]: ACLs are not supported, ignoring. Oct 13 05:37:32.997328 systemd-tmpfiles[1400]: ACLs are not supported, ignoring. Oct 13 05:37:32.999958 systemd[1]: Reload requested from client PID 1399 ('systemctl') (unit ensure-sysext.service)... Oct 13 05:37:32.999984 systemd[1]: Reloading... Oct 13 05:37:33.003884 systemd-tmpfiles[1400]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:37:33.003898 systemd-tmpfiles[1400]: Skipping /boot Oct 13 05:37:33.015555 systemd-tmpfiles[1400]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:37:33.015569 systemd-tmpfiles[1400]: Skipping /boot Oct 13 05:37:33.082728 zram_generator::config[1430]: No configuration found. 
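The sd-merge lines above show systemd-sysext overlaying containerd-flatcar.raw, docker-flatcar.raw, and kubernetes.raw onto /usr; the kubernetes.raw symlink was written by the Ignition files stage earlier. A hedged sketch for enumerating such extension images on a running system; the directory list is an assumption about where sysext images commonly live (only /etc/extensions is actually visible in this log):

    from pathlib import Path

    # Common locations for systemd-sysext extension images (assumed; adjust
    # for the systemd version and distro layout in use).
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in map(Path, SEARCH_DIRS):
        if not d.is_dir():
            continue
        for image in sorted(d.glob("*.raw")):
            # Symlinks such as /etc/extensions/kubernetes.raw (created by the
            # Ignition files stage) resolve to the real image file here.
            print(f"{image} -> {image.resolve()}")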
Oct 13 05:37:33.307710 systemd[1]: Reloading finished in 307 ms. Oct 13 05:37:33.329758 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 13 05:37:33.359325 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:37:33.370936 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 05:37:33.373906 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 13 05:37:33.389509 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 13 05:37:33.394055 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 13 05:37:33.399929 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:37:33.404920 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 13 05:37:33.411342 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:37:33.411514 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:37:33.413177 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:37:33.418186 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:37:33.435723 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:37:33.437934 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:37:33.438077 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:37:33.438207 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:37:33.441439 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:37:33.441777 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:37:33.446696 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 13 05:37:33.453686 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:37:33.454906 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:37:33.458047 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:37:33.458313 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:37:33.469281 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 05:37:33.469717 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 05:37:33.471435 systemd-udevd[1474]: Using default interface naming scheme 'v257'. Oct 13 05:37:33.473117 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 13 05:37:33.479904 augenrules[1502]: No rules Oct 13 05:37:33.481713 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:37:33.482412 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Oct 13 05:37:33.488530 systemd[1]: Finished ensure-sysext.service. Oct 13 05:37:33.493492 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:37:33.494391 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:37:33.495853 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:37:33.499621 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 05:37:33.508092 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:37:33.512052 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:37:33.514359 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:37:33.514404 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:37:33.517258 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 13 05:37:33.519955 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:37:33.521607 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:37:33.522221 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:37:33.527009 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:37:33.530024 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 13 05:37:33.533270 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 05:37:33.533562 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 05:37:33.535832 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:37:33.536824 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:37:33.539528 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:37:33.539802 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:37:33.560056 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 05:37:33.562794 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 05:37:33.562888 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 05:37:33.562927 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 13 05:37:33.667667 kernel: mousedev: PS/2 mouse device common for all mice Oct 13 05:37:33.671129 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 13 05:37:33.673562 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 13 05:37:33.674838 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Oct 13 05:37:33.678885 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 13 05:37:33.681458 systemd[1]: Reached target time-set.target - System Time Set. Oct 13 05:37:33.682525 systemd-networkd[1538]: lo: Link UP Oct 13 05:37:33.682540 systemd-networkd[1538]: lo: Gained carrier Oct 13 05:37:33.684792 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 05:37:33.685033 systemd-networkd[1538]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:37:33.685038 systemd-networkd[1538]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 05:37:33.685809 systemd-networkd[1538]: eth0: Link UP Oct 13 05:37:33.686033 systemd-networkd[1538]: eth0: Gained carrier Oct 13 05:37:33.686050 systemd-networkd[1538]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:37:33.687059 systemd[1]: Reached target network.target - Network. Oct 13 05:37:33.690424 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 13 05:37:33.694210 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 13 05:37:33.702710 systemd-networkd[1538]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 13 05:37:33.703608 systemd-timesyncd[1513]: Network configuration changed, trying to establish connection. Oct 13 05:37:34.534026 systemd-timesyncd[1513]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 13 05:37:34.534165 systemd-timesyncd[1513]: Initial clock synchronization to Mon 2025-10-13 05:37:34.532136 UTC. Oct 13 05:37:34.535110 systemd-resolved[1324]: Clock change detected. Flushing caches. Oct 13 05:37:34.547822 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 13 05:37:34.553862 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 13 05:37:34.554201 kernel: ACPI: button: Power Button [PWRF] Oct 13 05:37:34.554240 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 13 05:37:34.557206 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 13 05:37:34.560261 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 13 05:37:34.599516 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:37:34.752009 kernel: kvm_amd: TSC scaling supported Oct 13 05:37:34.752092 kernel: kvm_amd: Nested Virtualization enabled Oct 13 05:37:34.752143 kernel: kvm_amd: Nested Paging enabled Oct 13 05:37:34.752869 kernel: kvm_amd: LBR virtualization supported Oct 13 05:37:34.754597 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 13 05:37:34.754625 kernel: kvm_amd: Virtual GIF supported Oct 13 05:37:34.786866 kernel: EDAC MC: Ver: 3.0.0 Oct 13 05:37:34.842384 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:37:34.848998 ldconfig[1471]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 13 05:37:34.855799 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 13 05:37:34.859243 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
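The DHCPv4 lease above (10.0.0.130/16 with gateway 10.0.0.1) can be sanity-checked with Python's ipaddress module; this is only a worked example of the addressing shown in the log, not something run on the host:

    import ipaddress

    # Lease reported by systemd-networkd: address 10.0.0.130/16, gateway 10.0.0.1.
    iface = ipaddress.ip_interface("10.0.0.130/16")
    gateway = ipaddress.ip_address("10.0.0.1")

    print(iface.network)                    # 10.0.0.0/16
    print(iface.network.broadcast_address)  # 10.0.255.255
    print(gateway in iface.network)         # True: the gateway is on-link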
Oct 13 05:37:34.891757 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 13 05:37:34.893965 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:37:34.896034 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 13 05:37:34.898147 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 13 05:37:34.900222 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 13 05:37:34.902515 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 13 05:37:34.904462 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 13 05:37:34.906539 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 13 05:37:34.908642 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 13 05:37:34.908676 systemd[1]: Reached target paths.target - Path Units. Oct 13 05:37:34.910256 systemd[1]: Reached target timers.target - Timer Units. Oct 13 05:37:34.912935 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 13 05:37:34.916717 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 13 05:37:34.920703 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 13 05:37:34.922994 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 13 05:37:34.925363 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 13 05:37:34.932077 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 13 05:37:34.934057 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 13 05:37:34.936505 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 13 05:37:34.938952 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 05:37:34.940523 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:37:34.942164 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 13 05:37:34.942195 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 13 05:37:34.943350 systemd[1]: Starting containerd.service - containerd container runtime... Oct 13 05:37:34.946101 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 13 05:37:34.948691 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 13 05:37:34.952014 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 13 05:37:34.955021 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 13 05:37:34.957729 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 13 05:37:34.960101 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 13 05:37:34.961807 jq[1592]: false Oct 13 05:37:34.963396 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 13 05:37:34.967928 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Oct 13 05:37:34.970660 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 13 05:37:34.971283 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Refreshing passwd entry cache Oct 13 05:37:34.971121 oslogin_cache_refresh[1594]: Refreshing passwd entry cache Oct 13 05:37:34.976621 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 13 05:37:34.979270 extend-filesystems[1593]: Found /dev/vda6 Oct 13 05:37:34.981064 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Failure getting users, quitting Oct 13 05:37:34.981064 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 13 05:37:34.981064 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Refreshing group entry cache Oct 13 05:37:34.979469 oslogin_cache_refresh[1594]: Failure getting users, quitting Oct 13 05:37:34.979491 oslogin_cache_refresh[1594]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 13 05:37:34.979564 oslogin_cache_refresh[1594]: Refreshing group entry cache Oct 13 05:37:34.981611 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 13 05:37:34.983349 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 13 05:37:34.984019 extend-filesystems[1593]: Found /dev/vda9 Oct 13 05:37:34.983809 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 13 05:37:34.985523 systemd[1]: Starting update-engine.service - Update Engine... Oct 13 05:37:34.988449 oslogin_cache_refresh[1594]: Failure getting groups, quitting Oct 13 05:37:34.988612 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Failure getting groups, quitting Oct 13 05:37:34.988612 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 13 05:37:34.988460 oslogin_cache_refresh[1594]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 13 05:37:34.989288 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 13 05:37:34.995105 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 13 05:37:34.995794 extend-filesystems[1593]: Checking size of /dev/vda9 Oct 13 05:37:35.003102 jq[1608]: true Oct 13 05:37:34.997501 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 13 05:37:34.997760 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 13 05:37:34.998117 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 13 05:37:34.998356 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 13 05:37:35.001878 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 13 05:37:35.002249 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 13 05:37:35.006792 systemd[1]: motdgen.service: Deactivated successfully. Oct 13 05:37:35.007813 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Oct 13 05:37:35.024467 update_engine[1607]: I20251013 05:37:35.023808 1607 main.cc:92] Flatcar Update Engine starting Oct 13 05:37:35.028170 (ntainerd)[1626]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 13 05:37:35.030244 extend-filesystems[1593]: Resized partition /dev/vda9 Oct 13 05:37:35.033519 jq[1625]: true Oct 13 05:37:35.033698 tar[1619]: linux-amd64/LICENSE Oct 13 05:37:35.033698 tar[1619]: linux-amd64/helm Oct 13 05:37:35.040048 extend-filesystems[1639]: resize2fs 1.47.3 (8-Jul-2025) Oct 13 05:37:35.051854 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 13 05:37:35.084880 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 13 05:37:35.115364 dbus-daemon[1590]: [system] SELinux support is enabled Oct 13 05:37:35.145702 update_engine[1607]: I20251013 05:37:35.118577 1607 update_check_scheduler.cc:74] Next update check in 8m56s Oct 13 05:37:35.115939 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 13 05:37:35.145810 extend-filesystems[1639]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 13 05:37:35.145810 extend-filesystems[1639]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 13 05:37:35.145810 extend-filesystems[1639]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 13 05:37:35.120102 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 13 05:37:35.172100 bash[1657]: Updated "/home/core/.ssh/authorized_keys" Oct 13 05:37:35.172208 extend-filesystems[1593]: Resized filesystem in /dev/vda9 Oct 13 05:37:35.120125 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 13 05:37:35.124058 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 13 05:37:35.124076 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 13 05:37:35.126415 systemd[1]: Started update-engine.service - Update Engine. Oct 13 05:37:35.129652 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 13 05:37:35.144282 systemd-logind[1604]: Watching system buttons on /dev/input/event2 (Power Button) Oct 13 05:37:35.144305 systemd-logind[1604]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 13 05:37:35.145063 systemd-logind[1604]: New seat seat0. Oct 13 05:37:35.153615 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 13 05:37:35.153937 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 13 05:37:35.164702 systemd[1]: Started systemd-logind.service - User Login Management. Oct 13 05:37:35.165234 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 13 05:37:35.172040 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Oct 13 05:37:35.182270 locksmithd[1658]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 13 05:37:35.292311 containerd[1626]: time="2025-10-13T05:37:35Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 13 05:37:35.293377 containerd[1626]: time="2025-10-13T05:37:35.293314344Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 13 05:37:35.305922 containerd[1626]: time="2025-10-13T05:37:35.305284498Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.468µs" Oct 13 05:37:35.305922 containerd[1626]: time="2025-10-13T05:37:35.305330144Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 13 05:37:35.305922 containerd[1626]: time="2025-10-13T05:37:35.305353928Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 13 05:37:35.305922 containerd[1626]: time="2025-10-13T05:37:35.305558191Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 13 05:37:35.305922 containerd[1626]: time="2025-10-13T05:37:35.305573049Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 13 05:37:35.305922 containerd[1626]: time="2025-10-13T05:37:35.305599428Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 05:37:35.305922 containerd[1626]: time="2025-10-13T05:37:35.305660663Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 05:37:35.305922 containerd[1626]: time="2025-10-13T05:37:35.305672345Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 05:37:35.306230 containerd[1626]: time="2025-10-13T05:37:35.306209663Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 05:37:35.306278 containerd[1626]: time="2025-10-13T05:37:35.306266850Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 05:37:35.306334 containerd[1626]: time="2025-10-13T05:37:35.306320450Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 05:37:35.306376 containerd[1626]: time="2025-10-13T05:37:35.306365996Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 13 05:37:35.306537 containerd[1626]: time="2025-10-13T05:37:35.306519143Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 13 05:37:35.306827 containerd[1626]: time="2025-10-13T05:37:35.306808215Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 05:37:35.306935 containerd[1626]: time="2025-10-13T05:37:35.306919484Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 05:37:35.306989 containerd[1626]: time="2025-10-13T05:37:35.306977743Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 13 05:37:35.307080 containerd[1626]: time="2025-10-13T05:37:35.307065557Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 13 05:37:35.307482 containerd[1626]: time="2025-10-13T05:37:35.307419671Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 13 05:37:35.307622 containerd[1626]: time="2025-10-13T05:37:35.307598477Z" level=info msg="metadata content store policy set" policy=shared Oct 13 05:37:35.391452 tar[1619]: linux-amd64/README.md Oct 13 05:37:35.413246 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 13 05:37:35.516809 sshd_keygen[1623]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 13 05:37:35.540875 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 13 05:37:35.543476 containerd[1626]: time="2025-10-13T05:37:35.543188989Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 13 05:37:35.543476 containerd[1626]: time="2025-10-13T05:37:35.543288495Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 13 05:37:35.543476 containerd[1626]: time="2025-10-13T05:37:35.543307050Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 13 05:37:35.543476 containerd[1626]: time="2025-10-13T05:37:35.543322028Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 13 05:37:35.543476 containerd[1626]: time="2025-10-13T05:37:35.543339391Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 13 05:37:35.543476 containerd[1626]: time="2025-10-13T05:37:35.543352335Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 13 05:37:35.543476 containerd[1626]: time="2025-10-13T05:37:35.543370800Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 13 05:37:35.543476 containerd[1626]: time="2025-10-13T05:37:35.543385277Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 13 05:37:35.543476 containerd[1626]: time="2025-10-13T05:37:35.543399423Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 13 05:37:35.543476 containerd[1626]: time="2025-10-13T05:37:35.543411486Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 13 05:37:35.543476 containerd[1626]: time="2025-10-13T05:37:35.543423989Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 13 05:37:35.543476 containerd[1626]: time="2025-10-13T05:37:35.543439689Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 13 05:37:35.543865 containerd[1626]: time="2025-10-13T05:37:35.543611882Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 13 05:37:35.543865 
containerd[1626]: time="2025-10-13T05:37:35.543636558Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 13 05:37:35.543865 containerd[1626]: time="2025-10-13T05:37:35.543663218Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 13 05:37:35.543865 containerd[1626]: time="2025-10-13T05:37:35.543679238Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 13 05:37:35.543865 containerd[1626]: time="2025-10-13T05:37:35.543692874Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 13 05:37:35.543865 containerd[1626]: time="2025-10-13T05:37:35.543738189Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 13 05:37:35.543865 containerd[1626]: time="2025-10-13T05:37:35.543757505Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 13 05:37:35.543865 containerd[1626]: time="2025-10-13T05:37:35.543768746Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 13 05:37:35.543865 containerd[1626]: time="2025-10-13T05:37:35.543781179Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 13 05:37:35.543865 containerd[1626]: time="2025-10-13T05:37:35.543794434Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 13 05:37:35.543865 containerd[1626]: time="2025-10-13T05:37:35.543808340Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 13 05:37:35.544080 containerd[1626]: time="2025-10-13T05:37:35.543971877Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 13 05:37:35.544080 containerd[1626]: time="2025-10-13T05:37:35.543992716Z" level=info msg="Start snapshots syncer" Oct 13 05:37:35.544080 containerd[1626]: time="2025-10-13T05:37:35.544032120Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 13 05:37:35.544374 containerd[1626]: time="2025-10-13T05:37:35.544319599Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 13 05:37:35.544374 containerd[1626]: time="2025-10-13T05:37:35.544389931Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 13 05:37:35.544601 containerd[1626]: time="2025-10-13T05:37:35.544474289Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 13 05:37:35.544624 containerd[1626]: time="2025-10-13T05:37:35.544595436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 13 05:37:35.544645 containerd[1626]: time="2025-10-13T05:37:35.544626805Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 13 05:37:35.544666 containerd[1626]: time="2025-10-13T05:37:35.544642444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 13 05:37:35.544666 containerd[1626]: time="2025-10-13T05:37:35.544655649Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 13 05:37:35.544709 containerd[1626]: time="2025-10-13T05:37:35.544670086Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 13 05:37:35.544709 containerd[1626]: time="2025-10-13T05:37:35.544682369Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 13 05:37:35.544689 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Oct 13 05:37:35.546244 containerd[1626]: time="2025-10-13T05:37:35.544708167Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 13 05:37:35.546244 containerd[1626]: time="2025-10-13T05:37:35.544736480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 13 05:37:35.546244 containerd[1626]: time="2025-10-13T05:37:35.544747691Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 13 05:37:35.546244 containerd[1626]: time="2025-10-13T05:37:35.544760706Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 13 05:37:35.546244 containerd[1626]: time="2025-10-13T05:37:35.544802624Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:37:35.546244 containerd[1626]: time="2025-10-13T05:37:35.544824085Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:37:35.546244 containerd[1626]: time="2025-10-13T05:37:35.544870672Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:37:35.546244 containerd[1626]: time="2025-10-13T05:37:35.544883065Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:37:35.546244 containerd[1626]: time="2025-10-13T05:37:35.544892373Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 13 05:37:35.546244 containerd[1626]: time="2025-10-13T05:37:35.544916738Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 13 05:37:35.546543 containerd[1626]: time="2025-10-13T05:37:35.546293339Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 13 05:37:35.546543 containerd[1626]: time="2025-10-13T05:37:35.546334597Z" level=info msg="runtime interface created" Oct 13 05:37:35.546543 containerd[1626]: time="2025-10-13T05:37:35.546351058Z" level=info msg="created NRI interface" Oct 13 05:37:35.546543 containerd[1626]: time="2025-10-13T05:37:35.546364653Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 13 05:37:35.546543 containerd[1626]: time="2025-10-13T05:37:35.546385733Z" level=info msg="Connect containerd service" Oct 13 05:37:35.546543 containerd[1626]: time="2025-10-13T05:37:35.546428994Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 13 05:37:35.547418 containerd[1626]: time="2025-10-13T05:37:35.547380939Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 05:37:35.564432 systemd[1]: issuegen.service: Deactivated successfully. Oct 13 05:37:35.564756 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 13 05:37:35.568368 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 13 05:37:35.592561 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Oct 13 05:37:35.596444 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 13 05:37:35.599709 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 13 05:37:35.601630 systemd[1]: Reached target getty.target - Login Prompts. Oct 13 05:37:35.653851 containerd[1626]: time="2025-10-13T05:37:35.653769457Z" level=info msg="Start subscribing containerd event" Oct 13 05:37:35.653965 containerd[1626]: time="2025-10-13T05:37:35.653878472Z" level=info msg="Start recovering state" Oct 13 05:37:35.654013 containerd[1626]: time="2025-10-13T05:37:35.654000570Z" level=info msg="Start event monitor" Oct 13 05:37:35.654053 containerd[1626]: time="2025-10-13T05:37:35.654005991Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 13 05:37:35.654128 containerd[1626]: time="2025-10-13T05:37:35.654094426Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 13 05:37:35.654128 containerd[1626]: time="2025-10-13T05:37:35.654020167Z" level=info msg="Start cni network conf syncer for default" Oct 13 05:37:35.654128 containerd[1626]: time="2025-10-13T05:37:35.654125074Z" level=info msg="Start streaming server" Oct 13 05:37:35.654128 containerd[1626]: time="2025-10-13T05:37:35.654138990Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 13 05:37:35.654128 containerd[1626]: time="2025-10-13T05:37:35.654146314Z" level=info msg="runtime interface starting up..." Oct 13 05:37:35.654128 containerd[1626]: time="2025-10-13T05:37:35.654152415Z" level=info msg="starting plugins..." Oct 13 05:37:35.654366 containerd[1626]: time="2025-10-13T05:37:35.654177041Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 13 05:37:35.654366 containerd[1626]: time="2025-10-13T05:37:35.654323646Z" level=info msg="containerd successfully booted in 0.362847s" Oct 13 05:37:35.654525 systemd[1]: Started containerd.service - containerd container runtime. Oct 13 05:37:36.015114 systemd-networkd[1538]: eth0: Gained IPv6LL Oct 13 05:37:36.018385 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 13 05:37:36.021061 systemd[1]: Reached target network-online.target - Network is Online. Oct 13 05:37:36.024392 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 13 05:37:36.027473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:37:36.030315 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 13 05:37:36.069112 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 13 05:37:36.071580 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 13 05:37:36.071870 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 13 05:37:36.074629 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 13 05:37:36.726027 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:37:36.728363 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 13 05:37:36.730288 systemd[1]: Startup finished in 2.760s (kernel) + 7.436s (initrd) + 4.483s (userspace) = 14.680s. Oct 13 05:37:36.780188 (kubelet)[1730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:37:37.035341 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Oct 13 05:37:37.036639 systemd[1]: Started sshd@0-10.0.0.130:22-10.0.0.1:59832.service - OpenSSH per-connection server daemon (10.0.0.1:59832). Oct 13 05:37:37.124134 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 59832 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:37:37.126551 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:37:37.135370 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 13 05:37:37.136824 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 13 05:37:37.144030 systemd-logind[1604]: New session 1 of user core. Oct 13 05:37:37.162915 kubelet[1730]: E1013 05:37:37.162736 1730 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:37:37.164213 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 13 05:37:37.167110 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 13 05:37:37.167395 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:37:37.167627 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:37:37.168061 systemd[1]: kubelet.service: Consumed 935ms CPU time, 256.5M memory peak. Oct 13 05:37:37.197817 (systemd)[1748]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 13 05:37:37.200468 systemd-logind[1604]: New session c1 of user core. Oct 13 05:37:37.349773 systemd[1748]: Queued start job for default target default.target. Oct 13 05:37:37.365147 systemd[1748]: Created slice app.slice - User Application Slice. Oct 13 05:37:37.365179 systemd[1748]: Reached target paths.target - Paths. Oct 13 05:37:37.365245 systemd[1748]: Reached target timers.target - Timers. Oct 13 05:37:37.366912 systemd[1748]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 13 05:37:37.378984 systemd[1748]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 13 05:37:37.379117 systemd[1748]: Reached target sockets.target - Sockets. Oct 13 05:37:37.379161 systemd[1748]: Reached target basic.target - Basic System. Oct 13 05:37:37.379216 systemd[1748]: Reached target default.target - Main User Target. Oct 13 05:37:37.379258 systemd[1748]: Startup finished in 169ms. Oct 13 05:37:37.379609 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 13 05:37:37.381209 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 13 05:37:37.445155 systemd[1]: Started sshd@1-10.0.0.130:22-10.0.0.1:59846.service - OpenSSH per-connection server daemon (10.0.0.1:59846). Oct 13 05:37:37.504505 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 59846 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:37:37.505864 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:37:37.510481 systemd-logind[1604]: New session 2 of user core. Oct 13 05:37:37.521972 systemd[1]: Started session-2.scope - Session 2 of User core. 
Oct 13 05:37:37.575054 sshd[1763]: Connection closed by 10.0.0.1 port 59846 Oct 13 05:37:37.575314 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Oct 13 05:37:37.591624 systemd[1]: sshd@1-10.0.0.130:22-10.0.0.1:59846.service: Deactivated successfully. Oct 13 05:37:37.593474 systemd[1]: session-2.scope: Deactivated successfully. Oct 13 05:37:37.594233 systemd-logind[1604]: Session 2 logged out. Waiting for processes to exit. Oct 13 05:37:37.596726 systemd[1]: Started sshd@2-10.0.0.130:22-10.0.0.1:59852.service - OpenSSH per-connection server daemon (10.0.0.1:59852). Oct 13 05:37:37.597484 systemd-logind[1604]: Removed session 2. Oct 13 05:37:37.659206 sshd[1769]: Accepted publickey for core from 10.0.0.1 port 59852 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:37:37.660725 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:37:37.665027 systemd-logind[1604]: New session 3 of user core. Oct 13 05:37:37.682993 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 13 05:37:37.732391 sshd[1772]: Connection closed by 10.0.0.1 port 59852 Oct 13 05:37:37.732678 sshd-session[1769]: pam_unix(sshd:session): session closed for user core Oct 13 05:37:37.748558 systemd[1]: sshd@2-10.0.0.130:22-10.0.0.1:59852.service: Deactivated successfully. Oct 13 05:37:37.750570 systemd[1]: session-3.scope: Deactivated successfully. Oct 13 05:37:37.751451 systemd-logind[1604]: Session 3 logged out. Waiting for processes to exit. Oct 13 05:37:37.754277 systemd[1]: Started sshd@3-10.0.0.130:22-10.0.0.1:59854.service - OpenSSH per-connection server daemon (10.0.0.1:59854). Oct 13 05:37:37.755092 systemd-logind[1604]: Removed session 3. Oct 13 05:37:37.819301 sshd[1778]: Accepted publickey for core from 10.0.0.1 port 59854 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:37:37.820516 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:37:37.825240 systemd-logind[1604]: New session 4 of user core. Oct 13 05:37:37.834970 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 13 05:37:37.888997 sshd[1781]: Connection closed by 10.0.0.1 port 59854 Oct 13 05:37:37.889334 sshd-session[1778]: pam_unix(sshd:session): session closed for user core Oct 13 05:37:37.913524 systemd[1]: sshd@3-10.0.0.130:22-10.0.0.1:59854.service: Deactivated successfully. Oct 13 05:37:37.915427 systemd[1]: session-4.scope: Deactivated successfully. Oct 13 05:37:37.916267 systemd-logind[1604]: Session 4 logged out. Waiting for processes to exit. Oct 13 05:37:37.918790 systemd[1]: Started sshd@4-10.0.0.130:22-10.0.0.1:59860.service - OpenSSH per-connection server daemon (10.0.0.1:59860). Oct 13 05:37:37.919403 systemd-logind[1604]: Removed session 4. Oct 13 05:37:37.974095 sshd[1787]: Accepted publickey for core from 10.0.0.1 port 59860 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:37:37.975338 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:37:37.979868 systemd-logind[1604]: New session 5 of user core. Oct 13 05:37:38.000982 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 13 05:37:38.063020 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 13 05:37:38.063341 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:37:38.079352 sudo[1791]: pam_unix(sudo:session): session closed for user root Oct 13 05:37:38.081224 sshd[1790]: Connection closed by 10.0.0.1 port 59860 Oct 13 05:37:38.081579 sshd-session[1787]: pam_unix(sshd:session): session closed for user core Oct 13 05:37:38.097532 systemd[1]: sshd@4-10.0.0.130:22-10.0.0.1:59860.service: Deactivated successfully. Oct 13 05:37:38.099384 systemd[1]: session-5.scope: Deactivated successfully. Oct 13 05:37:38.100137 systemd-logind[1604]: Session 5 logged out. Waiting for processes to exit. Oct 13 05:37:38.103221 systemd[1]: Started sshd@5-10.0.0.130:22-10.0.0.1:59868.service - OpenSSH per-connection server daemon (10.0.0.1:59868). Oct 13 05:37:38.103728 systemd-logind[1604]: Removed session 5. Oct 13 05:37:38.160462 sshd[1797]: Accepted publickey for core from 10.0.0.1 port 59868 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:37:38.162304 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:37:38.167413 systemd-logind[1604]: New session 6 of user core. Oct 13 05:37:38.175983 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 13 05:37:38.233101 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 13 05:37:38.233402 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:37:38.241331 sudo[1802]: pam_unix(sudo:session): session closed for user root Oct 13 05:37:38.251259 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 13 05:37:38.251798 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:37:38.264215 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 05:37:38.320905 augenrules[1824]: No rules Oct 13 05:37:38.322527 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:37:38.322802 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 05:37:38.323996 sudo[1801]: pam_unix(sudo:session): session closed for user root Oct 13 05:37:38.325754 sshd[1800]: Connection closed by 10.0.0.1 port 59868 Oct 13 05:37:38.326148 sshd-session[1797]: pam_unix(sshd:session): session closed for user core Oct 13 05:37:38.334633 systemd[1]: sshd@5-10.0.0.130:22-10.0.0.1:59868.service: Deactivated successfully. Oct 13 05:37:38.336543 systemd[1]: session-6.scope: Deactivated successfully. Oct 13 05:37:38.337423 systemd-logind[1604]: Session 6 logged out. Waiting for processes to exit. Oct 13 05:37:38.340140 systemd[1]: Started sshd@6-10.0.0.130:22-10.0.0.1:59882.service - OpenSSH per-connection server daemon (10.0.0.1:59882). Oct 13 05:37:38.340882 systemd-logind[1604]: Removed session 6. Oct 13 05:37:38.402603 sshd[1833]: Accepted publickey for core from 10.0.0.1 port 59882 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:37:38.404282 sshd-session[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:37:38.408496 systemd-logind[1604]: New session 7 of user core. Oct 13 05:37:38.418965 systemd[1]: Started session-7.scope - Session 7 of User core. 
Oct 13 05:37:38.475706 sudo[1837]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 13 05:37:38.476298 sudo[1837]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:37:38.847943 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 13 05:37:38.870158 (dockerd)[1857]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 13 05:37:39.140679 dockerd[1857]: time="2025-10-13T05:37:39.140540352Z" level=info msg="Starting up" Oct 13 05:37:39.141259 dockerd[1857]: time="2025-10-13T05:37:39.141232329Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 13 05:37:39.152287 dockerd[1857]: time="2025-10-13T05:37:39.152244617Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 13 05:37:40.010395 dockerd[1857]: time="2025-10-13T05:37:40.010300161Z" level=info msg="Loading containers: start." Oct 13 05:37:40.021899 kernel: Initializing XFRM netlink socket Oct 13 05:37:40.893147 systemd-networkd[1538]: docker0: Link UP Oct 13 05:37:40.898567 dockerd[1857]: time="2025-10-13T05:37:40.898496689Z" level=info msg="Loading containers: done." Oct 13 05:37:40.918601 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1744081796-merged.mount: Deactivated successfully. Oct 13 05:37:40.927740 dockerd[1857]: time="2025-10-13T05:37:40.927652346Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 13 05:37:40.927824 dockerd[1857]: time="2025-10-13T05:37:40.927802658Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 13 05:37:40.927999 dockerd[1857]: time="2025-10-13T05:37:40.927966866Z" level=info msg="Initializing buildkit" Oct 13 05:37:40.965505 dockerd[1857]: time="2025-10-13T05:37:40.965439617Z" level=info msg="Completed buildkit initialization" Oct 13 05:37:40.972367 dockerd[1857]: time="2025-10-13T05:37:40.972300562Z" level=info msg="Daemon has completed initialization" Oct 13 05:37:40.972565 dockerd[1857]: time="2025-10-13T05:37:40.972408464Z" level=info msg="API listen on /run/docker.sock" Oct 13 05:37:40.972688 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 13 05:37:41.854522 containerd[1626]: time="2025-10-13T05:37:41.854437448Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 13 05:37:42.408733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount35036726.mount: Deactivated successfully. 
Oct 13 05:37:43.798211 containerd[1626]: time="2025-10-13T05:37:43.798132486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:43.800402 containerd[1626]: time="2025-10-13T05:37:43.800368889Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Oct 13 05:37:43.801867 containerd[1626]: time="2025-10-13T05:37:43.801806174Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:43.805268 containerd[1626]: time="2025-10-13T05:37:43.805209345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:43.806435 containerd[1626]: time="2025-10-13T05:37:43.806393024Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.951902206s" Oct 13 05:37:43.806499 containerd[1626]: time="2025-10-13T05:37:43.806438720Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Oct 13 05:37:43.807204 containerd[1626]: time="2025-10-13T05:37:43.807162226Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 13 05:37:45.566553 containerd[1626]: time="2025-10-13T05:37:45.566476640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:45.567506 containerd[1626]: time="2025-10-13T05:37:45.567473259Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Oct 13 05:37:45.569156 containerd[1626]: time="2025-10-13T05:37:45.569122031Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:45.572753 containerd[1626]: time="2025-10-13T05:37:45.572676776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:45.573778 containerd[1626]: time="2025-10-13T05:37:45.573717126Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.766525996s" Oct 13 05:37:45.573829 containerd[1626]: time="2025-10-13T05:37:45.573777019Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Oct 13 05:37:45.574461 
containerd[1626]: time="2025-10-13T05:37:45.574352087Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 13 05:37:46.882671 containerd[1626]: time="2025-10-13T05:37:46.882567180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:46.883992 containerd[1626]: time="2025-10-13T05:37:46.883870193Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Oct 13 05:37:46.888352 containerd[1626]: time="2025-10-13T05:37:46.888292064Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:46.893240 containerd[1626]: time="2025-10-13T05:37:46.893195668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:46.894672 containerd[1626]: time="2025-10-13T05:37:46.894589652Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.320195326s" Oct 13 05:37:46.894672 containerd[1626]: time="2025-10-13T05:37:46.894649504Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Oct 13 05:37:46.895228 containerd[1626]: time="2025-10-13T05:37:46.895180239Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 13 05:37:47.418233 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 13 05:37:47.420177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:37:47.657763 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:37:47.662166 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:37:47.877473 kubelet[2150]: E1013 05:37:47.877290 2150 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:37:47.884011 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:37:47.884220 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:37:47.884691 systemd[1]: kubelet.service: Consumed 340ms CPU time, 110.5M memory peak. Oct 13 05:37:48.996432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2206175132.mount: Deactivated successfully. 
Oct 13 05:37:49.699482 containerd[1626]: time="2025-10-13T05:37:49.699400432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:49.700702 containerd[1626]: time="2025-10-13T05:37:49.700630738Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Oct 13 05:37:49.709129 containerd[1626]: time="2025-10-13T05:37:49.709081363Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:49.711404 containerd[1626]: time="2025-10-13T05:37:49.711337123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:49.712145 containerd[1626]: time="2025-10-13T05:37:49.712097358Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 2.816879939s" Oct 13 05:37:49.712145 containerd[1626]: time="2025-10-13T05:37:49.712128176Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Oct 13 05:37:49.712617 containerd[1626]: time="2025-10-13T05:37:49.712581035Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 13 05:37:50.321395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2365871374.mount: Deactivated successfully. 
Oct 13 05:37:52.217179 containerd[1626]: time="2025-10-13T05:37:52.217123231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:52.233750 containerd[1626]: time="2025-10-13T05:37:52.233718497Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Oct 13 05:37:52.262985 containerd[1626]: time="2025-10-13T05:37:52.262935088Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:52.287504 containerd[1626]: time="2025-10-13T05:37:52.287449503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:52.288620 containerd[1626]: time="2025-10-13T05:37:52.288560316Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.575951388s" Oct 13 05:37:52.288620 containerd[1626]: time="2025-10-13T05:37:52.288610961Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Oct 13 05:37:52.289137 containerd[1626]: time="2025-10-13T05:37:52.289106500Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 13 05:37:54.416021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2439788541.mount: Deactivated successfully. 
Oct 13 05:37:54.422084 containerd[1626]: time="2025-10-13T05:37:54.422017237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:54.422977 containerd[1626]: time="2025-10-13T05:37:54.422935288Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Oct 13 05:37:54.424284 containerd[1626]: time="2025-10-13T05:37:54.424251346Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:54.426477 containerd[1626]: time="2025-10-13T05:37:54.426441633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:54.427034 containerd[1626]: time="2025-10-13T05:37:54.426986664Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 2.137857563s" Oct 13 05:37:54.427034 containerd[1626]: time="2025-10-13T05:37:54.427023664Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Oct 13 05:37:54.427577 containerd[1626]: time="2025-10-13T05:37:54.427525164Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 13 05:37:58.012469 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 13 05:37:58.024750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:37:58.375214 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:37:58.403673 (kubelet)[2272]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:37:58.617536 kubelet[2272]: E1013 05:37:58.617464 2272 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:37:58.621820 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:37:58.622100 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:37:58.622586 systemd[1]: kubelet.service: Consumed 589ms CPU time, 110.2M memory peak. 
Oct 13 05:38:01.478602 containerd[1626]: time="2025-10-13T05:38:01.478527141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:38:01.517362 containerd[1626]: time="2025-10-13T05:38:01.517284130Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Oct 13 05:38:01.556828 containerd[1626]: time="2025-10-13T05:38:01.556758495Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:38:01.604074 containerd[1626]: time="2025-10-13T05:38:01.604004782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:38:01.605463 containerd[1626]: time="2025-10-13T05:38:01.605406890Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 7.17783586s" Oct 13 05:38:01.605506 containerd[1626]: time="2025-10-13T05:38:01.605460531Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Oct 13 05:38:05.798908 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:38:05.799085 systemd[1]: kubelet.service: Consumed 589ms CPU time, 110.2M memory peak. Oct 13 05:38:05.801283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:38:05.834182 systemd[1]: Reload requested from client PID 2314 ('systemctl') (unit session-7.scope)... Oct 13 05:38:05.834205 systemd[1]: Reloading... Oct 13 05:38:05.986877 zram_generator::config[2358]: No configuration found. Oct 13 05:38:07.057724 systemd[1]: Reloading finished in 1223 ms. Oct 13 05:38:07.121618 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 13 05:38:07.121721 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 13 05:38:07.122073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:38:07.122118 systemd[1]: kubelet.service: Consumed 167ms CPU time, 98.1M memory peak. Oct 13 05:38:07.123906 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:38:07.313026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:38:07.324122 (kubelet)[2406]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:38:07.381347 kubelet[2406]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:38:07.381347 kubelet[2406]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 13 05:38:07.381743 kubelet[2406]: I1013 05:38:07.381369 2406 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:38:07.914585 kubelet[2406]: I1013 05:38:07.914536 2406 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 13 05:38:07.914585 kubelet[2406]: I1013 05:38:07.914565 2406 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:38:07.914585 kubelet[2406]: I1013 05:38:07.914593 2406 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 13 05:38:07.914585 kubelet[2406]: I1013 05:38:07.914599 2406 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 05:38:07.914826 kubelet[2406]: I1013 05:38:07.914809 2406 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:38:08.483702 kubelet[2406]: E1013 05:38:08.483617 2406 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 13 05:38:08.484243 kubelet[2406]: I1013 05:38:08.484001 2406 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:38:08.491370 kubelet[2406]: I1013 05:38:08.491344 2406 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:38:08.497070 kubelet[2406]: I1013 05:38:08.497005 2406 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 13 05:38:08.498493 kubelet[2406]: I1013 05:38:08.498434 2406 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:38:08.498614 kubelet[2406]: I1013 05:38:08.498466 2406 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:38:08.498614 kubelet[2406]: I1013 05:38:08.498615 2406 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 05:38:08.498807 kubelet[2406]: I1013 05:38:08.498625 2406 container_manager_linux.go:306] "Creating device plugin manager" Oct 13 05:38:08.498807 kubelet[2406]: I1013 05:38:08.498740 2406 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 13 05:38:08.569528 kubelet[2406]: I1013 05:38:08.569440 2406 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:38:08.569718 kubelet[2406]: I1013 05:38:08.569688 2406 kubelet.go:475] "Attempting to sync node with API server" Oct 13 05:38:08.569718 kubelet[2406]: I1013 05:38:08.569704 2406 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:38:08.569771 kubelet[2406]: I1013 05:38:08.569726 2406 kubelet.go:387] "Adding apiserver pod source" Oct 13 05:38:08.569771 kubelet[2406]: I1013 05:38:08.569755 2406 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:38:08.600860 kubelet[2406]: E1013 05:38:08.599243 2406 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:38:08.600860 kubelet[2406]: E1013 05:38:08.599972 2406 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:38:08.602006 kubelet[2406]: I1013 05:38:08.601944 2406 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:38:08.602818 kubelet[2406]: I1013 05:38:08.602785 2406 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:38:08.602818 kubelet[2406]: I1013 05:38:08.602819 2406 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 13 05:38:08.602963 kubelet[2406]: W1013 05:38:08.602913 2406 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 13 05:38:08.607280 kubelet[2406]: I1013 05:38:08.607228 2406 server.go:1262] "Started kubelet" Oct 13 05:38:08.607345 kubelet[2406]: I1013 05:38:08.607312 2406 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:38:08.608258 kubelet[2406]: I1013 05:38:08.608223 2406 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:38:08.618312 kubelet[2406]: I1013 05:38:08.618043 2406 server.go:310] "Adding debug handlers to kubelet server" Oct 13 05:38:08.619001 kubelet[2406]: I1013 05:38:08.618920 2406 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:38:08.620704 kubelet[2406]: I1013 05:38:08.620632 2406 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 13 05:38:08.620901 kubelet[2406]: E1013 05:38:08.620880 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:08.621495 kubelet[2406]: E1013 05:38:08.621224 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="200ms" Oct 13 05:38:08.621495 kubelet[2406]: I1013 05:38:08.621398 2406 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:38:08.621495 kubelet[2406]: I1013 05:38:08.621440 2406 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 13 05:38:08.621897 kubelet[2406]: I1013 05:38:08.621878 2406 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:38:08.622300 kubelet[2406]: I1013 05:38:08.622235 2406 factory.go:223] Registration of the systemd container factory successfully Oct 13 05:38:08.622453 kubelet[2406]: I1013 05:38:08.622336 2406 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:38:08.622800 kubelet[2406]: E1013 05:38:08.620423 2406 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.130:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.130:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186df65854834f7a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-13 05:38:08.607178618 +0000 UTC m=+1.275267705,LastTimestamp:2025-10-13 05:38:08.607178618 +0000 UTC m=+1.275267705,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 13 05:38:08.622800 kubelet[2406]: I1013 05:38:08.622758 2406 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 13 05:38:08.622990 kubelet[2406]: I1013 05:38:08.622982 2406 reconciler.go:29] "Reconciler: start to sync state" Oct 13 05:38:08.623413 kubelet[2406]: E1013 05:38:08.623383 2406 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 05:38:08.623533 kubelet[2406]: E1013 05:38:08.623517 2406 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:38:08.624770 kubelet[2406]: I1013 05:38:08.624744 2406 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:38:08.637865 kubelet[2406]: I1013 05:38:08.637640 2406 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 13 05:38:08.639102 kubelet[2406]: I1013 05:38:08.639068 2406 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 13 05:38:08.639102 kubelet[2406]: I1013 05:38:08.639102 2406 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 13 05:38:08.639226 kubelet[2406]: I1013 05:38:08.639132 2406 kubelet.go:2427] "Starting kubelet main sync loop" Oct 13 05:38:08.639226 kubelet[2406]: E1013 05:38:08.639180 2406 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:38:08.640173 kubelet[2406]: E1013 05:38:08.640136 2406 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:38:08.641250 kubelet[2406]: I1013 05:38:08.641203 2406 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:38:08.641250 kubelet[2406]: I1013 05:38:08.641217 2406 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:38:08.641250 kubelet[2406]: I1013 05:38:08.641233 2406 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:38:08.721206 kubelet[2406]: E1013 05:38:08.721128 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:08.739793 kubelet[2406]: E1013 05:38:08.739658 2406 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 13 05:38:08.745263 kubelet[2406]: I1013 05:38:08.745227 2406 policy_none.go:49] "None policy: Start" Oct 13 05:38:08.745263 kubelet[2406]: I1013 05:38:08.745259 2406 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 13 05:38:08.745263 kubelet[2406]: I1013 05:38:08.745283 2406 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 13 05:38:08.822019 kubelet[2406]: E1013 05:38:08.821931 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:08.822373 kubelet[2406]: E1013 05:38:08.822310 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="400ms" Oct 13 05:38:08.923145 kubelet[2406]: E1013 05:38:08.923098 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:08.940674 kubelet[2406]: E1013 05:38:08.940611 2406 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 13 05:38:09.024268 kubelet[2406]: E1013 05:38:09.024209 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:09.024594 kubelet[2406]: I1013 05:38:09.024547 2406 policy_none.go:47] "Start" Oct 13 05:38:09.029310 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 13 05:38:09.049955 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 13 05:38:09.053031 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 13 05:38:09.073758 kubelet[2406]: E1013 05:38:09.073684 2406 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:38:09.073986 kubelet[2406]: I1013 05:38:09.073918 2406 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:38:09.073986 kubelet[2406]: I1013 05:38:09.073938 2406 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:38:09.074224 kubelet[2406]: I1013 05:38:09.074166 2406 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:38:09.075251 kubelet[2406]: E1013 05:38:09.075225 2406 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 05:38:09.075333 kubelet[2406]: E1013 05:38:09.075263 2406 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 13 05:38:09.175749 kubelet[2406]: I1013 05:38:09.175683 2406 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:38:09.176082 kubelet[2406]: E1013 05:38:09.176056 2406 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Oct 13 05:38:09.223925 kubelet[2406]: E1013 05:38:09.223867 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="800ms" Oct 13 05:38:09.377061 kubelet[2406]: I1013 05:38:09.376955 2406 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:38:09.377356 kubelet[2406]: E1013 05:38:09.377270 2406 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Oct 13 05:38:09.427868 kubelet[2406]: I1013 05:38:09.427784 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/56c0103eb97f538feea14d587ebe9cab-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"56c0103eb97f538feea14d587ebe9cab\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:38:09.427868 kubelet[2406]: I1013 05:38:09.427847 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56c0103eb97f538feea14d587ebe9cab-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"56c0103eb97f538feea14d587ebe9cab\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:38:09.428041 kubelet[2406]: I1013 05:38:09.427895 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/56c0103eb97f538feea14d587ebe9cab-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"56c0103eb97f538feea14d587ebe9cab\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:38:09.468552 kubelet[2406]: E1013 05:38:09.468497 2406 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:38:09.763930 kubelet[2406]: E1013 05:38:09.763874 2406 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:38:09.779017 kubelet[2406]: I1013 05:38:09.778960 2406 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:38:09.779300 kubelet[2406]: E1013 05:38:09.779247 2406 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Oct 13 05:38:09.922726 kubelet[2406]: E1013 05:38:09.922660 2406 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:38:09.965472 systemd[1]: Created slice kubepods-burstable-pod56c0103eb97f538feea14d587ebe9cab.slice - libcontainer container kubepods-burstable-pod56c0103eb97f538feea14d587ebe9cab.slice. Oct 13 05:38:09.978994 kubelet[2406]: E1013 05:38:09.978947 2406 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 05:38:09.980740 kubelet[2406]: E1013 05:38:09.980706 2406 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:38:09.996124 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. 
Oct 13 05:38:09.998111 kubelet[2406]: E1013 05:38:09.998081 2406 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:38:10.024815 kubelet[2406]: E1013 05:38:10.024719 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="1.6s" Oct 13 05:38:10.031880 kubelet[2406]: I1013 05:38:10.031821 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:38:10.031935 kubelet[2406]: I1013 05:38:10.031885 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:38:10.031935 kubelet[2406]: I1013 05:38:10.031910 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:38:10.032002 kubelet[2406]: I1013 05:38:10.031929 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:38:10.032002 kubelet[2406]: I1013 05:38:10.031958 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 13 05:38:10.032002 kubelet[2406]: I1013 05:38:10.031982 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:38:10.222201 kubelet[2406]: E1013 05:38:10.222152 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:10.223007 containerd[1626]: time="2025-10-13T05:38:10.222971153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:56c0103eb97f538feea14d587ebe9cab,Namespace:kube-system,Attempt:0,}" Oct 13 05:38:10.400338 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer 
container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Oct 13 05:38:10.403140 kubelet[2406]: E1013 05:38:10.403102 2406 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:38:10.546154 kubelet[2406]: E1013 05:38:10.546104 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:10.546874 containerd[1626]: time="2025-10-13T05:38:10.546768655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 13 05:38:10.581246 kubelet[2406]: I1013 05:38:10.581211 2406 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:38:10.581646 kubelet[2406]: E1013 05:38:10.581610 2406 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Oct 13 05:38:10.598683 kubelet[2406]: E1013 05:38:10.598634 2406 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 13 05:38:10.673199 kubelet[2406]: E1013 05:38:10.672995 2406 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.130:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.130:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186df65854834f7a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-13 05:38:08.607178618 +0000 UTC m=+1.275267705,LastTimestamp:2025-10-13 05:38:08.607178618 +0000 UTC m=+1.275267705,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 13 05:38:10.705351 kubelet[2406]: E1013 05:38:10.705281 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:10.705924 containerd[1626]: time="2025-10-13T05:38:10.705880177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Oct 13 05:38:11.625866 kubelet[2406]: E1013 05:38:11.625794 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="3.2s" Oct 13 05:38:11.711708 kubelet[2406]: E1013 05:38:11.711666 2406 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:38:12.060284 kubelet[2406]: E1013 05:38:12.060232 2406 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:38:12.152974 kubelet[2406]: E1013 05:38:12.152927 2406 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 05:38:12.182931 kubelet[2406]: I1013 05:38:12.182906 2406 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:38:12.183199 kubelet[2406]: E1013 05:38:12.183158 2406 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Oct 13 05:38:12.458334 kubelet[2406]: E1013 05:38:12.458215 2406 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:38:14.414521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2548702886.mount: Deactivated successfully. 
Oct 13 05:38:14.652994 kubelet[2406]: E1013 05:38:14.652945 2406 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 13 05:38:14.826727 kubelet[2406]: E1013 05:38:14.826656 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="6.4s" Oct 13 05:38:14.926182 containerd[1626]: time="2025-10-13T05:38:14.926080238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:38:15.079095 containerd[1626]: time="2025-10-13T05:38:15.078906000Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 13 05:38:15.125397 containerd[1626]: time="2025-10-13T05:38:15.125322685Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:38:15.200685 containerd[1626]: time="2025-10-13T05:38:15.200614104Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:38:15.258500 containerd[1626]: time="2025-10-13T05:38:15.258388599Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 13 05:38:15.297166 containerd[1626]: time="2025-10-13T05:38:15.297072511Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:38:15.332049 containerd[1626]: time="2025-10-13T05:38:15.331869508Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 13 05:38:15.374971 containerd[1626]: time="2025-10-13T05:38:15.374886153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:38:15.376789 containerd[1626]: time="2025-10-13T05:38:15.376762428Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 4.397072269s" Oct 13 05:38:15.377408 containerd[1626]: time="2025-10-13T05:38:15.377354163Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 4.82720907s" 
Oct 13 05:38:15.377733 containerd[1626]: time="2025-10-13T05:38:15.377644334Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 4.492946033s" Oct 13 05:38:15.384477 kubelet[2406]: I1013 05:38:15.384440 2406 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:38:15.384905 kubelet[2406]: E1013 05:38:15.384877 2406 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Oct 13 05:38:16.033349 kubelet[2406]: E1013 05:38:16.033282 2406 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:38:16.267480 containerd[1626]: time="2025-10-13T05:38:16.267429381Z" level=info msg="connecting to shim ee747dd057cdd7c2a83015db396ae7b122e732a2075d0e92ac14dee962eddd38" address="unix:///run/containerd/s/0d03d85764d50f37a07497c6a6a2968b70dbd4ecf38d48d36fb43411cfe58d2b" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:38:16.298982 systemd[1]: Started cri-containerd-ee747dd057cdd7c2a83015db396ae7b122e732a2075d0e92ac14dee962eddd38.scope - libcontainer container ee747dd057cdd7c2a83015db396ae7b122e732a2075d0e92ac14dee962eddd38. Oct 13 05:38:16.394108 kubelet[2406]: E1013 05:38:16.394062 2406 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:38:16.609186 containerd[1626]: time="2025-10-13T05:38:16.609056292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee747dd057cdd7c2a83015db396ae7b122e732a2075d0e92ac14dee962eddd38\"" Oct 13 05:38:16.610433 kubelet[2406]: E1013 05:38:16.610391 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:16.688999 containerd[1626]: time="2025-10-13T05:38:16.688927587Z" level=info msg="CreateContainer within sandbox \"ee747dd057cdd7c2a83015db396ae7b122e732a2075d0e92ac14dee962eddd38\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 13 05:38:16.740717 containerd[1626]: time="2025-10-13T05:38:16.740663498Z" level=info msg="connecting to shim 73c9b2d2e8b2c531bc64f9354ddb47621bfbe6496c84c98c220e1c0e6b3eda33" address="unix:///run/containerd/s/619a0c4cb4257edabc8ad18efcd764048638900454da114ebc2f7451835b1694" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:38:16.773033 systemd[1]: Started cri-containerd-73c9b2d2e8b2c531bc64f9354ddb47621bfbe6496c84c98c220e1c0e6b3eda33.scope - libcontainer container 73c9b2d2e8b2c531bc64f9354ddb47621bfbe6496c84c98c220e1c0e6b3eda33. 
Oct 13 05:38:16.807087 containerd[1626]: time="2025-10-13T05:38:16.807005439Z" level=info msg="connecting to shim 017e4b9a7cea7705b7c305dd7354718da0aeffd3ef881820ae44a8c170cbde01" address="unix:///run/containerd/s/7d62a28618e2ee5b026b7b486fd0cb628dfd43d63a9f848888dab21a474bdf2b" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:38:16.843994 systemd[1]: Started cri-containerd-017e4b9a7cea7705b7c305dd7354718da0aeffd3ef881820ae44a8c170cbde01.scope - libcontainer container 017e4b9a7cea7705b7c305dd7354718da0aeffd3ef881820ae44a8c170cbde01. Oct 13 05:38:16.903988 containerd[1626]: time="2025-10-13T05:38:16.903748446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"73c9b2d2e8b2c531bc64f9354ddb47621bfbe6496c84c98c220e1c0e6b3eda33\"" Oct 13 05:38:16.905092 kubelet[2406]: E1013 05:38:16.905029 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:17.041308 kubelet[2406]: E1013 05:38:17.041249 2406 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:38:17.068878 containerd[1626]: time="2025-10-13T05:38:17.068758177Z" level=info msg="CreateContainer within sandbox \"73c9b2d2e8b2c531bc64f9354ddb47621bfbe6496c84c98c220e1c0e6b3eda33\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 13 05:38:17.209780 containerd[1626]: time="2025-10-13T05:38:17.209615540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:56c0103eb97f538feea14d587ebe9cab,Namespace:kube-system,Attempt:0,} returns sandbox id \"017e4b9a7cea7705b7c305dd7354718da0aeffd3ef881820ae44a8c170cbde01\"" Oct 13 05:38:17.210615 kubelet[2406]: E1013 05:38:17.210574 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:17.281606 containerd[1626]: time="2025-10-13T05:38:17.281540354Z" level=info msg="Container 20df723036c443d7292abb76acc7e7441d8979d573be74bb89c8af5f45ded50b: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:38:17.606598 containerd[1626]: time="2025-10-13T05:38:17.606543727Z" level=info msg="CreateContainer within sandbox \"017e4b9a7cea7705b7c305dd7354718da0aeffd3ef881820ae44a8c170cbde01\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 13 05:38:17.985479 containerd[1626]: time="2025-10-13T05:38:17.985348272Z" level=info msg="Container 2e6faf276fc2fd213754a096eaac1e6e5bc24b3641f7c8d6476d932a5ebb7792: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:38:17.986609 containerd[1626]: time="2025-10-13T05:38:17.986447948Z" level=info msg="CreateContainer within sandbox \"ee747dd057cdd7c2a83015db396ae7b122e732a2075d0e92ac14dee962eddd38\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"20df723036c443d7292abb76acc7e7441d8979d573be74bb89c8af5f45ded50b\"" Oct 13 05:38:17.987165 containerd[1626]: time="2025-10-13T05:38:17.987132506Z" level=info msg="StartContainer for \"20df723036c443d7292abb76acc7e7441d8979d573be74bb89c8af5f45ded50b\"" Oct 
13 05:38:17.988192 containerd[1626]: time="2025-10-13T05:38:17.988163532Z" level=info msg="connecting to shim 20df723036c443d7292abb76acc7e7441d8979d573be74bb89c8af5f45ded50b" address="unix:///run/containerd/s/0d03d85764d50f37a07497c6a6a2968b70dbd4ecf38d48d36fb43411cfe58d2b" protocol=ttrpc version=3 Oct 13 05:38:18.017957 systemd[1]: Started cri-containerd-20df723036c443d7292abb76acc7e7441d8979d573be74bb89c8af5f45ded50b.scope - libcontainer container 20df723036c443d7292abb76acc7e7441d8979d573be74bb89c8af5f45ded50b. Oct 13 05:38:18.288558 containerd[1626]: time="2025-10-13T05:38:18.288507269Z" level=info msg="StartContainer for \"20df723036c443d7292abb76acc7e7441d8979d573be74bb89c8af5f45ded50b\" returns successfully" Oct 13 05:38:18.446864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2582186263.mount: Deactivated successfully. Oct 13 05:38:18.451008 containerd[1626]: time="2025-10-13T05:38:18.450953862Z" level=info msg="Container 0cd728c77edf932faac31d88905fed4242e37b9c630ee4d180eb09620b110331: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:38:18.466592 kubelet[2406]: E1013 05:38:18.466541 2406 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 05:38:18.661290 kubelet[2406]: E1013 05:38:18.661188 2406 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:38:18.661402 kubelet[2406]: E1013 05:38:18.661297 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:18.705062 containerd[1626]: time="2025-10-13T05:38:18.705017611Z" level=info msg="CreateContainer within sandbox \"73c9b2d2e8b2c531bc64f9354ddb47621bfbe6496c84c98c220e1c0e6b3eda33\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2e6faf276fc2fd213754a096eaac1e6e5bc24b3641f7c8d6476d932a5ebb7792\"" Oct 13 05:38:18.705436 containerd[1626]: time="2025-10-13T05:38:18.705410395Z" level=info msg="StartContainer for \"2e6faf276fc2fd213754a096eaac1e6e5bc24b3641f7c8d6476d932a5ebb7792\"" Oct 13 05:38:18.706396 containerd[1626]: time="2025-10-13T05:38:18.706372970Z" level=info msg="connecting to shim 2e6faf276fc2fd213754a096eaac1e6e5bc24b3641f7c8d6476d932a5ebb7792" address="unix:///run/containerd/s/619a0c4cb4257edabc8ad18efcd764048638900454da114ebc2f7451835b1694" protocol=ttrpc version=3 Oct 13 05:38:18.730016 systemd[1]: Started cri-containerd-2e6faf276fc2fd213754a096eaac1e6e5bc24b3641f7c8d6476d932a5ebb7792.scope - libcontainer container 2e6faf276fc2fd213754a096eaac1e6e5bc24b3641f7c8d6476d932a5ebb7792. 
Oct 13 05:38:19.075712 kubelet[2406]: E1013 05:38:19.075653 2406 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 13 05:38:19.095655 containerd[1626]: time="2025-10-13T05:38:19.095593216Z" level=info msg="CreateContainer within sandbox \"017e4b9a7cea7705b7c305dd7354718da0aeffd3ef881820ae44a8c170cbde01\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0cd728c77edf932faac31d88905fed4242e37b9c630ee4d180eb09620b110331\"" Oct 13 05:38:19.096532 containerd[1626]: time="2025-10-13T05:38:19.096510103Z" level=info msg="StartContainer for \"0cd728c77edf932faac31d88905fed4242e37b9c630ee4d180eb09620b110331\"" Oct 13 05:38:19.097521 containerd[1626]: time="2025-10-13T05:38:19.097498355Z" level=info msg="connecting to shim 0cd728c77edf932faac31d88905fed4242e37b9c630ee4d180eb09620b110331" address="unix:///run/containerd/s/7d62a28618e2ee5b026b7b486fd0cb628dfd43d63a9f848888dab21a474bdf2b" protocol=ttrpc version=3 Oct 13 05:38:19.097929 containerd[1626]: time="2025-10-13T05:38:19.097898693Z" level=info msg="StartContainer for \"2e6faf276fc2fd213754a096eaac1e6e5bc24b3641f7c8d6476d932a5ebb7792\" returns successfully" Oct 13 05:38:19.142036 systemd[1]: Started cri-containerd-0cd728c77edf932faac31d88905fed4242e37b9c630ee4d180eb09620b110331.scope - libcontainer container 0cd728c77edf932faac31d88905fed4242e37b9c630ee4d180eb09620b110331. Oct 13 05:38:19.289782 containerd[1626]: time="2025-10-13T05:38:19.289738899Z" level=info msg="StartContainer for \"0cd728c77edf932faac31d88905fed4242e37b9c630ee4d180eb09620b110331\" returns successfully" Oct 13 05:38:19.665166 kubelet[2406]: E1013 05:38:19.665126 2406 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:38:19.665557 kubelet[2406]: E1013 05:38:19.665267 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:19.667014 kubelet[2406]: E1013 05:38:19.666984 2406 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:38:19.667180 kubelet[2406]: E1013 05:38:19.667085 2406 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:38:19.667298 kubelet[2406]: E1013 05:38:19.667248 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:19.667298 kubelet[2406]: E1013 05:38:19.667258 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:20.050941 update_engine[1607]: I20251013 05:38:20.050783 1607 update_attempter.cc:509] Updating boot flags... 
Oct 13 05:38:20.671646 kubelet[2406]: E1013 05:38:20.671604 2406 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:38:20.672122 kubelet[2406]: E1013 05:38:20.671722 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:20.672122 kubelet[2406]: E1013 05:38:20.671932 2406 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:38:20.672122 kubelet[2406]: E1013 05:38:20.672065 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:20.672821 kubelet[2406]: E1013 05:38:20.672795 2406 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:38:20.672953 kubelet[2406]: E1013 05:38:20.672918 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:21.243482 kubelet[2406]: E1013 05:38:21.243372 2406 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.186df65854834f7a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-13 05:38:08.607178618 +0000 UTC m=+1.275267705,LastTimestamp:2025-10-13 05:38:08.607178618 +0000 UTC m=+1.275267705,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 13 05:38:21.302912 kubelet[2406]: E1013 05:38:21.302823 2406 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 13 05:38:21.610537 kubelet[2406]: E1013 05:38:21.610481 2406 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Oct 13 05:38:21.673856 kubelet[2406]: E1013 05:38:21.673794 2406 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:38:21.674380 kubelet[2406]: E1013 05:38:21.673950 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:21.674380 kubelet[2406]: E1013 05:38:21.674157 2406 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:38:21.674380 kubelet[2406]: E1013 05:38:21.674288 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:21.786760 kubelet[2406]: I1013 05:38:21.786710 2406 
kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:38:21.815465 kubelet[2406]: I1013 05:38:21.815408 2406 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 05:38:21.815465 kubelet[2406]: E1013 05:38:21.815449 2406 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 13 05:38:21.879164 kubelet[2406]: E1013 05:38:21.878980 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:21.979449 kubelet[2406]: E1013 05:38:21.979380 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:22.080011 kubelet[2406]: E1013 05:38:22.079950 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:22.180495 kubelet[2406]: E1013 05:38:22.180056 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:22.280563 kubelet[2406]: E1013 05:38:22.280499 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:22.381595 kubelet[2406]: E1013 05:38:22.381554 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:22.482577 kubelet[2406]: E1013 05:38:22.482432 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:22.583340 kubelet[2406]: E1013 05:38:22.583269 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:22.684250 kubelet[2406]: E1013 05:38:22.684191 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:22.784784 kubelet[2406]: E1013 05:38:22.784746 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:22.885459 kubelet[2406]: E1013 05:38:22.885418 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:22.985969 kubelet[2406]: E1013 05:38:22.985903 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:23.086899 kubelet[2406]: E1013 05:38:23.086737 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:23.187879 kubelet[2406]: E1013 05:38:23.187816 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:23.271348 kubelet[2406]: E1013 05:38:23.271295 2406 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:38:23.271511 kubelet[2406]: E1013 05:38:23.271460 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:23.288434 kubelet[2406]: E1013 05:38:23.288362 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:23.389116 kubelet[2406]: E1013 05:38:23.388981 2406 
kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:23.490024 kubelet[2406]: E1013 05:38:23.489965 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:23.590982 kubelet[2406]: E1013 05:38:23.590916 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:23.691231 kubelet[2406]: E1013 05:38:23.691071 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:23.791906 kubelet[2406]: E1013 05:38:23.791823 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:23.892772 kubelet[2406]: E1013 05:38:23.892687 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:23.993161 kubelet[2406]: E1013 05:38:23.993029 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:24.094032 kubelet[2406]: E1013 05:38:24.093983 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:24.194606 kubelet[2406]: E1013 05:38:24.194481 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:24.295510 kubelet[2406]: E1013 05:38:24.295445 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:24.396381 kubelet[2406]: E1013 05:38:24.396323 2406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:24.521912 kubelet[2406]: I1013 05:38:24.521818 2406 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:38:24.607533 kubelet[2406]: I1013 05:38:24.607412 2406 apiserver.go:52] "Watching apiserver" Oct 13 05:38:24.623051 kubelet[2406]: I1013 05:38:24.622995 2406 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 13 05:38:24.989628 kubelet[2406]: I1013 05:38:24.989491 2406 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:38:24.990126 kubelet[2406]: E1013 05:38:24.990032 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:25.242971 kubelet[2406]: I1013 05:38:25.242813 2406 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:38:25.243160 kubelet[2406]: E1013 05:38:25.243137 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:25.326817 kubelet[2406]: E1013 05:38:25.326770 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:26.410728 kubelet[2406]: E1013 05:38:26.410663 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 
13 05:38:28.281575 systemd[1]: Reload requested from client PID 2718 ('systemctl') (unit session-7.scope)... Oct 13 05:38:28.281607 systemd[1]: Reloading... Oct 13 05:38:28.421237 zram_generator::config[2759]: No configuration found. Oct 13 05:38:28.758939 systemd[1]: Reloading finished in 476 ms. Oct 13 05:38:28.795877 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:38:28.824816 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 05:38:28.825300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:38:28.825380 systemd[1]: kubelet.service: Consumed 1.462s CPU time, 127.8M memory peak. Oct 13 05:38:28.832974 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:38:29.169611 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:38:29.192375 (kubelet)[2807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:38:29.272333 kubelet[2807]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:38:29.272333 kubelet[2807]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:38:29.272333 kubelet[2807]: I1013 05:38:29.270528 2807 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:38:29.296227 kubelet[2807]: I1013 05:38:29.295158 2807 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 13 05:38:29.296227 kubelet[2807]: I1013 05:38:29.295203 2807 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:38:29.296227 kubelet[2807]: I1013 05:38:29.295254 2807 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 13 05:38:29.296227 kubelet[2807]: I1013 05:38:29.295268 2807 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 05:38:29.296227 kubelet[2807]: I1013 05:38:29.295856 2807 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:38:29.298955 kubelet[2807]: I1013 05:38:29.298438 2807 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 13 05:38:29.303908 kubelet[2807]: I1013 05:38:29.301454 2807 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:38:29.317199 kubelet[2807]: I1013 05:38:29.311231 2807 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:38:29.317199 kubelet[2807]: I1013 05:38:29.316797 2807 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 13 05:38:29.317199 kubelet[2807]: I1013 05:38:29.317044 2807 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:38:29.317457 kubelet[2807]: I1013 05:38:29.317080 2807 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:38:29.317457 kubelet[2807]: I1013 05:38:29.317459 2807 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 05:38:29.317607 kubelet[2807]: I1013 05:38:29.317470 2807 container_manager_linux.go:306] "Creating device plugin manager" Oct 13 05:38:29.317607 kubelet[2807]: I1013 05:38:29.317497 2807 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 13 05:38:29.318487 kubelet[2807]: I1013 05:38:29.318439 2807 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:38:29.318664 kubelet[2807]: I1013 05:38:29.318638 2807 kubelet.go:475] "Attempting to sync node with API server" Oct 13 05:38:29.318702 kubelet[2807]: I1013 05:38:29.318679 2807 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:38:29.318739 kubelet[2807]: I1013 05:38:29.318713 2807 kubelet.go:387] "Adding apiserver pod source" Oct 13 05:38:29.318769 kubelet[2807]: I1013 05:38:29.318745 2807 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:38:29.321034 kubelet[2807]: I1013 05:38:29.321004 2807 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:38:29.321628 kubelet[2807]: I1013 05:38:29.321595 2807 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:38:29.321664 kubelet[2807]: I1013 05:38:29.321633 2807 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 13 05:38:29.332158 
kubelet[2807]: I1013 05:38:29.332098 2807 server.go:1262] "Started kubelet" Oct 13 05:38:29.334139 kubelet[2807]: I1013 05:38:29.334033 2807 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:38:29.334139 kubelet[2807]: I1013 05:38:29.334135 2807 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 13 05:38:29.334599 kubelet[2807]: I1013 05:38:29.334563 2807 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:38:29.336618 kubelet[2807]: I1013 05:38:29.336569 2807 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:38:29.346504 kubelet[2807]: I1013 05:38:29.346464 2807 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:38:29.348073 kubelet[2807]: I1013 05:38:29.348029 2807 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:38:29.348333 kubelet[2807]: I1013 05:38:29.348298 2807 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 13 05:38:29.348579 kubelet[2807]: E1013 05:38:29.348547 2807 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:38:29.348911 kubelet[2807]: I1013 05:38:29.348885 2807 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 13 05:38:29.351353 kubelet[2807]: I1013 05:38:29.350359 2807 reconciler.go:29] "Reconciler: start to sync state" Oct 13 05:38:29.357481 kubelet[2807]: I1013 05:38:29.357413 2807 server.go:310] "Adding debug handlers to kubelet server" Oct 13 05:38:29.361612 kubelet[2807]: E1013 05:38:29.361495 2807 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:38:29.361957 kubelet[2807]: I1013 05:38:29.361804 2807 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:38:29.361957 kubelet[2807]: I1013 05:38:29.361845 2807 factory.go:223] Registration of the systemd container factory successfully Oct 13 05:38:29.361957 kubelet[2807]: I1013 05:38:29.361935 2807 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:38:29.406132 kubelet[2807]: I1013 05:38:29.406041 2807 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 13 05:38:29.413615 kubelet[2807]: I1013 05:38:29.413565 2807 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 13 05:38:29.413747 kubelet[2807]: I1013 05:38:29.413638 2807 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 13 05:38:29.413747 kubelet[2807]: I1013 05:38:29.413691 2807 kubelet.go:2427] "Starting kubelet main sync loop" Oct 13 05:38:29.413856 kubelet[2807]: E1013 05:38:29.413791 2807 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:38:29.464639 kubelet[2807]: I1013 05:38:29.463759 2807 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:38:29.464639 kubelet[2807]: I1013 05:38:29.463820 2807 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:38:29.464639 kubelet[2807]: I1013 05:38:29.463887 2807 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:38:29.464639 kubelet[2807]: I1013 05:38:29.464140 2807 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 13 05:38:29.464639 kubelet[2807]: I1013 05:38:29.464151 2807 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 13 05:38:29.464639 kubelet[2807]: I1013 05:38:29.464171 2807 policy_none.go:49] "None policy: Start" Oct 13 05:38:29.464639 kubelet[2807]: I1013 05:38:29.464181 2807 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 13 05:38:29.464639 kubelet[2807]: I1013 05:38:29.464224 2807 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 13 05:38:29.464639 kubelet[2807]: I1013 05:38:29.464417 2807 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 13 05:38:29.464639 kubelet[2807]: I1013 05:38:29.464427 2807 policy_none.go:47] "Start" Oct 13 05:38:29.473562 kubelet[2807]: E1013 05:38:29.473482 2807 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:38:29.473897 kubelet[2807]: I1013 05:38:29.473819 2807 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:38:29.473950 kubelet[2807]: I1013 05:38:29.473877 2807 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:38:29.474676 kubelet[2807]: I1013 05:38:29.474638 2807 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:38:29.477343 kubelet[2807]: E1013 05:38:29.477227 2807 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 13 05:38:29.518367 kubelet[2807]: I1013 05:38:29.515137 2807 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:38:29.518367 kubelet[2807]: I1013 05:38:29.515586 2807 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:38:29.519058 kubelet[2807]: I1013 05:38:29.519040 2807 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:38:29.552021 kubelet[2807]: I1013 05:38:29.550900 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:38:29.552021 kubelet[2807]: I1013 05:38:29.551461 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:38:29.552801 kubelet[2807]: I1013 05:38:29.552276 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:38:29.552801 kubelet[2807]: I1013 05:38:29.552323 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/56c0103eb97f538feea14d587ebe9cab-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"56c0103eb97f538feea14d587ebe9cab\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:38:29.552801 kubelet[2807]: I1013 05:38:29.552344 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56c0103eb97f538feea14d587ebe9cab-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"56c0103eb97f538feea14d587ebe9cab\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:38:29.552801 kubelet[2807]: I1013 05:38:29.552361 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/56c0103eb97f538feea14d587ebe9cab-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"56c0103eb97f538feea14d587ebe9cab\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:38:29.552801 kubelet[2807]: I1013 05:38:29.552379 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:38:29.552973 kubelet[2807]: I1013 05:38:29.552396 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:38:29.552973 kubelet[2807]: I1013 05:38:29.552415 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 13 05:38:29.594487 kubelet[2807]: I1013 05:38:29.591710 2807 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:38:29.774998 kubelet[2807]: E1013 05:38:29.774927 2807 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 13 05:38:29.775302 kubelet[2807]: E1013 05:38:29.775265 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:29.855341 kubelet[2807]: E1013 05:38:29.854252 2807 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 13 05:38:29.855341 kubelet[2807]: E1013 05:38:29.854557 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:29.855341 kubelet[2807]: E1013 05:38:29.854868 2807 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:38:29.856863 kubelet[2807]: E1013 05:38:29.856538 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:29.858071 kubelet[2807]: I1013 05:38:29.858048 2807 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 13 05:38:29.858165 kubelet[2807]: I1013 05:38:29.858107 2807 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 05:38:30.319402 kubelet[2807]: I1013 05:38:30.319136 2807 apiserver.go:52] "Watching apiserver" Oct 13 05:38:30.350001 kubelet[2807]: I1013 05:38:30.349817 2807 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 13 05:38:30.448311 kubelet[2807]: I1013 05:38:30.445773 2807 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:38:30.448311 kubelet[2807]: I1013 05:38:30.446143 2807 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:38:30.448311 kubelet[2807]: E1013 05:38:30.447469 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:30.805280 kubelet[2807]: E1013 05:38:30.805116 2807 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 13 05:38:30.805714 kubelet[2807]: E1013 05:38:30.805430 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:30.806470 kubelet[2807]: E1013 05:38:30.806210 2807 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 13 05:38:30.806470 kubelet[2807]: E1013 05:38:30.806392 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:30.985653 kubelet[2807]: I1013 05:38:30.985497 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.985468683 podStartE2EDuration="5.985468683s" podCreationTimestamp="2025-10-13 05:38:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:38:30.805538922 +0000 UTC m=+1.604475158" watchObservedRunningTime="2025-10-13 05:38:30.985468683 +0000 UTC m=+1.784404919" Oct 13 05:38:31.013988 sudo[2850]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 13 05:38:31.014393 sudo[2850]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Oct 13 05:38:31.447482 kubelet[2807]: E1013 05:38:31.447404 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:31.448459 kubelet[2807]: E1013 05:38:31.447744 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:31.460729 sudo[2850]: pam_unix(sudo:session): session closed for user root Oct 13 05:38:32.269030 kubelet[2807]: E1013 05:38:32.268990 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:32.449046 kubelet[2807]: E1013 05:38:32.448948 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:33.294507 kubelet[2807]: I1013 05:38:33.294453 2807 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 13 05:38:33.295039 containerd[1626]: time="2025-10-13T05:38:33.294885620Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 13 05:38:33.295515 kubelet[2807]: I1013 05:38:33.295242 2807 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 13 05:38:34.658126 systemd[1]: Created slice kubepods-besteffort-pod68f8d6fd_1dd7_4ffa_be70_07ffd63730a3.slice - libcontainer container kubepods-besteffort-pod68f8d6fd_1dd7_4ffa_be70_07ffd63730a3.slice. 
Oct 13 05:38:34.688305 kubelet[2807]: I1013 05:38:34.688249 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68f8d6fd-1dd7-4ffa-be70-07ffd63730a3-xtables-lock\") pod \"kube-proxy-b2ffl\" (UID: \"68f8d6fd-1dd7-4ffa-be70-07ffd63730a3\") " pod="kube-system/kube-proxy-b2ffl" Oct 13 05:38:34.688305 kubelet[2807]: I1013 05:38:34.688290 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68f8d6fd-1dd7-4ffa-be70-07ffd63730a3-lib-modules\") pod \"kube-proxy-b2ffl\" (UID: \"68f8d6fd-1dd7-4ffa-be70-07ffd63730a3\") " pod="kube-system/kube-proxy-b2ffl" Oct 13 05:38:34.688305 kubelet[2807]: I1013 05:38:34.688315 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/68f8d6fd-1dd7-4ffa-be70-07ffd63730a3-kube-proxy\") pod \"kube-proxy-b2ffl\" (UID: \"68f8d6fd-1dd7-4ffa-be70-07ffd63730a3\") " pod="kube-system/kube-proxy-b2ffl" Oct 13 05:38:34.688795 kubelet[2807]: I1013 05:38:34.688336 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6dd9\" (UniqueName: \"kubernetes.io/projected/68f8d6fd-1dd7-4ffa-be70-07ffd63730a3-kube-api-access-d6dd9\") pod \"kube-proxy-b2ffl\" (UID: \"68f8d6fd-1dd7-4ffa-be70-07ffd63730a3\") " pod="kube-system/kube-proxy-b2ffl" Oct 13 05:38:34.766608 systemd[1]: Created slice kubepods-burstable-pode078b99f_9980_42be_8af6_381000d811cb.slice - libcontainer container kubepods-burstable-pode078b99f_9980_42be_8af6_381000d811cb.slice. Oct 13 05:38:34.789437 kubelet[2807]: I1013 05:38:34.789391 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-lib-modules\") pod \"cilium-gc9bc\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " pod="kube-system/cilium-gc9bc" Oct 13 05:38:34.789437 kubelet[2807]: I1013 05:38:34.789435 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-host-proc-sys-net\") pod \"cilium-gc9bc\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " pod="kube-system/cilium-gc9bc" Oct 13 05:38:34.789637 kubelet[2807]: I1013 05:38:34.789453 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-host-proc-sys-kernel\") pod \"cilium-gc9bc\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " pod="kube-system/cilium-gc9bc" Oct 13 05:38:34.789637 kubelet[2807]: I1013 05:38:34.789478 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-cilium-run\") pod \"cilium-gc9bc\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " pod="kube-system/cilium-gc9bc" Oct 13 05:38:34.789637 kubelet[2807]: I1013 05:38:34.789496 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-cni-path\") pod \"cilium-gc9bc\" (UID: 
\"e078b99f-9980-42be-8af6-381000d811cb\") " pod="kube-system/cilium-gc9bc" Oct 13 05:38:34.789637 kubelet[2807]: I1013 05:38:34.789513 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-xtables-lock\") pod \"cilium-gc9bc\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " pod="kube-system/cilium-gc9bc" Oct 13 05:38:34.789637 kubelet[2807]: I1013 05:38:34.789549 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e078b99f-9980-42be-8af6-381000d811cb-cilium-config-path\") pod \"cilium-gc9bc\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " pod="kube-system/cilium-gc9bc" Oct 13 05:38:34.789637 kubelet[2807]: I1013 05:38:34.789568 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e078b99f-9980-42be-8af6-381000d811cb-hubble-tls\") pod \"cilium-gc9bc\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " pod="kube-system/cilium-gc9bc" Oct 13 05:38:34.789826 kubelet[2807]: I1013 05:38:34.789621 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e078b99f-9980-42be-8af6-381000d811cb-clustermesh-secrets\") pod \"cilium-gc9bc\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " pod="kube-system/cilium-gc9bc" Oct 13 05:38:34.789826 kubelet[2807]: I1013 05:38:34.789635 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9c4q\" (UniqueName: \"kubernetes.io/projected/e078b99f-9980-42be-8af6-381000d811cb-kube-api-access-g9c4q\") pod \"cilium-gc9bc\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " pod="kube-system/cilium-gc9bc" Oct 13 05:38:34.789826 kubelet[2807]: I1013 05:38:34.789657 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-bpf-maps\") pod \"cilium-gc9bc\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " pod="kube-system/cilium-gc9bc" Oct 13 05:38:34.789826 kubelet[2807]: I1013 05:38:34.789669 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-hostproc\") pod \"cilium-gc9bc\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " pod="kube-system/cilium-gc9bc" Oct 13 05:38:34.789826 kubelet[2807]: I1013 05:38:34.789680 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-cilium-cgroup\") pod \"cilium-gc9bc\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " pod="kube-system/cilium-gc9bc" Oct 13 05:38:34.789826 kubelet[2807]: I1013 05:38:34.789693 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-etc-cni-netd\") pod \"cilium-gc9bc\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " pod="kube-system/cilium-gc9bc" Oct 13 05:38:34.910479 sudo[1837]: pam_unix(sudo:session): session closed for user root Oct 13 05:38:34.912327 
sshd[1836]: Connection closed by 10.0.0.1 port 59882 Oct 13 05:38:34.919810 systemd[1]: Created slice kubepods-besteffort-pod602b9dd7_dee3_4739_9811_81ecd15eb6d7.slice - libcontainer container kubepods-besteffort-pod602b9dd7_dee3_4739_9811_81ecd15eb6d7.slice. Oct 13 05:38:34.934094 sshd-session[1833]: pam_unix(sshd:session): session closed for user core Oct 13 05:38:34.939357 systemd[1]: sshd@6-10.0.0.130:22-10.0.0.1:59882.service: Deactivated successfully. Oct 13 05:38:34.941496 systemd[1]: session-7.scope: Deactivated successfully. Oct 13 05:38:34.941706 systemd[1]: session-7.scope: Consumed 7.614s CPU time, 260.1M memory peak. Oct 13 05:38:34.942912 systemd-logind[1604]: Session 7 logged out. Waiting for processes to exit. Oct 13 05:38:34.944596 systemd-logind[1604]: Removed session 7. Oct 13 05:38:34.991382 kubelet[2807]: I1013 05:38:34.991270 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/602b9dd7-dee3-4739-9811-81ecd15eb6d7-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-fbmj5\" (UID: \"602b9dd7-dee3-4739-9811-81ecd15eb6d7\") " pod="kube-system/cilium-operator-6f9c7c5859-fbmj5" Oct 13 05:38:34.991382 kubelet[2807]: I1013 05:38:34.991351 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbmcx\" (UniqueName: \"kubernetes.io/projected/602b9dd7-dee3-4739-9811-81ecd15eb6d7-kube-api-access-pbmcx\") pod \"cilium-operator-6f9c7c5859-fbmj5\" (UID: \"602b9dd7-dee3-4739-9811-81ecd15eb6d7\") " pod="kube-system/cilium-operator-6f9c7c5859-fbmj5" Oct 13 05:38:35.391095 kubelet[2807]: E1013 05:38:35.391037 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:35.392390 containerd[1626]: time="2025-10-13T05:38:35.392344481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-fbmj5,Uid:602b9dd7-dee3-4739-9811-81ecd15eb6d7,Namespace:kube-system,Attempt:0,}" Oct 13 05:38:35.518576 kubelet[2807]: E1013 05:38:35.518523 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:35.519262 containerd[1626]: time="2025-10-13T05:38:35.519213806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b2ffl,Uid:68f8d6fd-1dd7-4ffa-be70-07ffd63730a3,Namespace:kube-system,Attempt:0,}" Oct 13 05:38:35.583658 kubelet[2807]: E1013 05:38:35.583613 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:35.584304 containerd[1626]: time="2025-10-13T05:38:35.584254215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gc9bc,Uid:e078b99f-9980-42be-8af6-381000d811cb,Namespace:kube-system,Attempt:0,}" Oct 13 05:38:36.642521 containerd[1626]: time="2025-10-13T05:38:36.642464831Z" level=info msg="connecting to shim 8a959cf84193cfdb3c85fe0ea03150c64a1fc31c709e6e9c4a8103545348db84" address="unix:///run/containerd/s/aeefe8b3d129d4405042f62e91c99c3624c3610732d3433fe05758f46f4c0942" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:38:36.728020 systemd[1]: Started cri-containerd-8a959cf84193cfdb3c85fe0ea03150c64a1fc31c709e6e9c4a8103545348db84.scope - libcontainer container 
8a959cf84193cfdb3c85fe0ea03150c64a1fc31c709e6e9c4a8103545348db84. Oct 13 05:38:36.890858 containerd[1626]: time="2025-10-13T05:38:36.890768497Z" level=info msg="connecting to shim ea7a12cd999e078b8f04d07ac8b43276fbb5dad023b15875e55059f0ab005073" address="unix:///run/containerd/s/b48b188970e56445cb1bda13244aeafe08d3e94433a9d4dcd3ea3dff0b923ac6" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:38:36.894430 containerd[1626]: time="2025-10-13T05:38:36.893968146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-fbmj5,Uid:602b9dd7-dee3-4739-9811-81ecd15eb6d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a959cf84193cfdb3c85fe0ea03150c64a1fc31c709e6e9c4a8103545348db84\"" Oct 13 05:38:36.896111 kubelet[2807]: E1013 05:38:36.895082 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:36.906371 containerd[1626]: time="2025-10-13T05:38:36.906257017Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 13 05:38:36.936420 systemd[1]: Started cri-containerd-ea7a12cd999e078b8f04d07ac8b43276fbb5dad023b15875e55059f0ab005073.scope - libcontainer container ea7a12cd999e078b8f04d07ac8b43276fbb5dad023b15875e55059f0ab005073. Oct 13 05:38:36.990412 containerd[1626]: time="2025-10-13T05:38:36.990358311Z" level=info msg="connecting to shim 1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d" address="unix:///run/containerd/s/858c8e2078f0b3c63e22769554e9d535aba51f9ac7553adcfb3841b6516cea0f" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:38:37.034085 systemd[1]: Started cri-containerd-1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d.scope - libcontainer container 1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d. 
Oct 13 05:38:37.119780 containerd[1626]: time="2025-10-13T05:38:37.119714311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gc9bc,Uid:e078b99f-9980-42be-8af6-381000d811cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d\"" Oct 13 05:38:37.120726 kubelet[2807]: E1013 05:38:37.120526 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:37.186312 containerd[1626]: time="2025-10-13T05:38:37.185879893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b2ffl,Uid:68f8d6fd-1dd7-4ffa-be70-07ffd63730a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea7a12cd999e078b8f04d07ac8b43276fbb5dad023b15875e55059f0ab005073\"" Oct 13 05:38:37.186681 kubelet[2807]: E1013 05:38:37.186597 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:37.254906 containerd[1626]: time="2025-10-13T05:38:37.254817419Z" level=info msg="CreateContainer within sandbox \"ea7a12cd999e078b8f04d07ac8b43276fbb5dad023b15875e55059f0ab005073\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 13 05:38:37.469972 containerd[1626]: time="2025-10-13T05:38:37.468704864Z" level=info msg="Container 82239eb9e8696899b97b3e2522690708ce93e0e27ee766b495fd2162a696338d: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:38:37.614290 containerd[1626]: time="2025-10-13T05:38:37.614229555Z" level=info msg="CreateContainer within sandbox \"ea7a12cd999e078b8f04d07ac8b43276fbb5dad023b15875e55059f0ab005073\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"82239eb9e8696899b97b3e2522690708ce93e0e27ee766b495fd2162a696338d\"" Oct 13 05:38:37.615075 containerd[1626]: time="2025-10-13T05:38:37.615007548Z" level=info msg="StartContainer for \"82239eb9e8696899b97b3e2522690708ce93e0e27ee766b495fd2162a696338d\"" Oct 13 05:38:37.617649 containerd[1626]: time="2025-10-13T05:38:37.617588243Z" level=info msg="connecting to shim 82239eb9e8696899b97b3e2522690708ce93e0e27ee766b495fd2162a696338d" address="unix:///run/containerd/s/b48b188970e56445cb1bda13244aeafe08d3e94433a9d4dcd3ea3dff0b923ac6" protocol=ttrpc version=3 Oct 13 05:38:37.657095 systemd[1]: Started cri-containerd-82239eb9e8696899b97b3e2522690708ce93e0e27ee766b495fd2162a696338d.scope - libcontainer container 82239eb9e8696899b97b3e2522690708ce93e0e27ee766b495fd2162a696338d. Oct 13 05:38:37.744991 containerd[1626]: time="2025-10-13T05:38:37.744658500Z" level=info msg="StartContainer for \"82239eb9e8696899b97b3e2522690708ce93e0e27ee766b495fd2162a696338d\" returns successfully" Oct 13 05:38:38.453253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1231879293.mount: Deactivated successfully. 
Oct 13 05:38:38.468460 kubelet[2807]: E1013 05:38:38.468409 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:38.482382 kubelet[2807]: I1013 05:38:38.482259 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b2ffl" podStartSLOduration=4.48223819 podStartE2EDuration="4.48223819s" podCreationTimestamp="2025-10-13 05:38:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:38:38.482122844 +0000 UTC m=+9.281059110" watchObservedRunningTime="2025-10-13 05:38:38.48223819 +0000 UTC m=+9.281174426" Oct 13 05:38:39.004895 kubelet[2807]: E1013 05:38:39.004605 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:39.269404 containerd[1626]: time="2025-10-13T05:38:39.269239295Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:38:39.271115 containerd[1626]: time="2025-10-13T05:38:39.271015634Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Oct 13 05:38:39.272445 containerd[1626]: time="2025-10-13T05:38:39.272395810Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:38:39.273926 containerd[1626]: time="2025-10-13T05:38:39.273884840Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.367578229s" Oct 13 05:38:39.273926 containerd[1626]: time="2025-10-13T05:38:39.273919966Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Oct 13 05:38:39.275626 containerd[1626]: time="2025-10-13T05:38:39.275594845Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 13 05:38:39.282032 containerd[1626]: time="2025-10-13T05:38:39.281627909Z" level=info msg="CreateContainer within sandbox \"8a959cf84193cfdb3c85fe0ea03150c64a1fc31c709e6e9c4a8103545348db84\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 13 05:38:39.294472 containerd[1626]: time="2025-10-13T05:38:39.294392320Z" level=info msg="Container dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:38:39.299237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2214926370.mount: Deactivated successfully. 
Oct 13 05:38:39.306265 containerd[1626]: time="2025-10-13T05:38:39.302970170Z" level=info msg="CreateContainer within sandbox \"8a959cf84193cfdb3c85fe0ea03150c64a1fc31c709e6e9c4a8103545348db84\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb\"" Oct 13 05:38:39.306265 containerd[1626]: time="2025-10-13T05:38:39.303596447Z" level=info msg="StartContainer for \"dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb\"" Oct 13 05:38:39.306265 containerd[1626]: time="2025-10-13T05:38:39.304716463Z" level=info msg="connecting to shim dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb" address="unix:///run/containerd/s/aeefe8b3d129d4405042f62e91c99c3624c3610732d3433fe05758f46f4c0942" protocol=ttrpc version=3 Oct 13 05:38:39.329283 systemd[1]: Started cri-containerd-dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb.scope - libcontainer container dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb. Oct 13 05:38:39.379824 containerd[1626]: time="2025-10-13T05:38:39.379758767Z" level=info msg="StartContainer for \"dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb\" returns successfully" Oct 13 05:38:39.476873 kubelet[2807]: E1013 05:38:39.475895 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:39.476873 kubelet[2807]: E1013 05:38:39.476542 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:39.477457 kubelet[2807]: E1013 05:38:39.476914 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:39.807684 kubelet[2807]: E1013 05:38:39.807641 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:39.824875 kubelet[2807]: I1013 05:38:39.824183 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-fbmj5" podStartSLOduration=3.45454143 podStartE2EDuration="5.824162261s" podCreationTimestamp="2025-10-13 05:38:34 +0000 UTC" firstStartedPulling="2025-10-13 05:38:36.905254451 +0000 UTC m=+7.704190687" lastFinishedPulling="2025-10-13 05:38:39.274875282 +0000 UTC m=+10.073811518" observedRunningTime="2025-10-13 05:38:39.540662279 +0000 UTC m=+10.339598515" watchObservedRunningTime="2025-10-13 05:38:39.824162261 +0000 UTC m=+10.623098498" Oct 13 05:38:40.480608 kubelet[2807]: E1013 05:38:40.480566 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:40.481278 kubelet[2807]: E1013 05:38:40.481126 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:50.285388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3241477283.mount: Deactivated successfully. 
Oct 13 05:38:53.938969 containerd[1626]: time="2025-10-13T05:38:53.936852157Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:38:53.944779 containerd[1626]: time="2025-10-13T05:38:53.944617755Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Oct 13 05:38:53.955882 containerd[1626]: time="2025-10-13T05:38:53.955250645Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:38:54.007741 containerd[1626]: time="2025-10-13T05:38:54.007522696Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.731890752s" Oct 13 05:38:54.007741 containerd[1626]: time="2025-10-13T05:38:54.007594080Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Oct 13 05:38:54.031553 containerd[1626]: time="2025-10-13T05:38:54.031477399Z" level=info msg="CreateContainer within sandbox \"1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 13 05:38:54.070316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2467299905.mount: Deactivated successfully. Oct 13 05:38:54.079393 containerd[1626]: time="2025-10-13T05:38:54.079328788Z" level=info msg="Container 3e932a58c9d8612f1d166787f52837717ee2aee19499c1941a8eb53c967cd8c4: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:38:54.097946 containerd[1626]: time="2025-10-13T05:38:54.096330120Z" level=info msg="CreateContainer within sandbox \"1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3e932a58c9d8612f1d166787f52837717ee2aee19499c1941a8eb53c967cd8c4\"" Oct 13 05:38:54.097946 containerd[1626]: time="2025-10-13T05:38:54.097420516Z" level=info msg="StartContainer for \"3e932a58c9d8612f1d166787f52837717ee2aee19499c1941a8eb53c967cd8c4\"" Oct 13 05:38:54.098807 containerd[1626]: time="2025-10-13T05:38:54.098745534Z" level=info msg="connecting to shim 3e932a58c9d8612f1d166787f52837717ee2aee19499c1941a8eb53c967cd8c4" address="unix:///run/containerd/s/858c8e2078f0b3c63e22769554e9d535aba51f9ac7553adcfb3841b6516cea0f" protocol=ttrpc version=3 Oct 13 05:38:54.146090 systemd[1]: Started cri-containerd-3e932a58c9d8612f1d166787f52837717ee2aee19499c1941a8eb53c967cd8c4.scope - libcontainer container 3e932a58c9d8612f1d166787f52837717ee2aee19499c1941a8eb53c967cd8c4. 
Oct 13 05:38:54.273374 containerd[1626]: time="2025-10-13T05:38:54.273289105Z" level=info msg="StartContainer for \"3e932a58c9d8612f1d166787f52837717ee2aee19499c1941a8eb53c967cd8c4\" returns successfully" Oct 13 05:38:54.289509 systemd[1]: cri-containerd-3e932a58c9d8612f1d166787f52837717ee2aee19499c1941a8eb53c967cd8c4.scope: Deactivated successfully. Oct 13 05:38:54.291772 containerd[1626]: time="2025-10-13T05:38:54.291729008Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e932a58c9d8612f1d166787f52837717ee2aee19499c1941a8eb53c967cd8c4\" id:\"3e932a58c9d8612f1d166787f52837717ee2aee19499c1941a8eb53c967cd8c4\" pid:3294 exited_at:{seconds:1760333934 nanos:291129333}" Oct 13 05:38:54.292172 containerd[1626]: time="2025-10-13T05:38:54.292135371Z" level=info msg="received exit event container_id:\"3e932a58c9d8612f1d166787f52837717ee2aee19499c1941a8eb53c967cd8c4\" id:\"3e932a58c9d8612f1d166787f52837717ee2aee19499c1941a8eb53c967cd8c4\" pid:3294 exited_at:{seconds:1760333934 nanos:291129333}" Oct 13 05:38:54.555595 kubelet[2807]: E1013 05:38:54.554888 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:55.069494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e932a58c9d8612f1d166787f52837717ee2aee19499c1941a8eb53c967cd8c4-rootfs.mount: Deactivated successfully. Oct 13 05:38:55.561862 kubelet[2807]: E1013 05:38:55.561787 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:55.667880 containerd[1626]: time="2025-10-13T05:38:55.666925178Z" level=info msg="CreateContainer within sandbox \"1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 13 05:38:55.846327 containerd[1626]: time="2025-10-13T05:38:55.846190091Z" level=info msg="Container 030dc76b709512e1cf3a035002c8c4ccfff5a16904631cb8759403315fd4b27f: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:38:55.850530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2217516530.mount: Deactivated successfully. Oct 13 05:38:55.856852 containerd[1626]: time="2025-10-13T05:38:55.856796047Z" level=info msg="CreateContainer within sandbox \"1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"030dc76b709512e1cf3a035002c8c4ccfff5a16904631cb8759403315fd4b27f\"" Oct 13 05:38:55.857458 containerd[1626]: time="2025-10-13T05:38:55.857418334Z" level=info msg="StartContainer for \"030dc76b709512e1cf3a035002c8c4ccfff5a16904631cb8759403315fd4b27f\"" Oct 13 05:38:55.858823 containerd[1626]: time="2025-10-13T05:38:55.858445603Z" level=info msg="connecting to shim 030dc76b709512e1cf3a035002c8c4ccfff5a16904631cb8759403315fd4b27f" address="unix:///run/containerd/s/858c8e2078f0b3c63e22769554e9d535aba51f9ac7553adcfb3841b6516cea0f" protocol=ttrpc version=3 Oct 13 05:38:55.887538 systemd[1]: Started cri-containerd-030dc76b709512e1cf3a035002c8c4ccfff5a16904631cb8759403315fd4b27f.scope - libcontainer container 030dc76b709512e1cf3a035002c8c4ccfff5a16904631cb8759403315fd4b27f. 
Oct 13 05:38:55.935552 containerd[1626]: time="2025-10-13T05:38:55.935494847Z" level=info msg="StartContainer for \"030dc76b709512e1cf3a035002c8c4ccfff5a16904631cb8759403315fd4b27f\" returns successfully" Oct 13 05:38:55.947924 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 13 05:38:55.948266 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:38:55.948369 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Oct 13 05:38:55.953192 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 05:38:55.954872 containerd[1626]: time="2025-10-13T05:38:55.954809718Z" level=info msg="TaskExit event in podsandbox handler container_id:\"030dc76b709512e1cf3a035002c8c4ccfff5a16904631cb8759403315fd4b27f\" id:\"030dc76b709512e1cf3a035002c8c4ccfff5a16904631cb8759403315fd4b27f\" pid:3342 exited_at:{seconds:1760333935 nanos:954421209}" Oct 13 05:38:55.955013 containerd[1626]: time="2025-10-13T05:38:55.954949080Z" level=info msg="received exit event container_id:\"030dc76b709512e1cf3a035002c8c4ccfff5a16904631cb8759403315fd4b27f\" id:\"030dc76b709512e1cf3a035002c8c4ccfff5a16904631cb8759403315fd4b27f\" pid:3342 exited_at:{seconds:1760333935 nanos:954421209}" Oct 13 05:38:55.957236 systemd[1]: cri-containerd-030dc76b709512e1cf3a035002c8c4ccfff5a16904631cb8759403315fd4b27f.scope: Deactivated successfully. Oct 13 05:38:55.983940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-030dc76b709512e1cf3a035002c8c4ccfff5a16904631cb8759403315fd4b27f-rootfs.mount: Deactivated successfully. Oct 13 05:38:56.005963 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:38:56.568436 kubelet[2807]: E1013 05:38:56.568238 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:56.586659 containerd[1626]: time="2025-10-13T05:38:56.585663016Z" level=info msg="CreateContainer within sandbox \"1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 13 05:38:56.607759 containerd[1626]: time="2025-10-13T05:38:56.606331115Z" level=info msg="Container 19c1feeca0d3d99488207163518526d4bbd0c81baaf08d604fd81496143ee1b5: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:38:56.627715 containerd[1626]: time="2025-10-13T05:38:56.627664552Z" level=info msg="CreateContainer within sandbox \"1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"19c1feeca0d3d99488207163518526d4bbd0c81baaf08d604fd81496143ee1b5\"" Oct 13 05:38:56.628386 containerd[1626]: time="2025-10-13T05:38:56.628347435Z" level=info msg="StartContainer for \"19c1feeca0d3d99488207163518526d4bbd0c81baaf08d604fd81496143ee1b5\"" Oct 13 05:38:56.630079 containerd[1626]: time="2025-10-13T05:38:56.630048197Z" level=info msg="connecting to shim 19c1feeca0d3d99488207163518526d4bbd0c81baaf08d604fd81496143ee1b5" address="unix:///run/containerd/s/858c8e2078f0b3c63e22769554e9d535aba51f9ac7553adcfb3841b6516cea0f" protocol=ttrpc version=3 Oct 13 05:38:56.660136 systemd[1]: Started cri-containerd-19c1feeca0d3d99488207163518526d4bbd0c81baaf08d604fd81496143ee1b5.scope - libcontainer container 19c1feeca0d3d99488207163518526d4bbd0c81baaf08d604fd81496143ee1b5. 
Oct 13 05:38:56.725635 systemd[1]: cri-containerd-19c1feeca0d3d99488207163518526d4bbd0c81baaf08d604fd81496143ee1b5.scope: Deactivated successfully. Oct 13 05:38:56.726806 containerd[1626]: time="2025-10-13T05:38:56.726732952Z" level=info msg="received exit event container_id:\"19c1feeca0d3d99488207163518526d4bbd0c81baaf08d604fd81496143ee1b5\" id:\"19c1feeca0d3d99488207163518526d4bbd0c81baaf08d604fd81496143ee1b5\" pid:3388 exited_at:{seconds:1760333936 nanos:726083193}" Oct 13 05:38:56.726806 containerd[1626]: time="2025-10-13T05:38:56.726782956Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19c1feeca0d3d99488207163518526d4bbd0c81baaf08d604fd81496143ee1b5\" id:\"19c1feeca0d3d99488207163518526d4bbd0c81baaf08d604fd81496143ee1b5\" pid:3388 exited_at:{seconds:1760333936 nanos:726083193}" Oct 13 05:38:56.733539 containerd[1626]: time="2025-10-13T05:38:56.733487559Z" level=info msg="StartContainer for \"19c1feeca0d3d99488207163518526d4bbd0c81baaf08d604fd81496143ee1b5\" returns successfully" Oct 13 05:38:56.765254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19c1feeca0d3d99488207163518526d4bbd0c81baaf08d604fd81496143ee1b5-rootfs.mount: Deactivated successfully. Oct 13 05:38:57.577693 kubelet[2807]: E1013 05:38:57.576239 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:57.826765 containerd[1626]: time="2025-10-13T05:38:57.826685367Z" level=info msg="CreateContainer within sandbox \"1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 13 05:38:58.020228 containerd[1626]: time="2025-10-13T05:38:58.020155123Z" level=info msg="Container ddf5d4f463be3724dbd22782521c392d3f80f91ccabde948ccb10f401379f518: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:38:58.034002 containerd[1626]: time="2025-10-13T05:38:58.033918400Z" level=info msg="CreateContainer within sandbox \"1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ddf5d4f463be3724dbd22782521c392d3f80f91ccabde948ccb10f401379f518\"" Oct 13 05:38:58.036864 containerd[1626]: time="2025-10-13T05:38:58.034957340Z" level=info msg="StartContainer for \"ddf5d4f463be3724dbd22782521c392d3f80f91ccabde948ccb10f401379f518\"" Oct 13 05:38:58.037725 containerd[1626]: time="2025-10-13T05:38:58.037494682Z" level=info msg="connecting to shim ddf5d4f463be3724dbd22782521c392d3f80f91ccabde948ccb10f401379f518" address="unix:///run/containerd/s/858c8e2078f0b3c63e22769554e9d535aba51f9ac7553adcfb3841b6516cea0f" protocol=ttrpc version=3 Oct 13 05:38:58.086131 systemd[1]: Started cri-containerd-ddf5d4f463be3724dbd22782521c392d3f80f91ccabde948ccb10f401379f518.scope - libcontainer container ddf5d4f463be3724dbd22782521c392d3f80f91ccabde948ccb10f401379f518. Oct 13 05:38:58.170774 systemd[1]: cri-containerd-ddf5d4f463be3724dbd22782521c392d3f80f91ccabde948ccb10f401379f518.scope: Deactivated successfully. 
Oct 13 05:38:58.174296 containerd[1626]: time="2025-10-13T05:38:58.173129960Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ddf5d4f463be3724dbd22782521c392d3f80f91ccabde948ccb10f401379f518\" id:\"ddf5d4f463be3724dbd22782521c392d3f80f91ccabde948ccb10f401379f518\" pid:3427 exited_at:{seconds:1760333938 nanos:172827573}" Oct 13 05:38:58.219221 containerd[1626]: time="2025-10-13T05:38:58.219161681Z" level=info msg="received exit event container_id:\"ddf5d4f463be3724dbd22782521c392d3f80f91ccabde948ccb10f401379f518\" id:\"ddf5d4f463be3724dbd22782521c392d3f80f91ccabde948ccb10f401379f518\" pid:3427 exited_at:{seconds:1760333938 nanos:172827573}" Oct 13 05:38:58.239882 containerd[1626]: time="2025-10-13T05:38:58.239791905Z" level=info msg="StartContainer for \"ddf5d4f463be3724dbd22782521c392d3f80f91ccabde948ccb10f401379f518\" returns successfully" Oct 13 05:38:58.261350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddf5d4f463be3724dbd22782521c392d3f80f91ccabde948ccb10f401379f518-rootfs.mount: Deactivated successfully. Oct 13 05:38:58.585471 kubelet[2807]: E1013 05:38:58.585424 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:58.594866 containerd[1626]: time="2025-10-13T05:38:58.593811525Z" level=info msg="CreateContainer within sandbox \"1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 13 05:38:58.620610 containerd[1626]: time="2025-10-13T05:38:58.620548890Z" level=info msg="Container 98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:38:58.645788 containerd[1626]: time="2025-10-13T05:38:58.645388622Z" level=info msg="CreateContainer within sandbox \"1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b\"" Oct 13 05:38:58.646315 containerd[1626]: time="2025-10-13T05:38:58.646283362Z" level=info msg="StartContainer for \"98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b\"" Oct 13 05:38:58.647458 containerd[1626]: time="2025-10-13T05:38:58.647425185Z" level=info msg="connecting to shim 98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b" address="unix:///run/containerd/s/858c8e2078f0b3c63e22769554e9d535aba51f9ac7553adcfb3841b6516cea0f" protocol=ttrpc version=3 Oct 13 05:38:58.678124 systemd[1]: Started cri-containerd-98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b.scope - libcontainer container 98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b. 
Oct 13 05:38:58.722242 containerd[1626]: time="2025-10-13T05:38:58.722194414Z" level=info msg="StartContainer for \"98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b\" returns successfully" Oct 13 05:38:58.830721 containerd[1626]: time="2025-10-13T05:38:58.830594774Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b\" id:\"aeb91808454d1c8de8376a12f50152ca7d367316a8fdcd04f8d71c77d0990778\" pid:3494 exited_at:{seconds:1760333938 nanos:830178242}" Oct 13 05:38:58.877225 kubelet[2807]: I1013 05:38:58.877091 2807 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 13 05:38:58.939993 systemd[1]: Created slice kubepods-burstable-pod5bd1c6b4_17c2_4fd6_a92f_2b93614ac9f0.slice - libcontainer container kubepods-burstable-pod5bd1c6b4_17c2_4fd6_a92f_2b93614ac9f0.slice. Oct 13 05:38:58.948289 systemd[1]: Created slice kubepods-burstable-pod132e886a_a37d_4890_babd_b5457793003b.slice - libcontainer container kubepods-burstable-pod132e886a_a37d_4890_babd_b5457793003b.slice. Oct 13 05:38:59.034372 kubelet[2807]: I1013 05:38:59.034031 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/132e886a-a37d-4890-babd-b5457793003b-config-volume\") pod \"coredns-66bc5c9577-6fpzc\" (UID: \"132e886a-a37d-4890-babd-b5457793003b\") " pod="kube-system/coredns-66bc5c9577-6fpzc" Oct 13 05:38:59.034372 kubelet[2807]: I1013 05:38:59.034167 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bd1c6b4-17c2-4fd6-a92f-2b93614ac9f0-config-volume\") pod \"coredns-66bc5c9577-wx7hl\" (UID: \"5bd1c6b4-17c2-4fd6-a92f-2b93614ac9f0\") " pod="kube-system/coredns-66bc5c9577-wx7hl" Oct 13 05:38:59.034372 kubelet[2807]: I1013 05:38:59.034197 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntw7j\" (UniqueName: \"kubernetes.io/projected/5bd1c6b4-17c2-4fd6-a92f-2b93614ac9f0-kube-api-access-ntw7j\") pod \"coredns-66bc5c9577-wx7hl\" (UID: \"5bd1c6b4-17c2-4fd6-a92f-2b93614ac9f0\") " pod="kube-system/coredns-66bc5c9577-wx7hl" Oct 13 05:38:59.034372 kubelet[2807]: I1013 05:38:59.034299 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv2qc\" (UniqueName: \"kubernetes.io/projected/132e886a-a37d-4890-babd-b5457793003b-kube-api-access-fv2qc\") pod \"coredns-66bc5c9577-6fpzc\" (UID: \"132e886a-a37d-4890-babd-b5457793003b\") " pod="kube-system/coredns-66bc5c9577-6fpzc" Oct 13 05:38:59.354549 kubelet[2807]: E1013 05:38:59.354498 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:59.355577 containerd[1626]: time="2025-10-13T05:38:59.355534712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wx7hl,Uid:5bd1c6b4-17c2-4fd6-a92f-2b93614ac9f0,Namespace:kube-system,Attempt:0,}" Oct 13 05:38:59.397099 kubelet[2807]: E1013 05:38:59.397051 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:59.398567 containerd[1626]: time="2025-10-13T05:38:59.398416608Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-6fpzc,Uid:132e886a-a37d-4890-babd-b5457793003b,Namespace:kube-system,Attempt:0,}" Oct 13 05:38:59.600775 kubelet[2807]: E1013 05:38:59.600733 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:38:59.624803 kubelet[2807]: I1013 05:38:59.624203 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gc9bc" podStartSLOduration=8.735771649 podStartE2EDuration="25.624185996s" podCreationTimestamp="2025-10-13 05:38:34 +0000 UTC" firstStartedPulling="2025-10-13 05:38:37.121088616 +0000 UTC m=+7.920024852" lastFinishedPulling="2025-10-13 05:38:54.009502973 +0000 UTC m=+24.808439199" observedRunningTime="2025-10-13 05:38:59.622092037 +0000 UTC m=+30.421028293" watchObservedRunningTime="2025-10-13 05:38:59.624185996 +0000 UTC m=+30.423122262" Oct 13 05:39:00.602763 kubelet[2807]: E1013 05:39:00.602689 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:39:01.217253 systemd-networkd[1538]: cilium_host: Link UP Oct 13 05:39:01.217419 systemd-networkd[1538]: cilium_net: Link UP Oct 13 05:39:01.217611 systemd-networkd[1538]: cilium_net: Gained carrier Oct 13 05:39:01.217803 systemd-networkd[1538]: cilium_host: Gained carrier Oct 13 05:39:01.343852 systemd-networkd[1538]: cilium_vxlan: Link UP Oct 13 05:39:01.343867 systemd-networkd[1538]: cilium_vxlan: Gained carrier Oct 13 05:39:01.604799 kubelet[2807]: E1013 05:39:01.604715 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:39:01.616868 kernel: NET: Registered PF_ALG protocol family Oct 13 05:39:01.680104 systemd-networkd[1538]: cilium_net: Gained IPv6LL Oct 13 05:39:02.097875 systemd-networkd[1538]: cilium_host: Gained IPv6LL Oct 13 05:39:02.665138 systemd-networkd[1538]: lxc_health: Link UP Oct 13 05:39:02.668101 systemd-networkd[1538]: lxc_health: Gained carrier Oct 13 05:39:02.936155 systemd-networkd[1538]: lxc33679d562fe4: Link UP Oct 13 05:39:02.959011 kernel: eth0: renamed from tmp522f7 Oct 13 05:39:02.960777 systemd-networkd[1538]: lxc33679d562fe4: Gained carrier Oct 13 05:39:02.990959 kernel: eth0: renamed from tmp52886 Oct 13 05:39:02.992299 systemd-networkd[1538]: lxcd8fa168a18f6: Link UP Oct 13 05:39:02.992823 systemd-networkd[1538]: lxcd8fa168a18f6: Gained carrier Oct 13 05:39:03.056455 systemd-networkd[1538]: cilium_vxlan: Gained IPv6LL Oct 13 05:39:03.374185 kubelet[2807]: E1013 05:39:03.374116 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:39:03.612999 kubelet[2807]: E1013 05:39:03.612941 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:39:03.823080 systemd-networkd[1538]: lxc_health: Gained IPv6LL Oct 13 05:39:04.143159 systemd-networkd[1538]: lxcd8fa168a18f6: Gained IPv6LL Oct 13 05:39:04.591936 systemd-networkd[1538]: lxc33679d562fe4: Gained IPv6LL Oct 13 05:39:04.619454 kubelet[2807]: E1013 05:39:04.619007 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:39:07.235631 containerd[1626]: time="2025-10-13T05:39:07.235131403Z" level=info msg="connecting to shim 5288693b38adaec38c905499945d04aa95742f178f9444b08cd1bbe109e49944" address="unix:///run/containerd/s/b126b5a5929e8c798ea67331d241dff94d58949b1cf2067fe5846304268f83eb" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:39:07.256721 containerd[1626]: time="2025-10-13T05:39:07.256561544Z" level=info msg="connecting to shim 522f78a2697e6c6a26a9e1a3b9b79ea513273a2a9a06c16e74e492c8ebb8e288" address="unix:///run/containerd/s/21859553e36bfd7bc431b75e65f5d94df9b8196bd2576c4c575c2ed253ad13b8" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:39:07.269068 systemd[1]: Started cri-containerd-5288693b38adaec38c905499945d04aa95742f178f9444b08cd1bbe109e49944.scope - libcontainer container 5288693b38adaec38c905499945d04aa95742f178f9444b08cd1bbe109e49944. Oct 13 05:39:07.292262 systemd[1]: Started cri-containerd-522f78a2697e6c6a26a9e1a3b9b79ea513273a2a9a06c16e74e492c8ebb8e288.scope - libcontainer container 522f78a2697e6c6a26a9e1a3b9b79ea513273a2a9a06c16e74e492c8ebb8e288. Oct 13 05:39:07.299363 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:39:07.312282 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:39:07.341876 containerd[1626]: time="2025-10-13T05:39:07.341733705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6fpzc,Uid:132e886a-a37d-4890-babd-b5457793003b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5288693b38adaec38c905499945d04aa95742f178f9444b08cd1bbe109e49944\"" Oct 13 05:39:07.342756 kubelet[2807]: E1013 05:39:07.342702 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:39:07.350246 containerd[1626]: time="2025-10-13T05:39:07.350079612Z" level=info msg="CreateContainer within sandbox \"5288693b38adaec38c905499945d04aa95742f178f9444b08cd1bbe109e49944\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:39:07.358821 containerd[1626]: time="2025-10-13T05:39:07.358755386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wx7hl,Uid:5bd1c6b4-17c2-4fd6-a92f-2b93614ac9f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"522f78a2697e6c6a26a9e1a3b9b79ea513273a2a9a06c16e74e492c8ebb8e288\"" Oct 13 05:39:07.360149 kubelet[2807]: E1013 05:39:07.359908 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:39:07.364670 containerd[1626]: time="2025-10-13T05:39:07.364641187Z" level=info msg="CreateContainer within sandbox \"522f78a2697e6c6a26a9e1a3b9b79ea513273a2a9a06c16e74e492c8ebb8e288\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:39:07.381427 containerd[1626]: time="2025-10-13T05:39:07.381360350Z" level=info msg="Container a69b715905cecff880e58d038021deeadcce7025481e85ffc2f60b1b8cf6c66b: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:39:07.427049 containerd[1626]: time="2025-10-13T05:39:07.426974196Z" level=info msg="CreateContainer within sandbox \"5288693b38adaec38c905499945d04aa95742f178f9444b08cd1bbe109e49944\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a69b715905cecff880e58d038021deeadcce7025481e85ffc2f60b1b8cf6c66b\"" Oct 13 05:39:07.427687 containerd[1626]: time="2025-10-13T05:39:07.427639946Z" level=info msg="StartContainer for \"a69b715905cecff880e58d038021deeadcce7025481e85ffc2f60b1b8cf6c66b\"" Oct 13 05:39:07.428870 containerd[1626]: time="2025-10-13T05:39:07.428803829Z" level=info msg="connecting to shim a69b715905cecff880e58d038021deeadcce7025481e85ffc2f60b1b8cf6c66b" address="unix:///run/containerd/s/b126b5a5929e8c798ea67331d241dff94d58949b1cf2067fe5846304268f83eb" protocol=ttrpc version=3 Oct 13 05:39:07.448026 containerd[1626]: time="2025-10-13T05:39:07.447892787Z" level=info msg="Container 6b214f0e19e49229641d5de83101fb79a9492dd9ff43da81094aff4ba420ffd2: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:39:07.456063 containerd[1626]: time="2025-10-13T05:39:07.456017047Z" level=info msg="CreateContainer within sandbox \"522f78a2697e6c6a26a9e1a3b9b79ea513273a2a9a06c16e74e492c8ebb8e288\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6b214f0e19e49229641d5de83101fb79a9492dd9ff43da81094aff4ba420ffd2\"" Oct 13 05:39:07.457415 containerd[1626]: time="2025-10-13T05:39:07.457379012Z" level=info msg="StartContainer for \"6b214f0e19e49229641d5de83101fb79a9492dd9ff43da81094aff4ba420ffd2\"" Oct 13 05:39:07.458896 systemd[1]: Started cri-containerd-a69b715905cecff880e58d038021deeadcce7025481e85ffc2f60b1b8cf6c66b.scope - libcontainer container a69b715905cecff880e58d038021deeadcce7025481e85ffc2f60b1b8cf6c66b. Oct 13 05:39:07.461303 containerd[1626]: time="2025-10-13T05:39:07.461263378Z" level=info msg="connecting to shim 6b214f0e19e49229641d5de83101fb79a9492dd9ff43da81094aff4ba420ffd2" address="unix:///run/containerd/s/21859553e36bfd7bc431b75e65f5d94df9b8196bd2576c4c575c2ed253ad13b8" protocol=ttrpc version=3 Oct 13 05:39:07.497122 systemd[1]: Started cri-containerd-6b214f0e19e49229641d5de83101fb79a9492dd9ff43da81094aff4ba420ffd2.scope - libcontainer container 6b214f0e19e49229641d5de83101fb79a9492dd9ff43da81094aff4ba420ffd2. 
Oct 13 05:39:07.530696 containerd[1626]: time="2025-10-13T05:39:07.530646012Z" level=info msg="StartContainer for \"a69b715905cecff880e58d038021deeadcce7025481e85ffc2f60b1b8cf6c66b\" returns successfully" Oct 13 05:39:07.547108 containerd[1626]: time="2025-10-13T05:39:07.547051967Z" level=info msg="StartContainer for \"6b214f0e19e49229641d5de83101fb79a9492dd9ff43da81094aff4ba420ffd2\" returns successfully" Oct 13 05:39:07.628708 kubelet[2807]: E1013 05:39:07.628645 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:39:07.631107 kubelet[2807]: E1013 05:39:07.631065 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:39:07.753931 kubelet[2807]: I1013 05:39:07.753120 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wx7hl" podStartSLOduration=33.753100438 podStartE2EDuration="33.753100438s" podCreationTimestamp="2025-10-13 05:38:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:39:07.730667066 +0000 UTC m=+38.529603332" watchObservedRunningTime="2025-10-13 05:39:07.753100438 +0000 UTC m=+38.552036704" Oct 13 05:39:07.753931 kubelet[2807]: I1013 05:39:07.753473 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6fpzc" podStartSLOduration=33.753466264 podStartE2EDuration="33.753466264s" podCreationTimestamp="2025-10-13 05:38:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:39:07.752949616 +0000 UTC m=+38.551885852" watchObservedRunningTime="2025-10-13 05:39:07.753466264 +0000 UTC m=+38.552402510" Oct 13 05:39:08.634864 kubelet[2807]: E1013 05:39:08.634807 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:39:08.635627 kubelet[2807]: E1013 05:39:08.635127 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:39:09.635746 kubelet[2807]: E1013 05:39:09.635685 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:39:09.635746 kubelet[2807]: E1013 05:39:09.635744 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:39:13.239041 systemd[1]: Started sshd@7-10.0.0.130:22-10.0.0.1:52830.service - OpenSSH per-connection server daemon (10.0.0.1:52830). Oct 13 05:39:13.350403 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 52830 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:39:13.352650 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:13.359462 systemd-logind[1604]: New session 8 of user core. Oct 13 05:39:13.370181 systemd[1]: Started session-8.scope - Session 8 of User core. 
Oct 13 05:39:13.887979 sshd[4137]: Connection closed by 10.0.0.1 port 52830 Oct 13 05:39:13.888593 sshd-session[4134]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:13.899247 systemd[1]: sshd@7-10.0.0.130:22-10.0.0.1:52830.service: Deactivated successfully. Oct 13 05:39:13.902205 systemd[1]: session-8.scope: Deactivated successfully. Oct 13 05:39:13.903657 systemd-logind[1604]: Session 8 logged out. Waiting for processes to exit. Oct 13 05:39:13.906078 systemd-logind[1604]: Removed session 8. Oct 13 05:39:18.911693 systemd[1]: Started sshd@8-10.0.0.130:22-10.0.0.1:52834.service - OpenSSH per-connection server daemon (10.0.0.1:52834). Oct 13 05:39:18.987179 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 52834 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:39:18.989426 sshd-session[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:18.996709 systemd-logind[1604]: New session 9 of user core. Oct 13 05:39:19.008260 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 13 05:39:19.178872 sshd[4156]: Connection closed by 10.0.0.1 port 52834 Oct 13 05:39:19.182231 sshd-session[4153]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:19.187415 systemd[1]: sshd@8-10.0.0.130:22-10.0.0.1:52834.service: Deactivated successfully. Oct 13 05:39:19.190319 systemd[1]: session-9.scope: Deactivated successfully. Oct 13 05:39:19.193662 systemd-logind[1604]: Session 9 logged out. Waiting for processes to exit. Oct 13 05:39:19.195650 systemd-logind[1604]: Removed session 9. Oct 13 05:39:24.198907 systemd[1]: Started sshd@9-10.0.0.130:22-10.0.0.1:41614.service - OpenSSH per-connection server daemon (10.0.0.1:41614). Oct 13 05:39:24.274595 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 41614 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:39:24.276658 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:24.281815 systemd-logind[1604]: New session 10 of user core. Oct 13 05:39:24.288809 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 13 05:39:24.421523 sshd[4174]: Connection closed by 10.0.0.1 port 41614 Oct 13 05:39:24.424042 sshd-session[4171]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:24.431815 systemd[1]: sshd@9-10.0.0.130:22-10.0.0.1:41614.service: Deactivated successfully. Oct 13 05:39:24.434575 systemd[1]: session-10.scope: Deactivated successfully. Oct 13 05:39:24.435714 systemd-logind[1604]: Session 10 logged out. Waiting for processes to exit. Oct 13 05:39:24.437236 systemd-logind[1604]: Removed session 10. Oct 13 05:39:29.453916 systemd[1]: Started sshd@10-10.0.0.130:22-10.0.0.1:41622.service - OpenSSH per-connection server daemon (10.0.0.1:41622). Oct 13 05:39:29.530732 sshd[4190]: Accepted publickey for core from 10.0.0.1 port 41622 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:39:29.533081 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:29.542246 systemd-logind[1604]: New session 11 of user core. Oct 13 05:39:29.556127 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 13 05:39:29.688912 sshd[4193]: Connection closed by 10.0.0.1 port 41622 Oct 13 05:39:29.689310 sshd-session[4190]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:29.695182 systemd[1]: sshd@10-10.0.0.130:22-10.0.0.1:41622.service: Deactivated successfully. Oct 13 05:39:29.697481 systemd[1]: session-11.scope: Deactivated successfully. Oct 13 05:39:29.698498 systemd-logind[1604]: Session 11 logged out. Waiting for processes to exit. Oct 13 05:39:29.700445 systemd-logind[1604]: Removed session 11. Oct 13 05:39:34.714780 systemd[1]: Started sshd@11-10.0.0.130:22-10.0.0.1:48288.service - OpenSSH per-connection server daemon (10.0.0.1:48288). Oct 13 05:39:34.771512 sshd[4207]: Accepted publickey for core from 10.0.0.1 port 48288 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:39:34.773577 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:34.779207 systemd-logind[1604]: New session 12 of user core. Oct 13 05:39:34.789147 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 13 05:39:34.916976 sshd[4210]: Connection closed by 10.0.0.1 port 48288 Oct 13 05:39:34.917401 sshd-session[4207]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:34.929232 systemd[1]: sshd@11-10.0.0.130:22-10.0.0.1:48288.service: Deactivated successfully. Oct 13 05:39:34.931811 systemd[1]: session-12.scope: Deactivated successfully. Oct 13 05:39:34.933050 systemd-logind[1604]: Session 12 logged out. Waiting for processes to exit. Oct 13 05:39:34.935625 systemd-logind[1604]: Removed session 12. Oct 13 05:39:34.937481 systemd[1]: Started sshd@12-10.0.0.130:22-10.0.0.1:48296.service - OpenSSH per-connection server daemon (10.0.0.1:48296). Oct 13 05:39:35.001920 sshd[4224]: Accepted publickey for core from 10.0.0.1 port 48296 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:39:35.003779 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:35.009736 systemd-logind[1604]: New session 13 of user core. Oct 13 05:39:35.017001 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 13 05:39:35.191801 sshd[4227]: Connection closed by 10.0.0.1 port 48296 Oct 13 05:39:35.192508 sshd-session[4224]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:35.206232 systemd[1]: sshd@12-10.0.0.130:22-10.0.0.1:48296.service: Deactivated successfully. Oct 13 05:39:35.211341 systemd[1]: session-13.scope: Deactivated successfully. Oct 13 05:39:35.213969 systemd-logind[1604]: Session 13 logged out. Waiting for processes to exit. Oct 13 05:39:35.217452 systemd-logind[1604]: Removed session 13. Oct 13 05:39:35.221156 systemd[1]: Started sshd@13-10.0.0.130:22-10.0.0.1:48300.service - OpenSSH per-connection server daemon (10.0.0.1:48300). Oct 13 05:39:35.293049 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 48300 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:39:35.295375 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:35.302630 systemd-logind[1604]: New session 14 of user core. Oct 13 05:39:35.312022 systemd[1]: Started session-14.scope - Session 14 of User core. 
Oct 13 05:39:35.481744 sshd[4241]: Connection closed by 10.0.0.1 port 48300 Oct 13 05:39:35.482182 sshd-session[4238]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:35.486690 systemd[1]: sshd@13-10.0.0.130:22-10.0.0.1:48300.service: Deactivated successfully. Oct 13 05:39:35.489802 systemd[1]: session-14.scope: Deactivated successfully. Oct 13 05:39:35.492464 systemd-logind[1604]: Session 14 logged out. Waiting for processes to exit. Oct 13 05:39:35.495065 systemd-logind[1604]: Removed session 14. Oct 13 05:39:37.416436 kubelet[2807]: E1013 05:39:37.416382 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:39:40.513697 systemd[1]: Started sshd@14-10.0.0.130:22-10.0.0.1:48316.service - OpenSSH per-connection server daemon (10.0.0.1:48316). Oct 13 05:39:40.673172 sshd[4257]: Accepted publickey for core from 10.0.0.1 port 48316 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:39:40.678179 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:40.696220 systemd-logind[1604]: New session 15 of user core. Oct 13 05:39:40.710203 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 13 05:39:40.974344 sshd[4260]: Connection closed by 10.0.0.1 port 48316 Oct 13 05:39:40.974575 sshd-session[4257]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:40.980550 systemd[1]: sshd@14-10.0.0.130:22-10.0.0.1:48316.service: Deactivated successfully. Oct 13 05:39:40.984669 systemd[1]: session-15.scope: Deactivated successfully. Oct 13 05:39:40.988875 systemd-logind[1604]: Session 15 logged out. Waiting for processes to exit. Oct 13 05:39:40.998639 systemd-logind[1604]: Removed session 15. Oct 13 05:39:45.994235 systemd[1]: Started sshd@15-10.0.0.130:22-10.0.0.1:38288.service - OpenSSH per-connection server daemon (10.0.0.1:38288). Oct 13 05:39:46.067341 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 38288 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:39:46.069375 sshd-session[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:46.075101 systemd-logind[1604]: New session 16 of user core. Oct 13 05:39:46.090140 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 13 05:39:46.264437 sshd[4276]: Connection closed by 10.0.0.1 port 38288 Oct 13 05:39:46.264854 sshd-session[4273]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:46.270441 systemd[1]: sshd@15-10.0.0.130:22-10.0.0.1:38288.service: Deactivated successfully. Oct 13 05:39:46.273090 systemd[1]: session-16.scope: Deactivated successfully. Oct 13 05:39:46.275003 systemd-logind[1604]: Session 16 logged out. Waiting for processes to exit. Oct 13 05:39:46.276754 systemd-logind[1604]: Removed session 16. Oct 13 05:39:51.293279 systemd[1]: Started sshd@16-10.0.0.130:22-10.0.0.1:38302.service - OpenSSH per-connection server daemon (10.0.0.1:38302). Oct 13 05:39:51.374979 sshd[4290]: Accepted publickey for core from 10.0.0.1 port 38302 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:39:51.374950 sshd-session[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:51.391031 systemd-logind[1604]: New session 17 of user core. Oct 13 05:39:51.408244 systemd[1]: Started session-17.scope - Session 17 of User core. 
Oct 13 05:39:51.584073 sshd[4293]: Connection closed by 10.0.0.1 port 38302 Oct 13 05:39:51.583218 sshd-session[4290]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:51.603487 systemd[1]: sshd@16-10.0.0.130:22-10.0.0.1:38302.service: Deactivated successfully. Oct 13 05:39:51.605957 systemd[1]: session-17.scope: Deactivated successfully. Oct 13 05:39:51.607334 systemd-logind[1604]: Session 17 logged out. Waiting for processes to exit. Oct 13 05:39:51.614946 systemd-logind[1604]: Removed session 17. Oct 13 05:39:51.616808 systemd[1]: Started sshd@17-10.0.0.130:22-10.0.0.1:38312.service - OpenSSH per-connection server daemon (10.0.0.1:38312). Oct 13 05:39:51.680210 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 38312 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:39:51.682536 sshd-session[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:51.696399 systemd-logind[1604]: New session 18 of user core. Oct 13 05:39:51.712991 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 13 05:39:52.119299 sshd[4309]: Connection closed by 10.0.0.1 port 38312 Oct 13 05:39:52.119673 sshd-session[4306]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:52.132765 systemd[1]: sshd@17-10.0.0.130:22-10.0.0.1:38312.service: Deactivated successfully. Oct 13 05:39:52.137056 systemd[1]: session-18.scope: Deactivated successfully. Oct 13 05:39:52.138621 systemd-logind[1604]: Session 18 logged out. Waiting for processes to exit. Oct 13 05:39:52.142545 systemd[1]: Started sshd@18-10.0.0.130:22-10.0.0.1:53876.service - OpenSSH per-connection server daemon (10.0.0.1:53876). Oct 13 05:39:52.143808 systemd-logind[1604]: Removed session 18. Oct 13 05:39:52.218409 sshd[4320]: Accepted publickey for core from 10.0.0.1 port 53876 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:39:52.220167 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:52.225004 systemd-logind[1604]: New session 19 of user core. Oct 13 05:39:52.232981 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 13 05:39:52.415950 kubelet[2807]: E1013 05:39:52.415476 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:39:52.942648 sshd[4323]: Connection closed by 10.0.0.1 port 53876 Oct 13 05:39:52.943282 sshd-session[4320]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:52.964611 systemd[1]: sshd@18-10.0.0.130:22-10.0.0.1:53876.service: Deactivated successfully. Oct 13 05:39:52.967409 systemd[1]: session-19.scope: Deactivated successfully. Oct 13 05:39:52.970416 systemd-logind[1604]: Session 19 logged out. Waiting for processes to exit. Oct 13 05:39:52.973719 systemd-logind[1604]: Removed session 19. Oct 13 05:39:52.976366 systemd[1]: Started sshd@19-10.0.0.130:22-10.0.0.1:53884.service - OpenSSH per-connection server daemon (10.0.0.1:53884). Oct 13 05:39:53.046675 sshd[4340]: Accepted publickey for core from 10.0.0.1 port 53884 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:39:53.048873 sshd-session[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:53.057007 systemd-logind[1604]: New session 20 of user core. Oct 13 05:39:53.066129 systemd[1]: Started session-20.scope - Session 20 of User core. 
Oct 13 05:39:53.516245 sshd[4343]: Connection closed by 10.0.0.1 port 53884 Oct 13 05:39:53.516602 sshd-session[4340]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:53.530743 systemd[1]: sshd@19-10.0.0.130:22-10.0.0.1:53884.service: Deactivated successfully. Oct 13 05:39:53.532821 systemd[1]: session-20.scope: Deactivated successfully. Oct 13 05:39:53.533666 systemd-logind[1604]: Session 20 logged out. Waiting for processes to exit. Oct 13 05:39:53.537194 systemd[1]: Started sshd@20-10.0.0.130:22-10.0.0.1:53892.service - OpenSSH per-connection server daemon (10.0.0.1:53892). Oct 13 05:39:53.538036 systemd-logind[1604]: Removed session 20. Oct 13 05:39:53.595224 sshd[4354]: Accepted publickey for core from 10.0.0.1 port 53892 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:39:53.596728 sshd-session[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:53.601439 systemd-logind[1604]: New session 21 of user core. Oct 13 05:39:53.614024 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 13 05:39:53.782187 sshd[4357]: Connection closed by 10.0.0.1 port 53892 Oct 13 05:39:53.782566 sshd-session[4354]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:53.790297 systemd[1]: sshd@20-10.0.0.130:22-10.0.0.1:53892.service: Deactivated successfully. Oct 13 05:39:53.795774 systemd[1]: session-21.scope: Deactivated successfully. Oct 13 05:39:53.797506 systemd-logind[1604]: Session 21 logged out. Waiting for processes to exit. Oct 13 05:39:53.799272 systemd-logind[1604]: Removed session 21. Oct 13 05:39:56.416017 kubelet[2807]: E1013 05:39:56.415425 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:39:58.798386 systemd[1]: Started sshd@21-10.0.0.130:22-10.0.0.1:53900.service - OpenSSH per-connection server daemon (10.0.0.1:53900). Oct 13 05:39:58.864124 sshd[4372]: Accepted publickey for core from 10.0.0.1 port 53900 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:39:58.866130 sshd-session[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:39:58.871872 systemd-logind[1604]: New session 22 of user core. Oct 13 05:39:58.883126 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 13 05:39:59.091559 sshd[4375]: Connection closed by 10.0.0.1 port 53900 Oct 13 05:39:59.090517 sshd-session[4372]: pam_unix(sshd:session): session closed for user core Oct 13 05:39:59.099661 systemd[1]: sshd@21-10.0.0.130:22-10.0.0.1:53900.service: Deactivated successfully. Oct 13 05:39:59.102440 systemd[1]: session-22.scope: Deactivated successfully. Oct 13 05:39:59.104127 systemd-logind[1604]: Session 22 logged out. Waiting for processes to exit. Oct 13 05:39:59.106323 systemd-logind[1604]: Removed session 22. Oct 13 05:40:04.109736 systemd[1]: Started sshd@22-10.0.0.130:22-10.0.0.1:59514.service - OpenSSH per-connection server daemon (10.0.0.1:59514). Oct 13 05:40:04.185489 sshd[4390]: Accepted publickey for core from 10.0.0.1 port 59514 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:40:04.187514 sshd-session[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:40:04.195288 systemd-logind[1604]: New session 23 of user core. Oct 13 05:40:04.209320 systemd[1]: Started session-23.scope - Session 23 of User core. 
Oct 13 05:40:04.336322 sshd[4393]: Connection closed by 10.0.0.1 port 59514 Oct 13 05:40:04.336765 sshd-session[4390]: pam_unix(sshd:session): session closed for user core Oct 13 05:40:04.343011 systemd[1]: sshd@22-10.0.0.130:22-10.0.0.1:59514.service: Deactivated successfully. Oct 13 05:40:04.345885 systemd[1]: session-23.scope: Deactivated successfully. Oct 13 05:40:04.346798 systemd-logind[1604]: Session 23 logged out. Waiting for processes to exit. Oct 13 05:40:04.348799 systemd-logind[1604]: Removed session 23. Oct 13 05:40:06.414767 kubelet[2807]: E1013 05:40:06.414683 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:40:09.363162 systemd[1]: Started sshd@23-10.0.0.130:22-10.0.0.1:59528.service - OpenSSH per-connection server daemon (10.0.0.1:59528). Oct 13 05:40:09.416005 kubelet[2807]: E1013 05:40:09.415948 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:40:09.430891 sshd[4409]: Accepted publickey for core from 10.0.0.1 port 59528 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:40:09.433091 sshd-session[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:40:09.444232 systemd-logind[1604]: New session 24 of user core. Oct 13 05:40:09.454167 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 13 05:40:09.625823 sshd[4412]: Connection closed by 10.0.0.1 port 59528 Oct 13 05:40:09.627100 sshd-session[4409]: pam_unix(sshd:session): session closed for user core Oct 13 05:40:09.635955 systemd[1]: sshd@23-10.0.0.130:22-10.0.0.1:59528.service: Deactivated successfully. Oct 13 05:40:09.638299 systemd[1]: session-24.scope: Deactivated successfully. Oct 13 05:40:09.639243 systemd-logind[1604]: Session 24 logged out. Waiting for processes to exit. Oct 13 05:40:09.640782 systemd-logind[1604]: Removed session 24. Oct 13 05:40:11.415423 kubelet[2807]: E1013 05:40:11.415314 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:40:14.639851 systemd[1]: Started sshd@24-10.0.0.130:22-10.0.0.1:39704.service - OpenSSH per-connection server daemon (10.0.0.1:39704). Oct 13 05:40:14.701757 sshd[4425]: Accepted publickey for core from 10.0.0.1 port 39704 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:40:14.704057 sshd-session[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:40:14.709373 systemd-logind[1604]: New session 25 of user core. Oct 13 05:40:14.725142 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 13 05:40:14.851919 sshd[4428]: Connection closed by 10.0.0.1 port 39704 Oct 13 05:40:14.854948 sshd-session[4425]: pam_unix(sshd:session): session closed for user core Oct 13 05:40:14.867491 systemd[1]: sshd@24-10.0.0.130:22-10.0.0.1:39704.service: Deactivated successfully. Oct 13 05:40:14.869688 systemd[1]: session-25.scope: Deactivated successfully. Oct 13 05:40:14.870732 systemd-logind[1604]: Session 25 logged out. Waiting for processes to exit. Oct 13 05:40:14.874531 systemd[1]: Started sshd@25-10.0.0.130:22-10.0.0.1:39718.service - OpenSSH per-connection server daemon (10.0.0.1:39718). 
Oct 13 05:40:14.875400 systemd-logind[1604]: Removed session 25. Oct 13 05:40:14.941248 sshd[4442]: Accepted publickey for core from 10.0.0.1 port 39718 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:40:14.943154 sshd-session[4442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:40:14.948122 systemd-logind[1604]: New session 26 of user core. Oct 13 05:40:14.958086 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 13 05:40:16.900724 containerd[1626]: time="2025-10-13T05:40:16.900575117Z" level=info msg="StopContainer for \"dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb\" with timeout 30 (s)" Oct 13 05:40:16.949900 containerd[1626]: time="2025-10-13T05:40:16.949685551Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b\" id:\"904f65b7e8961e213d238ac2f1aec7f193ba78aa82ed5163b56b4cc88154e226\" pid:4466 exited_at:{seconds:1760334016 nanos:949150682}" Oct 13 05:40:16.952412 containerd[1626]: time="2025-10-13T05:40:16.951487171Z" level=info msg="Stop container \"dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb\" with signal terminated" Oct 13 05:40:16.953551 containerd[1626]: time="2025-10-13T05:40:16.953239136Z" level=info msg="StopContainer for \"98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b\" with timeout 2 (s)" Oct 13 05:40:16.953551 containerd[1626]: time="2025-10-13T05:40:16.953517993Z" level=info msg="Stop container \"98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b\" with signal terminated" Oct 13 05:40:16.963802 systemd-networkd[1538]: lxc_health: Link DOWN Oct 13 05:40:16.964830 systemd-networkd[1538]: lxc_health: Lost carrier Oct 13 05:40:16.978325 containerd[1626]: time="2025-10-13T05:40:16.978233126Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 05:40:16.980981 systemd[1]: cri-containerd-dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb.scope: Deactivated successfully. Oct 13 05:40:16.984237 containerd[1626]: time="2025-10-13T05:40:16.984069359Z" level=info msg="received exit event container_id:\"dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb\" id:\"dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb\" pid:3225 exited_at:{seconds:1760334016 nanos:983517778}" Oct 13 05:40:16.984405 containerd[1626]: time="2025-10-13T05:40:16.984348956Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb\" id:\"dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb\" pid:3225 exited_at:{seconds:1760334016 nanos:983517778}" Oct 13 05:40:16.999526 systemd[1]: cri-containerd-98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b.scope: Deactivated successfully. 
Oct 13 05:40:17.001403 containerd[1626]: time="2025-10-13T05:40:17.001345376Z" level=info msg="received exit event container_id:\"98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b\" id:\"98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b\" pid:3463 exited_at:{seconds:1760334017 nanos:676053}" Oct 13 05:40:17.001688 containerd[1626]: time="2025-10-13T05:40:17.001629041Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b\" id:\"98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b\" pid:3463 exited_at:{seconds:1760334017 nanos:676053}" Oct 13 05:40:17.001904 systemd[1]: cri-containerd-98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b.scope: Consumed 8.456s CPU time, 128M memory peak, 132K read from disk, 13.3M written to disk. Oct 13 05:40:17.030268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb-rootfs.mount: Deactivated successfully. Oct 13 05:40:17.037415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b-rootfs.mount: Deactivated successfully. Oct 13 05:40:17.129697 containerd[1626]: time="2025-10-13T05:40:17.129624537Z" level=info msg="StopContainer for \"98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b\" returns successfully" Oct 13 05:40:17.131641 containerd[1626]: time="2025-10-13T05:40:17.131575528Z" level=info msg="StopContainer for \"dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb\" returns successfully" Oct 13 05:40:17.137116 containerd[1626]: time="2025-10-13T05:40:17.137064894Z" level=info msg="StopPodSandbox for \"1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d\"" Oct 13 05:40:17.156503 containerd[1626]: time="2025-10-13T05:40:17.155896623Z" level=info msg="StopPodSandbox for \"8a959cf84193cfdb3c85fe0ea03150c64a1fc31c709e6e9c4a8103545348db84\"" Oct 13 05:40:17.156503 containerd[1626]: time="2025-10-13T05:40:17.156051035Z" level=info msg="Container to stop \"dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 05:40:17.159805 containerd[1626]: time="2025-10-13T05:40:17.159740556Z" level=info msg="Container to stop \"3e932a58c9d8612f1d166787f52837717ee2aee19499c1941a8eb53c967cd8c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 05:40:17.159805 containerd[1626]: time="2025-10-13T05:40:17.159801140Z" level=info msg="Container to stop \"030dc76b709512e1cf3a035002c8c4ccfff5a16904631cb8759403315fd4b27f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 05:40:17.159998 containerd[1626]: time="2025-10-13T05:40:17.159815417Z" level=info msg="Container to stop \"19c1feeca0d3d99488207163518526d4bbd0c81baaf08d604fd81496143ee1b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 05:40:17.159998 containerd[1626]: time="2025-10-13T05:40:17.159848218Z" level=info msg="Container to stop \"ddf5d4f463be3724dbd22782521c392d3f80f91ccabde948ccb10f401379f518\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 05:40:17.159998 containerd[1626]: time="2025-10-13T05:40:17.159861925Z" level=info msg="Container to stop \"98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Oct 13 05:40:17.167627 systemd[1]: cri-containerd-1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d.scope: Deactivated successfully. Oct 13 05:40:17.170752 containerd[1626]: time="2025-10-13T05:40:17.170629230Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d\" id:\"1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d\" pid:3020 exit_status:137 exited_at:{seconds:1760334017 nanos:169859668}" Oct 13 05:40:17.172345 systemd[1]: cri-containerd-8a959cf84193cfdb3c85fe0ea03150c64a1fc31c709e6e9c4a8103545348db84.scope: Deactivated successfully. Oct 13 05:40:17.209099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a959cf84193cfdb3c85fe0ea03150c64a1fc31c709e6e9c4a8103545348db84-rootfs.mount: Deactivated successfully. Oct 13 05:40:17.214040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d-rootfs.mount: Deactivated successfully. Oct 13 05:40:17.216571 containerd[1626]: time="2025-10-13T05:40:17.216512756Z" level=info msg="shim disconnected" id=8a959cf84193cfdb3c85fe0ea03150c64a1fc31c709e6e9c4a8103545348db84 namespace=k8s.io Oct 13 05:40:17.216571 containerd[1626]: time="2025-10-13T05:40:17.216558112Z" level=warning msg="cleaning up after shim disconnected" id=8a959cf84193cfdb3c85fe0ea03150c64a1fc31c709e6e9c4a8103545348db84 namespace=k8s.io Oct 13 05:40:17.270499 containerd[1626]: time="2025-10-13T05:40:17.216569173Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 05:40:17.270707 containerd[1626]: time="2025-10-13T05:40:17.254979591Z" level=info msg="shim disconnected" id=1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d namespace=k8s.io Oct 13 05:40:17.270707 containerd[1626]: time="2025-10-13T05:40:17.270576066Z" level=warning msg="cleaning up after shim disconnected" id=1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d namespace=k8s.io Oct 13 05:40:17.270707 containerd[1626]: time="2025-10-13T05:40:17.270599620Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 05:40:17.338284 containerd[1626]: time="2025-10-13T05:40:17.336892143Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a959cf84193cfdb3c85fe0ea03150c64a1fc31c709e6e9c4a8103545348db84\" id:\"8a959cf84193cfdb3c85fe0ea03150c64a1fc31c709e6e9c4a8103545348db84\" pid:2929 exit_status:137 exited_at:{seconds:1760334017 nanos:179738325}" Oct 13 05:40:17.339725 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d-shm.mount: Deactivated successfully. 
Oct 13 05:40:17.351726 containerd[1626]: time="2025-10-13T05:40:17.351551340Z" level=info msg="TearDown network for sandbox \"1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d\" successfully" Oct 13 05:40:17.351726 containerd[1626]: time="2025-10-13T05:40:17.351668291Z" level=info msg="received exit event sandbox_id:\"8a959cf84193cfdb3c85fe0ea03150c64a1fc31c709e6e9c4a8103545348db84\" exit_status:137 exited_at:{seconds:1760334017 nanos:179738325}" Oct 13 05:40:17.352285 containerd[1626]: time="2025-10-13T05:40:17.351674633Z" level=info msg="StopPodSandbox for \"1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d\" returns successfully" Oct 13 05:40:17.352285 containerd[1626]: time="2025-10-13T05:40:17.351697045Z" level=info msg="TearDown network for sandbox \"8a959cf84193cfdb3c85fe0ea03150c64a1fc31c709e6e9c4a8103545348db84\" successfully" Oct 13 05:40:17.352285 containerd[1626]: time="2025-10-13T05:40:17.352000537Z" level=info msg="StopPodSandbox for \"8a959cf84193cfdb3c85fe0ea03150c64a1fc31c709e6e9c4a8103545348db84\" returns successfully" Oct 13 05:40:17.352285 containerd[1626]: time="2025-10-13T05:40:17.352174205Z" level=info msg="received exit event sandbox_id:\"1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d\" exit_status:137 exited_at:{seconds:1760334017 nanos:169859668}" Oct 13 05:40:17.441151 kubelet[2807]: I1013 05:40:17.440968 2807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e078b99f-9980-42be-8af6-381000d811cb-cilium-config-path\") pod \"e078b99f-9980-42be-8af6-381000d811cb\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " Oct 13 05:40:17.442061 kubelet[2807]: I1013 05:40:17.442013 2807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-cilium-cgroup\") pod \"e078b99f-9980-42be-8af6-381000d811cb\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " Oct 13 05:40:17.442061 kubelet[2807]: I1013 05:40:17.442048 2807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-lib-modules\") pod \"e078b99f-9980-42be-8af6-381000d811cb\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " Oct 13 05:40:17.442061 kubelet[2807]: I1013 05:40:17.442075 2807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e078b99f-9980-42be-8af6-381000d811cb-hubble-tls\") pod \"e078b99f-9980-42be-8af6-381000d811cb\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " Oct 13 05:40:17.442061 kubelet[2807]: I1013 05:40:17.442097 2807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9c4q\" (UniqueName: \"kubernetes.io/projected/e078b99f-9980-42be-8af6-381000d811cb-kube-api-access-g9c4q\") pod \"e078b99f-9980-42be-8af6-381000d811cb\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " Oct 13 05:40:17.442061 kubelet[2807]: I1013 05:40:17.442120 2807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-bpf-maps\") pod \"e078b99f-9980-42be-8af6-381000d811cb\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " Oct 13 05:40:17.442061 kubelet[2807]: I1013 05:40:17.442140 2807 reconciler_common.go:163] 
"operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/602b9dd7-dee3-4739-9811-81ecd15eb6d7-cilium-config-path\") pod \"602b9dd7-dee3-4739-9811-81ecd15eb6d7\" (UID: \"602b9dd7-dee3-4739-9811-81ecd15eb6d7\") " Oct 13 05:40:17.443220 kubelet[2807]: I1013 05:40:17.442162 2807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-cni-path\") pod \"e078b99f-9980-42be-8af6-381000d811cb\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " Oct 13 05:40:17.443220 kubelet[2807]: I1013 05:40:17.442180 2807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-cilium-run\") pod \"e078b99f-9980-42be-8af6-381000d811cb\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " Oct 13 05:40:17.443220 kubelet[2807]: I1013 05:40:17.442163 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e078b99f-9980-42be-8af6-381000d811cb" (UID: "e078b99f-9980-42be-8af6-381000d811cb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 05:40:17.443367 kubelet[2807]: I1013 05:40:17.443258 2807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-xtables-lock\") pod \"e078b99f-9980-42be-8af6-381000d811cb\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " Oct 13 05:40:17.443367 kubelet[2807]: I1013 05:40:17.443287 2807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-hostproc\") pod \"e078b99f-9980-42be-8af6-381000d811cb\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " Oct 13 05:40:17.443367 kubelet[2807]: I1013 05:40:17.443317 2807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-host-proc-sys-kernel\") pod \"e078b99f-9980-42be-8af6-381000d811cb\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " Oct 13 05:40:17.443367 kubelet[2807]: I1013 05:40:17.443348 2807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e078b99f-9980-42be-8af6-381000d811cb-clustermesh-secrets\") pod \"e078b99f-9980-42be-8af6-381000d811cb\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " Oct 13 05:40:17.443506 kubelet[2807]: I1013 05:40:17.443370 2807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbmcx\" (UniqueName: \"kubernetes.io/projected/602b9dd7-dee3-4739-9811-81ecd15eb6d7-kube-api-access-pbmcx\") pod \"602b9dd7-dee3-4739-9811-81ecd15eb6d7\" (UID: \"602b9dd7-dee3-4739-9811-81ecd15eb6d7\") " Oct 13 05:40:17.443506 kubelet[2807]: I1013 05:40:17.443460 2807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-host-proc-sys-net\") pod \"e078b99f-9980-42be-8af6-381000d811cb\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " Oct 13 05:40:17.443592 kubelet[2807]: 
I1013 05:40:17.443532 2807 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Oct 13 05:40:17.443592 kubelet[2807]: I1013 05:40:17.443564 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e078b99f-9980-42be-8af6-381000d811cb" (UID: "e078b99f-9980-42be-8af6-381000d811cb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 05:40:17.443667 kubelet[2807]: I1013 05:40:17.443602 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e078b99f-9980-42be-8af6-381000d811cb" (UID: "e078b99f-9980-42be-8af6-381000d811cb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 05:40:17.443905 kubelet[2807]: I1013 05:40:17.443868 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-hostproc" (OuterVolumeSpecName: "hostproc") pod "e078b99f-9980-42be-8af6-381000d811cb" (UID: "e078b99f-9980-42be-8af6-381000d811cb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 05:40:17.444003 kubelet[2807]: I1013 05:40:17.443913 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-cni-path" (OuterVolumeSpecName: "cni-path") pod "e078b99f-9980-42be-8af6-381000d811cb" (UID: "e078b99f-9980-42be-8af6-381000d811cb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 05:40:17.444003 kubelet[2807]: I1013 05:40:17.443935 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e078b99f-9980-42be-8af6-381000d811cb" (UID: "e078b99f-9980-42be-8af6-381000d811cb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 05:40:17.444003 kubelet[2807]: I1013 05:40:17.443957 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e078b99f-9980-42be-8af6-381000d811cb" (UID: "e078b99f-9980-42be-8af6-381000d811cb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 05:40:17.446050 kubelet[2807]: I1013 05:40:17.446013 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e078b99f-9980-42be-8af6-381000d811cb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e078b99f-9980-42be-8af6-381000d811cb" (UID: "e078b99f-9980-42be-8af6-381000d811cb"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 05:40:17.448103 kubelet[2807]: I1013 05:40:17.446730 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e078b99f-9980-42be-8af6-381000d811cb" (UID: "e078b99f-9980-42be-8af6-381000d811cb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 05:40:17.448284 kubelet[2807]: I1013 05:40:17.447924 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/602b9dd7-dee3-4739-9811-81ecd15eb6d7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "602b9dd7-dee3-4739-9811-81ecd15eb6d7" (UID: "602b9dd7-dee3-4739-9811-81ecd15eb6d7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 05:40:17.448361 kubelet[2807]: I1013 05:40:17.448250 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e078b99f-9980-42be-8af6-381000d811cb" (UID: "e078b99f-9980-42be-8af6-381000d811cb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 05:40:17.449040 kubelet[2807]: I1013 05:40:17.448979 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e078b99f-9980-42be-8af6-381000d811cb-kube-api-access-g9c4q" (OuterVolumeSpecName: "kube-api-access-g9c4q") pod "e078b99f-9980-42be-8af6-381000d811cb" (UID: "e078b99f-9980-42be-8af6-381000d811cb"). InnerVolumeSpecName "kube-api-access-g9c4q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:40:17.451092 kubelet[2807]: I1013 05:40:17.451051 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e078b99f-9980-42be-8af6-381000d811cb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e078b99f-9980-42be-8af6-381000d811cb" (UID: "e078b99f-9980-42be-8af6-381000d811cb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 05:40:17.451523 kubelet[2807]: I1013 05:40:17.451490 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/602b9dd7-dee3-4739-9811-81ecd15eb6d7-kube-api-access-pbmcx" (OuterVolumeSpecName: "kube-api-access-pbmcx") pod "602b9dd7-dee3-4739-9811-81ecd15eb6d7" (UID: "602b9dd7-dee3-4739-9811-81ecd15eb6d7"). InnerVolumeSpecName "kube-api-access-pbmcx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:40:17.451969 kubelet[2807]: I1013 05:40:17.451932 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e078b99f-9980-42be-8af6-381000d811cb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e078b99f-9980-42be-8af6-381000d811cb" (UID: "e078b99f-9980-42be-8af6-381000d811cb"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:40:17.544577 kubelet[2807]: I1013 05:40:17.544505 2807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-etc-cni-netd\") pod \"e078b99f-9980-42be-8af6-381000d811cb\" (UID: \"e078b99f-9980-42be-8af6-381000d811cb\") " Oct 13 05:40:17.544577 kubelet[2807]: I1013 05:40:17.544600 2807 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 13 05:40:17.544577 kubelet[2807]: I1013 05:40:17.544617 2807 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 13 05:40:17.544577 kubelet[2807]: I1013 05:40:17.544630 2807 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-hostproc\") on node \"localhost\" DevicePath \"\"" Oct 13 05:40:17.544577 kubelet[2807]: I1013 05:40:17.544643 2807 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Oct 13 05:40:17.544964 kubelet[2807]: I1013 05:40:17.544654 2807 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e078b99f-9980-42be-8af6-381000d811cb-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 13 05:40:17.544964 kubelet[2807]: I1013 05:40:17.544664 2807 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pbmcx\" (UniqueName: \"kubernetes.io/projected/602b9dd7-dee3-4739-9811-81ecd15eb6d7-kube-api-access-pbmcx\") on node \"localhost\" DevicePath \"\"" Oct 13 05:40:17.544964 kubelet[2807]: I1013 05:40:17.544675 2807 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Oct 13 05:40:17.544964 kubelet[2807]: I1013 05:40:17.544686 2807 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e078b99f-9980-42be-8af6-381000d811cb-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 13 05:40:17.544964 kubelet[2807]: I1013 05:40:17.544699 2807 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 13 05:40:17.544964 kubelet[2807]: I1013 05:40:17.544707 2807 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e078b99f-9980-42be-8af6-381000d811cb-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 13 05:40:17.544964 kubelet[2807]: I1013 05:40:17.544714 2807 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g9c4q\" (UniqueName: \"kubernetes.io/projected/e078b99f-9980-42be-8af6-381000d811cb-kube-api-access-g9c4q\") on node \"localhost\" DevicePath \"\"" Oct 13 05:40:17.544964 kubelet[2807]: I1013 05:40:17.544721 2807 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 13 05:40:17.545143 kubelet[2807]: I1013 05:40:17.544729 2807 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/602b9dd7-dee3-4739-9811-81ecd15eb6d7-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 13 05:40:17.545143 kubelet[2807]: I1013 05:40:17.544738 2807 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-cni-path\") on node \"localhost\" DevicePath \"\"" Oct 13 05:40:17.545143 kubelet[2807]: I1013 05:40:17.544666 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e078b99f-9980-42be-8af6-381000d811cb" (UID: "e078b99f-9980-42be-8af6-381000d811cb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 05:40:17.646035 kubelet[2807]: I1013 05:40:17.645892 2807 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e078b99f-9980-42be-8af6-381000d811cb-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Oct 13 05:40:17.882620 kubelet[2807]: I1013 05:40:17.882557 2807 scope.go:117] "RemoveContainer" containerID="98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b" Oct 13 05:40:17.887317 containerd[1626]: time="2025-10-13T05:40:17.887189923Z" level=info msg="RemoveContainer for \"98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b\"" Oct 13 05:40:17.893556 systemd[1]: Removed slice kubepods-burstable-pode078b99f_9980_42be_8af6_381000d811cb.slice - libcontainer container kubepods-burstable-pode078b99f_9980_42be_8af6_381000d811cb.slice. Oct 13 05:40:17.893871 systemd[1]: kubepods-burstable-pode078b99f_9980_42be_8af6_381000d811cb.slice: Consumed 8.638s CPU time, 128.3M memory peak, 140K read from disk, 13.3M written to disk. Oct 13 05:40:17.895556 systemd[1]: Removed slice kubepods-besteffort-pod602b9dd7_dee3_4739_9811_81ecd15eb6d7.slice - libcontainer container kubepods-besteffort-pod602b9dd7_dee3_4739_9811_81ecd15eb6d7.slice. 
Oct 13 05:40:17.899079 containerd[1626]: time="2025-10-13T05:40:17.898872474Z" level=info msg="RemoveContainer for \"98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b\" returns successfully" Oct 13 05:40:17.899946 kubelet[2807]: I1013 05:40:17.899194 2807 scope.go:117] "RemoveContainer" containerID="ddf5d4f463be3724dbd22782521c392d3f80f91ccabde948ccb10f401379f518" Oct 13 05:40:17.904086 containerd[1626]: time="2025-10-13T05:40:17.904038551Z" level=info msg="RemoveContainer for \"ddf5d4f463be3724dbd22782521c392d3f80f91ccabde948ccb10f401379f518\"" Oct 13 05:40:17.910721 containerd[1626]: time="2025-10-13T05:40:17.910651928Z" level=info msg="RemoveContainer for \"ddf5d4f463be3724dbd22782521c392d3f80f91ccabde948ccb10f401379f518\" returns successfully" Oct 13 05:40:17.911123 kubelet[2807]: I1013 05:40:17.911076 2807 scope.go:117] "RemoveContainer" containerID="19c1feeca0d3d99488207163518526d4bbd0c81baaf08d604fd81496143ee1b5" Oct 13 05:40:17.914323 containerd[1626]: time="2025-10-13T05:40:17.914282538Z" level=info msg="RemoveContainer for \"19c1feeca0d3d99488207163518526d4bbd0c81baaf08d604fd81496143ee1b5\"" Oct 13 05:40:17.932053 containerd[1626]: time="2025-10-13T05:40:17.932000836Z" level=info msg="RemoveContainer for \"19c1feeca0d3d99488207163518526d4bbd0c81baaf08d604fd81496143ee1b5\" returns successfully" Oct 13 05:40:17.932399 kubelet[2807]: I1013 05:40:17.932346 2807 scope.go:117] "RemoveContainer" containerID="030dc76b709512e1cf3a035002c8c4ccfff5a16904631cb8759403315fd4b27f" Oct 13 05:40:17.942521 containerd[1626]: time="2025-10-13T05:40:17.942475208Z" level=info msg="RemoveContainer for \"030dc76b709512e1cf3a035002c8c4ccfff5a16904631cb8759403315fd4b27f\"" Oct 13 05:40:17.947031 containerd[1626]: time="2025-10-13T05:40:17.946985127Z" level=info msg="RemoveContainer for \"030dc76b709512e1cf3a035002c8c4ccfff5a16904631cb8759403315fd4b27f\" returns successfully" Oct 13 05:40:17.947204 kubelet[2807]: I1013 05:40:17.947162 2807 scope.go:117] "RemoveContainer" containerID="3e932a58c9d8612f1d166787f52837717ee2aee19499c1941a8eb53c967cd8c4" Oct 13 05:40:17.948594 containerd[1626]: time="2025-10-13T05:40:17.948551072Z" level=info msg="RemoveContainer for \"3e932a58c9d8612f1d166787f52837717ee2aee19499c1941a8eb53c967cd8c4\"" Oct 13 05:40:17.952786 containerd[1626]: time="2025-10-13T05:40:17.952724976Z" level=info msg="RemoveContainer for \"3e932a58c9d8612f1d166787f52837717ee2aee19499c1941a8eb53c967cd8c4\" returns successfully" Oct 13 05:40:17.953070 kubelet[2807]: I1013 05:40:17.952921 2807 scope.go:117] "RemoveContainer" containerID="98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b" Oct 13 05:40:17.959112 containerd[1626]: time="2025-10-13T05:40:17.953109692Z" level=error msg="ContainerStatus for \"98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b\": not found" Oct 13 05:40:17.961715 kubelet[2807]: E1013 05:40:17.961660 2807 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b\": not found" containerID="98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b" Oct 13 05:40:17.961796 kubelet[2807]: I1013 05:40:17.961712 2807 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b"} err="failed to get container status \"98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b\": rpc error: code = NotFound desc = an error occurred when try to find container \"98b851ecf438908c84ade8e713991a970fce64c6c8d843f33b1e18a0898bc35b\": not found" Oct 13 05:40:17.961796 kubelet[2807]: I1013 05:40:17.961767 2807 scope.go:117] "RemoveContainer" containerID="ddf5d4f463be3724dbd22782521c392d3f80f91ccabde948ccb10f401379f518" Oct 13 05:40:17.962238 containerd[1626]: time="2025-10-13T05:40:17.962102308Z" level=error msg="ContainerStatus for \"ddf5d4f463be3724dbd22782521c392d3f80f91ccabde948ccb10f401379f518\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ddf5d4f463be3724dbd22782521c392d3f80f91ccabde948ccb10f401379f518\": not found" Oct 13 05:40:17.962313 kubelet[2807]: E1013 05:40:17.962231 2807 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ddf5d4f463be3724dbd22782521c392d3f80f91ccabde948ccb10f401379f518\": not found" containerID="ddf5d4f463be3724dbd22782521c392d3f80f91ccabde948ccb10f401379f518" Oct 13 05:40:17.962313 kubelet[2807]: I1013 05:40:17.962254 2807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ddf5d4f463be3724dbd22782521c392d3f80f91ccabde948ccb10f401379f518"} err="failed to get container status \"ddf5d4f463be3724dbd22782521c392d3f80f91ccabde948ccb10f401379f518\": rpc error: code = NotFound desc = an error occurred when try to find container \"ddf5d4f463be3724dbd22782521c392d3f80f91ccabde948ccb10f401379f518\": not found" Oct 13 05:40:17.962313 kubelet[2807]: I1013 05:40:17.962269 2807 scope.go:117] "RemoveContainer" containerID="19c1feeca0d3d99488207163518526d4bbd0c81baaf08d604fd81496143ee1b5" Oct 13 05:40:17.962460 containerd[1626]: time="2025-10-13T05:40:17.962427392Z" level=error msg="ContainerStatus for \"19c1feeca0d3d99488207163518526d4bbd0c81baaf08d604fd81496143ee1b5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"19c1feeca0d3d99488207163518526d4bbd0c81baaf08d604fd81496143ee1b5\": not found" Oct 13 05:40:17.962613 kubelet[2807]: E1013 05:40:17.962551 2807 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"19c1feeca0d3d99488207163518526d4bbd0c81baaf08d604fd81496143ee1b5\": not found" containerID="19c1feeca0d3d99488207163518526d4bbd0c81baaf08d604fd81496143ee1b5" Oct 13 05:40:17.962613 kubelet[2807]: I1013 05:40:17.962604 2807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"19c1feeca0d3d99488207163518526d4bbd0c81baaf08d604fd81496143ee1b5"} err="failed to get container status \"19c1feeca0d3d99488207163518526d4bbd0c81baaf08d604fd81496143ee1b5\": rpc error: code = NotFound desc = an error occurred when try to find container \"19c1feeca0d3d99488207163518526d4bbd0c81baaf08d604fd81496143ee1b5\": not found" Oct 13 05:40:17.962720 kubelet[2807]: I1013 05:40:17.962623 2807 scope.go:117] "RemoveContainer" containerID="030dc76b709512e1cf3a035002c8c4ccfff5a16904631cb8759403315fd4b27f" Oct 13 05:40:17.963092 containerd[1626]: time="2025-10-13T05:40:17.962824470Z" level=error msg="ContainerStatus for \"030dc76b709512e1cf3a035002c8c4ccfff5a16904631cb8759403315fd4b27f\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"030dc76b709512e1cf3a035002c8c4ccfff5a16904631cb8759403315fd4b27f\": not found" Oct 13 05:40:17.963295 kubelet[2807]: E1013 05:40:17.963260 2807 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"030dc76b709512e1cf3a035002c8c4ccfff5a16904631cb8759403315fd4b27f\": not found" containerID="030dc76b709512e1cf3a035002c8c4ccfff5a16904631cb8759403315fd4b27f" Oct 13 05:40:17.963351 kubelet[2807]: I1013 05:40:17.963292 2807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"030dc76b709512e1cf3a035002c8c4ccfff5a16904631cb8759403315fd4b27f"} err="failed to get container status \"030dc76b709512e1cf3a035002c8c4ccfff5a16904631cb8759403315fd4b27f\": rpc error: code = NotFound desc = an error occurred when try to find container \"030dc76b709512e1cf3a035002c8c4ccfff5a16904631cb8759403315fd4b27f\": not found" Oct 13 05:40:17.963351 kubelet[2807]: I1013 05:40:17.963310 2807 scope.go:117] "RemoveContainer" containerID="3e932a58c9d8612f1d166787f52837717ee2aee19499c1941a8eb53c967cd8c4" Oct 13 05:40:17.963541 containerd[1626]: time="2025-10-13T05:40:17.963505435Z" level=error msg="ContainerStatus for \"3e932a58c9d8612f1d166787f52837717ee2aee19499c1941a8eb53c967cd8c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e932a58c9d8612f1d166787f52837717ee2aee19499c1941a8eb53c967cd8c4\": not found" Oct 13 05:40:17.963719 kubelet[2807]: E1013 05:40:17.963678 2807 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e932a58c9d8612f1d166787f52837717ee2aee19499c1941a8eb53c967cd8c4\": not found" containerID="3e932a58c9d8612f1d166787f52837717ee2aee19499c1941a8eb53c967cd8c4" Oct 13 05:40:17.963719 kubelet[2807]: I1013 05:40:17.963710 2807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e932a58c9d8612f1d166787f52837717ee2aee19499c1941a8eb53c967cd8c4"} err="failed to get container status \"3e932a58c9d8612f1d166787f52837717ee2aee19499c1941a8eb53c967cd8c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e932a58c9d8612f1d166787f52837717ee2aee19499c1941a8eb53c967cd8c4\": not found" Oct 13 05:40:17.963802 kubelet[2807]: I1013 05:40:17.963730 2807 scope.go:117] "RemoveContainer" containerID="dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb" Oct 13 05:40:17.980912 containerd[1626]: time="2025-10-13T05:40:17.980827356Z" level=info msg="RemoveContainer for \"dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb\"" Oct 13 05:40:17.987384 containerd[1626]: time="2025-10-13T05:40:17.987178288Z" level=info msg="RemoveContainer for \"dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb\" returns successfully" Oct 13 05:40:17.987701 kubelet[2807]: I1013 05:40:17.987636 2807 scope.go:117] "RemoveContainer" containerID="dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb" Oct 13 05:40:17.988225 containerd[1626]: time="2025-10-13T05:40:17.988161393Z" level=error msg="ContainerStatus for \"dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb\": not found" Oct 13 05:40:17.988454 
kubelet[2807]: E1013 05:40:17.988410 2807 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb\": not found" containerID="dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb" Oct 13 05:40:17.988527 kubelet[2807]: I1013 05:40:17.988448 2807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb"} err="failed to get container status \"dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb\": rpc error: code = NotFound desc = an error occurred when try to find container \"dbc44c5831c6f0e7b57ec8039ec3b0a7286016c15a408cf6fd6942abab555ccb\": not found" Oct 13 05:40:18.031675 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8a959cf84193cfdb3c85fe0ea03150c64a1fc31c709e6e9c4a8103545348db84-shm.mount: Deactivated successfully. Oct 13 05:40:18.031908 systemd[1]: var-lib-kubelet-pods-602b9dd7\x2ddee3\x2d4739\x2d9811\x2d81ecd15eb6d7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpbmcx.mount: Deactivated successfully. Oct 13 05:40:18.032022 systemd[1]: var-lib-kubelet-pods-e078b99f\x2d9980\x2d42be\x2d8af6\x2d381000d811cb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg9c4q.mount: Deactivated successfully. Oct 13 05:40:18.032117 systemd[1]: var-lib-kubelet-pods-e078b99f\x2d9980\x2d42be\x2d8af6\x2d381000d811cb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 13 05:40:18.032222 systemd[1]: var-lib-kubelet-pods-e078b99f\x2d9980\x2d42be\x2d8af6\x2d381000d811cb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 13 05:40:18.629193 sshd[4445]: Connection closed by 10.0.0.1 port 39718 Oct 13 05:40:18.629693 sshd-session[4442]: pam_unix(sshd:session): session closed for user core Oct 13 05:40:18.641897 systemd[1]: sshd@25-10.0.0.130:22-10.0.0.1:39718.service: Deactivated successfully. Oct 13 05:40:18.644068 systemd[1]: session-26.scope: Deactivated successfully. Oct 13 05:40:18.644870 systemd-logind[1604]: Session 26 logged out. Waiting for processes to exit. Oct 13 05:40:18.649122 systemd[1]: Started sshd@26-10.0.0.130:22-10.0.0.1:39722.service - OpenSSH per-connection server daemon (10.0.0.1:39722). Oct 13 05:40:18.649752 systemd-logind[1604]: Removed session 26. Oct 13 05:40:18.706546 sshd[4597]: Accepted publickey for core from 10.0.0.1 port 39722 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:40:18.708179 sshd-session[4597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:40:18.712609 systemd-logind[1604]: New session 27 of user core. Oct 13 05:40:18.722994 systemd[1]: Started session-27.scope - Session 27 of User core. 
Oct 13 05:40:19.415072 kubelet[2807]: E1013 05:40:19.415015 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:40:19.417682 kubelet[2807]: I1013 05:40:19.417604 2807 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="602b9dd7-dee3-4739-9811-81ecd15eb6d7" path="/var/lib/kubelet/pods/602b9dd7-dee3-4739-9811-81ecd15eb6d7/volumes" Oct 13 05:40:19.419435 kubelet[2807]: I1013 05:40:19.419371 2807 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e078b99f-9980-42be-8af6-381000d811cb" path="/var/lib/kubelet/pods/e078b99f-9980-42be-8af6-381000d811cb/volumes" Oct 13 05:40:19.482026 sshd[4600]: Connection closed by 10.0.0.1 port 39722 Oct 13 05:40:19.483061 sshd-session[4597]: pam_unix(sshd:session): session closed for user core Oct 13 05:40:19.509526 systemd[1]: sshd@26-10.0.0.130:22-10.0.0.1:39722.service: Deactivated successfully. Oct 13 05:40:19.522703 kubelet[2807]: E1013 05:40:19.521384 2807 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 13 05:40:19.521877 systemd[1]: session-27.scope: Deactivated successfully. Oct 13 05:40:19.525464 systemd-logind[1604]: Session 27 logged out. Waiting for processes to exit. Oct 13 05:40:19.531813 systemd[1]: Started sshd@27-10.0.0.130:22-10.0.0.1:39732.service - OpenSSH per-connection server daemon (10.0.0.1:39732). Oct 13 05:40:19.534352 systemd-logind[1604]: Removed session 27. Oct 13 05:40:19.542265 systemd[1]: Created slice kubepods-burstable-pod019541a8_bf38_4ef5_adce_de647486dd44.slice - libcontainer container kubepods-burstable-pod019541a8_bf38_4ef5_adce_de647486dd44.slice. 
Oct 13 05:40:19.559022 kubelet[2807]: I1013 05:40:19.558943 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/019541a8-bf38-4ef5-adce-de647486dd44-hostproc\") pod \"cilium-64twh\" (UID: \"019541a8-bf38-4ef5-adce-de647486dd44\") " pod="kube-system/cilium-64twh" Oct 13 05:40:19.559022 kubelet[2807]: I1013 05:40:19.559007 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/019541a8-bf38-4ef5-adce-de647486dd44-etc-cni-netd\") pod \"cilium-64twh\" (UID: \"019541a8-bf38-4ef5-adce-de647486dd44\") " pod="kube-system/cilium-64twh" Oct 13 05:40:19.559022 kubelet[2807]: I1013 05:40:19.559033 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/019541a8-bf38-4ef5-adce-de647486dd44-bpf-maps\") pod \"cilium-64twh\" (UID: \"019541a8-bf38-4ef5-adce-de647486dd44\") " pod="kube-system/cilium-64twh" Oct 13 05:40:19.559311 kubelet[2807]: I1013 05:40:19.559050 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/019541a8-bf38-4ef5-adce-de647486dd44-cni-path\") pod \"cilium-64twh\" (UID: \"019541a8-bf38-4ef5-adce-de647486dd44\") " pod="kube-system/cilium-64twh" Oct 13 05:40:19.559311 kubelet[2807]: I1013 05:40:19.559069 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/019541a8-bf38-4ef5-adce-de647486dd44-lib-modules\") pod \"cilium-64twh\" (UID: \"019541a8-bf38-4ef5-adce-de647486dd44\") " pod="kube-system/cilium-64twh" Oct 13 05:40:19.559311 kubelet[2807]: I1013 05:40:19.559094 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/019541a8-bf38-4ef5-adce-de647486dd44-cilium-ipsec-secrets\") pod \"cilium-64twh\" (UID: \"019541a8-bf38-4ef5-adce-de647486dd44\") " pod="kube-system/cilium-64twh" Oct 13 05:40:19.559311 kubelet[2807]: I1013 05:40:19.559116 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/019541a8-bf38-4ef5-adce-de647486dd44-clustermesh-secrets\") pod \"cilium-64twh\" (UID: \"019541a8-bf38-4ef5-adce-de647486dd44\") " pod="kube-system/cilium-64twh" Oct 13 05:40:19.559311 kubelet[2807]: I1013 05:40:19.559135 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/019541a8-bf38-4ef5-adce-de647486dd44-hubble-tls\") pod \"cilium-64twh\" (UID: \"019541a8-bf38-4ef5-adce-de647486dd44\") " pod="kube-system/cilium-64twh" Oct 13 05:40:19.559311 kubelet[2807]: I1013 05:40:19.559151 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/019541a8-bf38-4ef5-adce-de647486dd44-cilium-run\") pod \"cilium-64twh\" (UID: \"019541a8-bf38-4ef5-adce-de647486dd44\") " pod="kube-system/cilium-64twh" Oct 13 05:40:19.559508 kubelet[2807]: I1013 05:40:19.559166 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/019541a8-bf38-4ef5-adce-de647486dd44-cilium-config-path\") pod \"cilium-64twh\" (UID: \"019541a8-bf38-4ef5-adce-de647486dd44\") " pod="kube-system/cilium-64twh" Oct 13 05:40:19.559508 kubelet[2807]: I1013 05:40:19.559181 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/019541a8-bf38-4ef5-adce-de647486dd44-host-proc-sys-net\") pod \"cilium-64twh\" (UID: \"019541a8-bf38-4ef5-adce-de647486dd44\") " pod="kube-system/cilium-64twh" Oct 13 05:40:19.559508 kubelet[2807]: I1013 05:40:19.559198 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/019541a8-bf38-4ef5-adce-de647486dd44-cilium-cgroup\") pod \"cilium-64twh\" (UID: \"019541a8-bf38-4ef5-adce-de647486dd44\") " pod="kube-system/cilium-64twh" Oct 13 05:40:19.559508 kubelet[2807]: I1013 05:40:19.559213 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/019541a8-bf38-4ef5-adce-de647486dd44-xtables-lock\") pod \"cilium-64twh\" (UID: \"019541a8-bf38-4ef5-adce-de647486dd44\") " pod="kube-system/cilium-64twh" Oct 13 05:40:19.559508 kubelet[2807]: I1013 05:40:19.559227 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f6ph\" (UniqueName: \"kubernetes.io/projected/019541a8-bf38-4ef5-adce-de647486dd44-kube-api-access-4f6ph\") pod \"cilium-64twh\" (UID: \"019541a8-bf38-4ef5-adce-de647486dd44\") " pod="kube-system/cilium-64twh" Oct 13 05:40:19.559630 kubelet[2807]: I1013 05:40:19.559244 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/019541a8-bf38-4ef5-adce-de647486dd44-host-proc-sys-kernel\") pod \"cilium-64twh\" (UID: \"019541a8-bf38-4ef5-adce-de647486dd44\") " pod="kube-system/cilium-64twh" Oct 13 05:40:19.607092 sshd[4613]: Accepted publickey for core from 10.0.0.1 port 39732 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:40:19.609477 sshd-session[4613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:40:19.616004 systemd-logind[1604]: New session 28 of user core. Oct 13 05:40:19.627212 systemd[1]: Started session-28.scope - Session 28 of User core. Oct 13 05:40:19.679193 sshd[4618]: Connection closed by 10.0.0.1 port 39732 Oct 13 05:40:19.679489 sshd-session[4613]: pam_unix(sshd:session): session closed for user core Oct 13 05:40:19.693440 systemd[1]: sshd@27-10.0.0.130:22-10.0.0.1:39732.service: Deactivated successfully. Oct 13 05:40:19.695893 systemd[1]: session-28.scope: Deactivated successfully. Oct 13 05:40:19.697056 systemd-logind[1604]: Session 28 logged out. Waiting for processes to exit. Oct 13 05:40:19.699458 systemd-logind[1604]: Removed session 28. Oct 13 05:40:19.701041 systemd[1]: Started sshd@28-10.0.0.130:22-10.0.0.1:39734.service - OpenSSH per-connection server daemon (10.0.0.1:39734). Oct 13 05:40:19.761639 sshd[4629]: Accepted publickey for core from 10.0.0.1 port 39734 ssh2: RSA SHA256:5o212/xpy7VgmsepM5qeyZudqYBR6YTTwZJe1bfuQPw Oct 13 05:40:19.763649 sshd-session[4629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:40:19.769708 systemd-logind[1604]: New session 29 of user core. 
Oct 13 05:40:19.777980 systemd[1]: Started session-29.scope - Session 29 of User core. Oct 13 05:40:19.853458 kubelet[2807]: E1013 05:40:19.853060 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:40:19.853818 containerd[1626]: time="2025-10-13T05:40:19.853689858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-64twh,Uid:019541a8-bf38-4ef5-adce-de647486dd44,Namespace:kube-system,Attempt:0,}" Oct 13 05:40:19.882960 containerd[1626]: time="2025-10-13T05:40:19.882901560Z" level=info msg="connecting to shim ed7d2dcaa80b10bc87936a73d420773c106959951dfd0abc9b41c68f1e1dbe40" address="unix:///run/containerd/s/25d01e3903795e34247ba57600133c4f8178e8ca8af914102ccf58a3c2c7dbee" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:40:19.915161 systemd[1]: Started cri-containerd-ed7d2dcaa80b10bc87936a73d420773c106959951dfd0abc9b41c68f1e1dbe40.scope - libcontainer container ed7d2dcaa80b10bc87936a73d420773c106959951dfd0abc9b41c68f1e1dbe40. Oct 13 05:40:19.950804 containerd[1626]: time="2025-10-13T05:40:19.950654183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-64twh,Uid:019541a8-bf38-4ef5-adce-de647486dd44,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed7d2dcaa80b10bc87936a73d420773c106959951dfd0abc9b41c68f1e1dbe40\"" Oct 13 05:40:19.951885 kubelet[2807]: E1013 05:40:19.951495 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:40:19.957946 containerd[1626]: time="2025-10-13T05:40:19.957873259Z" level=info msg="CreateContainer within sandbox \"ed7d2dcaa80b10bc87936a73d420773c106959951dfd0abc9b41c68f1e1dbe40\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 13 05:40:19.968053 containerd[1626]: time="2025-10-13T05:40:19.967979323Z" level=info msg="Container 4116f3ec3907ad070375fa172ea35dabac8e380bfe1ce7216c1e9667f8960667: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:40:19.976733 containerd[1626]: time="2025-10-13T05:40:19.976664956Z" level=info msg="CreateContainer within sandbox \"ed7d2dcaa80b10bc87936a73d420773c106959951dfd0abc9b41c68f1e1dbe40\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4116f3ec3907ad070375fa172ea35dabac8e380bfe1ce7216c1e9667f8960667\"" Oct 13 05:40:19.977540 containerd[1626]: time="2025-10-13T05:40:19.977490154Z" level=info msg="StartContainer for \"4116f3ec3907ad070375fa172ea35dabac8e380bfe1ce7216c1e9667f8960667\"" Oct 13 05:40:19.978689 containerd[1626]: time="2025-10-13T05:40:19.978591981Z" level=info msg="connecting to shim 4116f3ec3907ad070375fa172ea35dabac8e380bfe1ce7216c1e9667f8960667" address="unix:///run/containerd/s/25d01e3903795e34247ba57600133c4f8178e8ca8af914102ccf58a3c2c7dbee" protocol=ttrpc version=3 Oct 13 05:40:20.007161 systemd[1]: Started cri-containerd-4116f3ec3907ad070375fa172ea35dabac8e380bfe1ce7216c1e9667f8960667.scope - libcontainer container 4116f3ec3907ad070375fa172ea35dabac8e380bfe1ce7216c1e9667f8960667. Oct 13 05:40:20.054208 containerd[1626]: time="2025-10-13T05:40:20.054150058Z" level=info msg="StartContainer for \"4116f3ec3907ad070375fa172ea35dabac8e380bfe1ce7216c1e9667f8960667\" returns successfully" Oct 13 05:40:20.058975 systemd[1]: cri-containerd-4116f3ec3907ad070375fa172ea35dabac8e380bfe1ce7216c1e9667f8960667.scope: Deactivated successfully. 
Oct 13 05:40:20.061068 containerd[1626]: time="2025-10-13T05:40:20.061006852Z" level=info msg="received exit event container_id:\"4116f3ec3907ad070375fa172ea35dabac8e380bfe1ce7216c1e9667f8960667\" id:\"4116f3ec3907ad070375fa172ea35dabac8e380bfe1ce7216c1e9667f8960667\" pid:4697 exited_at:{seconds:1760334020 nanos:60640199}" Oct 13 05:40:20.061210 containerd[1626]: time="2025-10-13T05:40:20.061123611Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4116f3ec3907ad070375fa172ea35dabac8e380bfe1ce7216c1e9667f8960667\" id:\"4116f3ec3907ad070375fa172ea35dabac8e380bfe1ce7216c1e9667f8960667\" pid:4697 exited_at:{seconds:1760334020 nanos:60640199}" Oct 13 05:40:20.898990 kubelet[2807]: E1013 05:40:20.898940 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:40:20.903146 containerd[1626]: time="2025-10-13T05:40:20.903111705Z" level=info msg="CreateContainer within sandbox \"ed7d2dcaa80b10bc87936a73d420773c106959951dfd0abc9b41c68f1e1dbe40\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 13 05:40:20.910308 containerd[1626]: time="2025-10-13T05:40:20.910263454Z" level=info msg="Container 81e7fccc1587415ed86d543e2ded94faca262030d9bfc9dda6063f2d1ec77c43: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:40:20.917199 containerd[1626]: time="2025-10-13T05:40:20.916953653Z" level=info msg="CreateContainer within sandbox \"ed7d2dcaa80b10bc87936a73d420773c106959951dfd0abc9b41c68f1e1dbe40\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"81e7fccc1587415ed86d543e2ded94faca262030d9bfc9dda6063f2d1ec77c43\"" Oct 13 05:40:20.919989 containerd[1626]: time="2025-10-13T05:40:20.919949574Z" level=info msg="StartContainer for \"81e7fccc1587415ed86d543e2ded94faca262030d9bfc9dda6063f2d1ec77c43\"" Oct 13 05:40:20.921119 containerd[1626]: time="2025-10-13T05:40:20.921061972Z" level=info msg="connecting to shim 81e7fccc1587415ed86d543e2ded94faca262030d9bfc9dda6063f2d1ec77c43" address="unix:///run/containerd/s/25d01e3903795e34247ba57600133c4f8178e8ca8af914102ccf58a3c2c7dbee" protocol=ttrpc version=3 Oct 13 05:40:20.958992 systemd[1]: Started cri-containerd-81e7fccc1587415ed86d543e2ded94faca262030d9bfc9dda6063f2d1ec77c43.scope - libcontainer container 81e7fccc1587415ed86d543e2ded94faca262030d9bfc9dda6063f2d1ec77c43. Oct 13 05:40:20.996670 containerd[1626]: time="2025-10-13T05:40:20.996605187Z" level=info msg="StartContainer for \"81e7fccc1587415ed86d543e2ded94faca262030d9bfc9dda6063f2d1ec77c43\" returns successfully" Oct 13 05:40:21.005640 systemd[1]: cri-containerd-81e7fccc1587415ed86d543e2ded94faca262030d9bfc9dda6063f2d1ec77c43.scope: Deactivated successfully. 
Oct 13 05:40:21.007327 containerd[1626]: time="2025-10-13T05:40:21.007270883Z" level=info msg="received exit event container_id:\"81e7fccc1587415ed86d543e2ded94faca262030d9bfc9dda6063f2d1ec77c43\" id:\"81e7fccc1587415ed86d543e2ded94faca262030d9bfc9dda6063f2d1ec77c43\" pid:4743 exited_at:{seconds:1760334021 nanos:7067539}" Oct 13 05:40:21.007422 containerd[1626]: time="2025-10-13T05:40:21.007371973Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e7fccc1587415ed86d543e2ded94faca262030d9bfc9dda6063f2d1ec77c43\" id:\"81e7fccc1587415ed86d543e2ded94faca262030d9bfc9dda6063f2d1ec77c43\" pid:4743 exited_at:{seconds:1760334021 nanos:7067539}" Oct 13 05:40:21.033149 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81e7fccc1587415ed86d543e2ded94faca262030d9bfc9dda6063f2d1ec77c43-rootfs.mount: Deactivated successfully. Oct 13 05:40:21.893760 kubelet[2807]: I1013 05:40:21.893681 2807 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T05:40:21Z","lastTransitionTime":"2025-10-13T05:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Oct 13 05:40:21.903412 kubelet[2807]: E1013 05:40:21.903189 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:40:21.910704 containerd[1626]: time="2025-10-13T05:40:21.909940018Z" level=info msg="CreateContainer within sandbox \"ed7d2dcaa80b10bc87936a73d420773c106959951dfd0abc9b41c68f1e1dbe40\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 13 05:40:21.930600 containerd[1626]: time="2025-10-13T05:40:21.930467323Z" level=info msg="Container 424e5402426ccbff22fccb23547db1c4f4388873079c69a94663c8985f6093cf: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:40:21.939966 containerd[1626]: time="2025-10-13T05:40:21.939911184Z" level=info msg="CreateContainer within sandbox \"ed7d2dcaa80b10bc87936a73d420773c106959951dfd0abc9b41c68f1e1dbe40\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"424e5402426ccbff22fccb23547db1c4f4388873079c69a94663c8985f6093cf\"" Oct 13 05:40:21.942893 containerd[1626]: time="2025-10-13T05:40:21.941507173Z" level=info msg="StartContainer for \"424e5402426ccbff22fccb23547db1c4f4388873079c69a94663c8985f6093cf\"" Oct 13 05:40:21.943096 containerd[1626]: time="2025-10-13T05:40:21.943026308Z" level=info msg="connecting to shim 424e5402426ccbff22fccb23547db1c4f4388873079c69a94663c8985f6093cf" address="unix:///run/containerd/s/25d01e3903795e34247ba57600133c4f8178e8ca8af914102ccf58a3c2c7dbee" protocol=ttrpc version=3 Oct 13 05:40:21.971232 systemd[1]: Started cri-containerd-424e5402426ccbff22fccb23547db1c4f4388873079c69a94663c8985f6093cf.scope - libcontainer container 424e5402426ccbff22fccb23547db1c4f4388873079c69a94663c8985f6093cf. Oct 13 05:40:22.048572 systemd[1]: cri-containerd-424e5402426ccbff22fccb23547db1c4f4388873079c69a94663c8985f6093cf.scope: Deactivated successfully. 
Oct 13 05:40:22.051680 containerd[1626]: time="2025-10-13T05:40:22.050245783Z" level=info msg="StartContainer for \"424e5402426ccbff22fccb23547db1c4f4388873079c69a94663c8985f6093cf\" returns successfully" Oct 13 05:40:22.053215 containerd[1626]: time="2025-10-13T05:40:22.053115915Z" level=info msg="TaskExit event in podsandbox handler container_id:\"424e5402426ccbff22fccb23547db1c4f4388873079c69a94663c8985f6093cf\" id:\"424e5402426ccbff22fccb23547db1c4f4388873079c69a94663c8985f6093cf\" pid:4789 exited_at:{seconds:1760334022 nanos:52802184}" Oct 13 05:40:22.053215 containerd[1626]: time="2025-10-13T05:40:22.053223388Z" level=info msg="received exit event container_id:\"424e5402426ccbff22fccb23547db1c4f4388873079c69a94663c8985f6093cf\" id:\"424e5402426ccbff22fccb23547db1c4f4388873079c69a94663c8985f6093cf\" pid:4789 exited_at:{seconds:1760334022 nanos:52802184}" Oct 13 05:40:22.084800 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-424e5402426ccbff22fccb23547db1c4f4388873079c69a94663c8985f6093cf-rootfs.mount: Deactivated successfully. Oct 13 05:40:22.922018 kubelet[2807]: E1013 05:40:22.921957 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:40:22.928984 containerd[1626]: time="2025-10-13T05:40:22.928924441Z" level=info msg="CreateContainer within sandbox \"ed7d2dcaa80b10bc87936a73d420773c106959951dfd0abc9b41c68f1e1dbe40\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 13 05:40:22.947864 containerd[1626]: time="2025-10-13T05:40:22.945731616Z" level=info msg="Container 4ab2b6d3dbae64a6fdbed77a609312d4b3d3c1f0532677d4efba48b762533e88: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:40:22.951721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1366064888.mount: Deactivated successfully. Oct 13 05:40:22.958471 containerd[1626]: time="2025-10-13T05:40:22.958328461Z" level=info msg="CreateContainer within sandbox \"ed7d2dcaa80b10bc87936a73d420773c106959951dfd0abc9b41c68f1e1dbe40\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4ab2b6d3dbae64a6fdbed77a609312d4b3d3c1f0532677d4efba48b762533e88\"" Oct 13 05:40:22.959633 containerd[1626]: time="2025-10-13T05:40:22.959605640Z" level=info msg="StartContainer for \"4ab2b6d3dbae64a6fdbed77a609312d4b3d3c1f0532677d4efba48b762533e88\"" Oct 13 05:40:22.961113 containerd[1626]: time="2025-10-13T05:40:22.961084719Z" level=info msg="connecting to shim 4ab2b6d3dbae64a6fdbed77a609312d4b3d3c1f0532677d4efba48b762533e88" address="unix:///run/containerd/s/25d01e3903795e34247ba57600133c4f8178e8ca8af914102ccf58a3c2c7dbee" protocol=ttrpc version=3 Oct 13 05:40:22.992110 systemd[1]: Started cri-containerd-4ab2b6d3dbae64a6fdbed77a609312d4b3d3c1f0532677d4efba48b762533e88.scope - libcontainer container 4ab2b6d3dbae64a6fdbed77a609312d4b3d3c1f0532677d4efba48b762533e88. Oct 13 05:40:23.029494 systemd[1]: cri-containerd-4ab2b6d3dbae64a6fdbed77a609312d4b3d3c1f0532677d4efba48b762533e88.scope: Deactivated successfully. 
Oct 13 05:40:23.030273 containerd[1626]: time="2025-10-13T05:40:23.030236662Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4ab2b6d3dbae64a6fdbed77a609312d4b3d3c1f0532677d4efba48b762533e88\" id:\"4ab2b6d3dbae64a6fdbed77a609312d4b3d3c1f0532677d4efba48b762533e88\" pid:4827 exited_at:{seconds:1760334023 nanos:29625099}" Oct 13 05:40:23.030975 containerd[1626]: time="2025-10-13T05:40:23.030942854Z" level=info msg="received exit event container_id:\"4ab2b6d3dbae64a6fdbed77a609312d4b3d3c1f0532677d4efba48b762533e88\" id:\"4ab2b6d3dbae64a6fdbed77a609312d4b3d3c1f0532677d4efba48b762533e88\" pid:4827 exited_at:{seconds:1760334023 nanos:29625099}" Oct 13 05:40:23.039104 containerd[1626]: time="2025-10-13T05:40:23.039062695Z" level=info msg="StartContainer for \"4ab2b6d3dbae64a6fdbed77a609312d4b3d3c1f0532677d4efba48b762533e88\" returns successfully" Oct 13 05:40:23.056757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ab2b6d3dbae64a6fdbed77a609312d4b3d3c1f0532677d4efba48b762533e88-rootfs.mount: Deactivated successfully. Oct 13 05:40:23.929691 kubelet[2807]: E1013 05:40:23.929615 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:40:23.935749 containerd[1626]: time="2025-10-13T05:40:23.935688097Z" level=info msg="CreateContainer within sandbox \"ed7d2dcaa80b10bc87936a73d420773c106959951dfd0abc9b41c68f1e1dbe40\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 13 05:40:23.950636 containerd[1626]: time="2025-10-13T05:40:23.948865665Z" level=info msg="Container 588521e52d002623930f7baa2402264bd6940f585ec012fbe9b5a90e25225551: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:40:23.968950 containerd[1626]: time="2025-10-13T05:40:23.968869154Z" level=info msg="CreateContainer within sandbox \"ed7d2dcaa80b10bc87936a73d420773c106959951dfd0abc9b41c68f1e1dbe40\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"588521e52d002623930f7baa2402264bd6940f585ec012fbe9b5a90e25225551\"" Oct 13 05:40:23.969787 containerd[1626]: time="2025-10-13T05:40:23.969752891Z" level=info msg="StartContainer for \"588521e52d002623930f7baa2402264bd6940f585ec012fbe9b5a90e25225551\"" Oct 13 05:40:23.971883 containerd[1626]: time="2025-10-13T05:40:23.971106233Z" level=info msg="connecting to shim 588521e52d002623930f7baa2402264bd6940f585ec012fbe9b5a90e25225551" address="unix:///run/containerd/s/25d01e3903795e34247ba57600133c4f8178e8ca8af914102ccf58a3c2c7dbee" protocol=ttrpc version=3 Oct 13 05:40:24.002182 systemd[1]: Started cri-containerd-588521e52d002623930f7baa2402264bd6940f585ec012fbe9b5a90e25225551.scope - libcontainer container 588521e52d002623930f7baa2402264bd6940f585ec012fbe9b5a90e25225551. 
Oct 13 05:40:24.086581 containerd[1626]: time="2025-10-13T05:40:24.086519279Z" level=info msg="StartContainer for \"588521e52d002623930f7baa2402264bd6940f585ec012fbe9b5a90e25225551\" returns successfully" Oct 13 05:40:24.192597 containerd[1626]: time="2025-10-13T05:40:24.192450993Z" level=info msg="TaskExit event in podsandbox handler container_id:\"588521e52d002623930f7baa2402264bd6940f585ec012fbe9b5a90e25225551\" id:\"c935fdc666384355e133aa5fa18e1dafe41b86ca84a68220d8362c84d6c33c3d\" pid:4897 exited_at:{seconds:1760334024 nanos:192045639}" Oct 13 05:40:24.649872 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Oct 13 05:40:24.946709 kubelet[2807]: E1013 05:40:24.946517 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:40:24.971080 kubelet[2807]: I1013 05:40:24.970919 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-64twh" podStartSLOduration=5.970895248 podStartE2EDuration="5.970895248s" podCreationTimestamp="2025-10-13 05:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:40:24.970824845 +0000 UTC m=+115.769761121" watchObservedRunningTime="2025-10-13 05:40:24.970895248 +0000 UTC m=+115.769831484" Oct 13 05:40:25.950142 kubelet[2807]: E1013 05:40:25.950092 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:40:26.430878 containerd[1626]: time="2025-10-13T05:40:26.430741635Z" level=info msg="TaskExit event in podsandbox handler container_id:\"588521e52d002623930f7baa2402264bd6940f585ec012fbe9b5a90e25225551\" id:\"7115fe6e0a29e659c70b7b9d863bef79213f9d2def76eb568a3ae1924b9ee71b\" pid:5023 exit_status:1 exited_at:{seconds:1760334026 nanos:429329703}" Oct 13 05:40:26.471943 kubelet[2807]: E1013 05:40:26.471871 2807 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:33192->127.0.0.1:43111: write tcp 127.0.0.1:33192->127.0.0.1:43111: write: broken pipe Oct 13 05:40:28.589264 containerd[1626]: time="2025-10-13T05:40:28.589201625Z" level=info msg="TaskExit event in podsandbox handler container_id:\"588521e52d002623930f7baa2402264bd6940f585ec012fbe9b5a90e25225551\" id:\"7a36ffce73dcac19ffe17803a3c0a7d504a8feb9a307243c26a37d0e00ca6b65\" pid:5356 exit_status:1 exited_at:{seconds:1760334028 nanos:588769961}" Oct 13 05:40:28.709884 systemd-networkd[1538]: lxc_health: Link UP Oct 13 05:40:28.710757 systemd-networkd[1538]: lxc_health: Gained carrier Oct 13 05:40:29.365211 containerd[1626]: time="2025-10-13T05:40:29.365145262Z" level=info msg="StopPodSandbox for \"8a959cf84193cfdb3c85fe0ea03150c64a1fc31c709e6e9c4a8103545348db84\"" Oct 13 05:40:29.365571 containerd[1626]: time="2025-10-13T05:40:29.365323699Z" level=info msg="TearDown network for sandbox \"8a959cf84193cfdb3c85fe0ea03150c64a1fc31c709e6e9c4a8103545348db84\" successfully" Oct 13 05:40:29.365571 containerd[1626]: time="2025-10-13T05:40:29.365338887Z" level=info msg="StopPodSandbox for \"8a959cf84193cfdb3c85fe0ea03150c64a1fc31c709e6e9c4a8103545348db84\" returns successfully" Oct 13 05:40:29.366109 containerd[1626]: time="2025-10-13T05:40:29.366072700Z" level=info msg="RemovePodSandbox for \"8a959cf84193cfdb3c85fe0ea03150c64a1fc31c709e6e9c4a8103545348db84\"" 
Oct 13 05:40:29.366109 containerd[1626]: time="2025-10-13T05:40:29.366102246Z" level=info msg="Forcibly stopping sandbox \"8a959cf84193cfdb3c85fe0ea03150c64a1fc31c709e6e9c4a8103545348db84\"" Oct 13 05:40:29.366243 containerd[1626]: time="2025-10-13T05:40:29.366166187Z" level=info msg="TearDown network for sandbox \"8a959cf84193cfdb3c85fe0ea03150c64a1fc31c709e6e9c4a8103545348db84\" successfully" Oct 13 05:40:29.368310 containerd[1626]: time="2025-10-13T05:40:29.368272116Z" level=info msg="Ensure that sandbox 8a959cf84193cfdb3c85fe0ea03150c64a1fc31c709e6e9c4a8103545348db84 in task-service has been cleanup successfully" Oct 13 05:40:29.400811 containerd[1626]: time="2025-10-13T05:40:29.400734772Z" level=info msg="RemovePodSandbox \"8a959cf84193cfdb3c85fe0ea03150c64a1fc31c709e6e9c4a8103545348db84\" returns successfully" Oct 13 05:40:29.402174 containerd[1626]: time="2025-10-13T05:40:29.402094615Z" level=info msg="StopPodSandbox for \"1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d\"" Oct 13 05:40:29.402417 containerd[1626]: time="2025-10-13T05:40:29.402387196Z" level=info msg="TearDown network for sandbox \"1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d\" successfully" Oct 13 05:40:29.402417 containerd[1626]: time="2025-10-13T05:40:29.402412323Z" level=info msg="StopPodSandbox for \"1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d\" returns successfully" Oct 13 05:40:29.402989 containerd[1626]: time="2025-10-13T05:40:29.402961720Z" level=info msg="RemovePodSandbox for \"1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d\"" Oct 13 05:40:29.403044 containerd[1626]: time="2025-10-13T05:40:29.402992567Z" level=info msg="Forcibly stopping sandbox \"1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d\"" Oct 13 05:40:29.403117 containerd[1626]: time="2025-10-13T05:40:29.403092686Z" level=info msg="TearDown network for sandbox \"1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d\" successfully" Oct 13 05:40:29.405374 containerd[1626]: time="2025-10-13T05:40:29.405323401Z" level=info msg="Ensure that sandbox 1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d in task-service has been cleanup successfully" Oct 13 05:40:29.411509 containerd[1626]: time="2025-10-13T05:40:29.411206059Z" level=info msg="RemovePodSandbox \"1d0e64739aa073d0d1592663fd0c797ce025895cb99d6a43cc62752476f1d37d\" returns successfully" Oct 13 05:40:29.852648 kubelet[2807]: E1013 05:40:29.852601 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:40:29.961230 kubelet[2807]: E1013 05:40:29.961185 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:40:30.671365 systemd-networkd[1538]: lxc_health: Gained IPv6LL Oct 13 05:40:30.748239 containerd[1626]: time="2025-10-13T05:40:30.748167669Z" level=info msg="TaskExit event in podsandbox handler container_id:\"588521e52d002623930f7baa2402264bd6940f585ec012fbe9b5a90e25225551\" id:\"45d4c76171aa88c24f21020dd91e52c4dbbeb63c40f46ba1b5a1eadd34f1e0a7\" pid:5460 exited_at:{seconds:1760334030 nanos:747436090}" Oct 13 05:40:30.964011 kubelet[2807]: E1013 05:40:30.963773 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Oct 13 05:40:32.896399 containerd[1626]: time="2025-10-13T05:40:32.896328933Z" level=info msg="TaskExit event in podsandbox handler container_id:\"588521e52d002623930f7baa2402264bd6940f585ec012fbe9b5a90e25225551\" id:\"14553de7add06943ad3879e88415875ee656cdde37eaebb0142dfc08f474eb0c\" pid:5487 exited_at:{seconds:1760334032 nanos:895726628}" Oct 13 05:40:35.044517 containerd[1626]: time="2025-10-13T05:40:35.044430012Z" level=info msg="TaskExit event in podsandbox handler container_id:\"588521e52d002623930f7baa2402264bd6940f585ec012fbe9b5a90e25225551\" id:\"9a49ee10bf94dc91b4683a3793a435b93a99d65b26d3d49e8fbd11dc81064da8\" pid:5517 exited_at:{seconds:1760334035 nanos:43984653}" Oct 13 05:40:35.051589 sshd[4632]: Connection closed by 10.0.0.1 port 39734 Oct 13 05:40:35.052185 sshd-session[4629]: pam_unix(sshd:session): session closed for user core Oct 13 05:40:35.056899 systemd[1]: sshd@28-10.0.0.130:22-10.0.0.1:39734.service: Deactivated successfully. Oct 13 05:40:35.059056 systemd[1]: session-29.scope: Deactivated successfully. Oct 13 05:40:35.060020 systemd-logind[1604]: Session 29 logged out. Waiting for processes to exit. Oct 13 05:40:35.061274 systemd-logind[1604]: Removed session 29.