Jul 11 05:23:32.801668 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jul 11 03:36:05 -00 2025
Jul 11 05:23:32.801697 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dfe1af008de84ad21c9c6e2b52b45ca0aecff9e5872ea6ea8c4ddf6ebe77d5c1
Jul 11 05:23:32.801709 kernel: BIOS-provided physical RAM map:
Jul 11 05:23:32.801718 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 11 05:23:32.801726 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 11 05:23:32.801734 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 11 05:23:32.801745 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 11 05:23:32.801756 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 11 05:23:32.801765 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 11 05:23:32.801773 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 11 05:23:32.801782 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 11 05:23:32.801791 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 11 05:23:32.801799 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 11 05:23:32.801808 kernel: NX (Execute Disable) protection: active
Jul 11 05:23:32.801821 kernel: APIC: Static calls initialized
Jul 11 05:23:32.801840 kernel: SMBIOS 2.8 present.
Jul 11 05:23:32.801858 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 11 05:23:32.801868 kernel: DMI: Memory slots populated: 1/1
Jul 11 05:23:32.801878 kernel: Hypervisor detected: KVM
Jul 11 05:23:32.801887 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 11 05:23:32.801896 kernel: kvm-clock: using sched offset of 3273723821 cycles
Jul 11 05:23:32.801906 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 11 05:23:32.801916 kernel: tsc: Detected 2794.748 MHz processor
Jul 11 05:23:32.801929 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 11 05:23:32.801939 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 11 05:23:32.801949 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 11 05:23:32.801958 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 11 05:23:32.801968 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 11 05:23:32.801977 kernel: Using GB pages for direct mapping
Jul 11 05:23:32.801987 kernel: ACPI: Early table checksum verification disabled
Jul 11 05:23:32.801997 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 11 05:23:32.802007 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 05:23:32.802019 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 05:23:32.802029 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 05:23:32.802039 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 11 05:23:32.802049 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 05:23:32.802058 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 05:23:32.802068 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 05:23:32.802078 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 05:23:32.802088 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 11 05:23:32.802104 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 11 05:23:32.802114 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 11 05:23:32.802124 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 11 05:23:32.802134 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 11 05:23:32.802144 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 11 05:23:32.802154 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 11 05:23:32.802166 kernel: No NUMA configuration found
Jul 11 05:23:32.802176 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 11 05:23:32.802187 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jul 11 05:23:32.802197 kernel: Zone ranges:
Jul 11 05:23:32.802207 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 11 05:23:32.802217 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 11 05:23:32.802227 kernel: Normal empty
Jul 11 05:23:32.802237 kernel: Device empty
Jul 11 05:23:32.802247 kernel: Movable zone start for each node
Jul 11 05:23:32.802257 kernel: Early memory node ranges
Jul 11 05:23:32.802270 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 11 05:23:32.802280 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 11 05:23:32.802290 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 11 05:23:32.802300 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 11 05:23:32.802310 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 11 05:23:32.802320 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 11 05:23:32.802330 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 11 05:23:32.802340 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 11 05:23:32.802351 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 11 05:23:32.802363 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 11 05:23:32.802373 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 11 05:23:32.802383 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 11 05:23:32.802410 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 11 05:23:32.802420 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 11 05:23:32.802431 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 11 05:23:32.802441 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 11 05:23:32.802452 kernel: TSC deadline timer available
Jul 11 05:23:32.802462 kernel: CPU topo: Max. logical packages: 1
Jul 11 05:23:32.802475 kernel: CPU topo: Max. logical dies: 1
Jul 11 05:23:32.802485 kernel: CPU topo: Max. dies per package: 1
Jul 11 05:23:32.802495 kernel: CPU topo: Max. threads per core: 1
Jul 11 05:23:32.802505 kernel: CPU topo: Num. cores per package: 4
Jul 11 05:23:32.802515 kernel: CPU topo: Num. threads per package: 4
Jul 11 05:23:32.802525 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 11 05:23:32.802535 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 11 05:23:32.802545 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 11 05:23:32.802555 kernel: kvm-guest: setup PV sched yield
Jul 11 05:23:32.802565 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 11 05:23:32.802577 kernel: Booting paravirtualized kernel on KVM
Jul 11 05:23:32.802588 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 11 05:23:32.802598 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 11 05:23:32.802619 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 11 05:23:32.802630 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 11 05:23:32.802640 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 11 05:23:32.802650 kernel: kvm-guest: PV spinlocks enabled
Jul 11 05:23:32.802660 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 11 05:23:32.802672 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dfe1af008de84ad21c9c6e2b52b45ca0aecff9e5872ea6ea8c4ddf6ebe77d5c1
Jul 11 05:23:32.802685 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 11 05:23:32.802695 kernel: random: crng init done
Jul 11 05:23:32.802705 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 11 05:23:32.802716 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 11 05:23:32.802726 kernel: Fallback order for Node 0: 0
Jul 11 05:23:32.802736 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jul 11 05:23:32.802746 kernel: Policy zone: DMA32
Jul 11 05:23:32.802756 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 11 05:23:32.802769 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 11 05:23:32.802779 kernel: ftrace: allocating 40097 entries in 157 pages
Jul 11 05:23:32.802788 kernel: ftrace: allocated 157 pages with 5 groups
Jul 11 05:23:32.802798 kernel: Dynamic Preempt: voluntary
Jul 11 05:23:32.802808 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 11 05:23:32.802819 kernel: rcu: RCU event tracing is enabled.
Jul 11 05:23:32.802829 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 11 05:23:32.802839 kernel: Trampoline variant of Tasks RCU enabled.
Jul 11 05:23:32.802850 kernel: Rude variant of Tasks RCU enabled.
Jul 11 05:23:32.802862 kernel: Tracing variant of Tasks RCU enabled.
Jul 11 05:23:32.802872 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 11 05:23:32.802882 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 11 05:23:32.802892 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 05:23:32.802902 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 05:23:32.802912 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 05:23:32.802922 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 11 05:23:32.802932 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 11 05:23:32.802953 kernel: Console: colour VGA+ 80x25
Jul 11 05:23:32.802963 kernel: printk: legacy console [ttyS0] enabled
Jul 11 05:23:32.802973 kernel: ACPI: Core revision 20240827
Jul 11 05:23:32.802984 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 11 05:23:32.802996 kernel: APIC: Switch to symmetric I/O mode setup
Jul 11 05:23:32.803007 kernel: x2apic enabled
Jul 11 05:23:32.803017 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 11 05:23:32.803028 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 11 05:23:32.803055 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 11 05:23:32.803077 kernel: kvm-guest: setup PV IPIs
Jul 11 05:23:32.803087 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 11 05:23:32.803098 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jul 11 05:23:32.803109 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 11 05:23:32.803119 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 11 05:23:32.803133 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 11 05:23:32.803144 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 11 05:23:32.803155 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 11 05:23:32.803168 kernel: Spectre V2 : Mitigation: Retpolines
Jul 11 05:23:32.803178 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 11 05:23:32.803189 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 11 05:23:32.803200 kernel: RETBleed: Mitigation: untrained return thunk
Jul 11 05:23:32.803212 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 11 05:23:32.803224 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 11 05:23:32.803236 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 11 05:23:32.803247 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 11 05:23:32.803257 kernel: x86/bugs: return thunk changed
Jul 11 05:23:32.803270 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 11 05:23:32.803280 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 11 05:23:32.803291 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 11 05:23:32.803301 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 11 05:23:32.803311 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 11 05:23:32.803322 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 11 05:23:32.803333 kernel: Freeing SMP alternatives memory: 32K
Jul 11 05:23:32.803343 kernel: pid_max: default: 32768 minimum: 301
Jul 11 05:23:32.803354 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 11 05:23:32.803367 kernel: landlock: Up and running.
Jul 11 05:23:32.803378 kernel: SELinux: Initializing.
Jul 11 05:23:32.803409 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 05:23:32.803421 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 05:23:32.803432 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 11 05:23:32.803443 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 11 05:23:32.803453 kernel: ... version: 0
Jul 11 05:23:32.803464 kernel: ... bit width: 48
Jul 11 05:23:32.803474 kernel: ... generic registers: 6
Jul 11 05:23:32.803488 kernel: ... value mask: 0000ffffffffffff
Jul 11 05:23:32.803499 kernel: ... max period: 00007fffffffffff
Jul 11 05:23:32.803509 kernel: ... fixed-purpose events: 0
Jul 11 05:23:32.803520 kernel: ... event mask: 000000000000003f
Jul 11 05:23:32.803530 kernel: signal: max sigframe size: 1776
Jul 11 05:23:32.803541 kernel: rcu: Hierarchical SRCU implementation.
Jul 11 05:23:32.803552 kernel: rcu: Max phase no-delay instances is 400.
Jul 11 05:23:32.803563 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 11 05:23:32.803573 kernel: smp: Bringing up secondary CPUs ...
Jul 11 05:23:32.803586 kernel: smpboot: x86: Booting SMP configuration:
Jul 11 05:23:32.803597 kernel: .... node #0, CPUs: #1 #2 #3
Jul 11 05:23:32.803607 kernel: smp: Brought up 1 node, 4 CPUs
Jul 11 05:23:32.803628 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 11 05:23:32.803639 kernel: Memory: 2428912K/2571752K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54620K init, 2348K bss, 136904K reserved, 0K cma-reserved)
Jul 11 05:23:32.803650 kernel: devtmpfs: initialized
Jul 11 05:23:32.803661 kernel: x86/mm: Memory block size: 128MB
Jul 11 05:23:32.803671 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 11 05:23:32.803682 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 11 05:23:32.803695 kernel: pinctrl core: initialized pinctrl subsystem
Jul 11 05:23:32.803706 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 11 05:23:32.803717 kernel: audit: initializing netlink subsys (disabled)
Jul 11 05:23:32.803727 kernel: audit: type=2000 audit(1752211409.688:1): state=initialized audit_enabled=0 res=1
Jul 11 05:23:32.803738 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 11 05:23:32.803748 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 11 05:23:32.803759 kernel: cpuidle: using governor menu
Jul 11 05:23:32.803770 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 11 05:23:32.803780 kernel: dca service started, version 1.12.1
Jul 11 05:23:32.803793 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jul 11 05:23:32.803804 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 11 05:23:32.803815 kernel: PCI: Using configuration type 1 for base access
Jul 11 05:23:32.803826 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 11 05:23:32.803836 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 11 05:23:32.803847 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 11 05:23:32.803858 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 11 05:23:32.803868 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 11 05:23:32.803879 kernel: ACPI: Added _OSI(Module Device)
Jul 11 05:23:32.803892 kernel: ACPI: Added _OSI(Processor Device)
Jul 11 05:23:32.803903 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 11 05:23:32.803923 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 11 05:23:32.803942 kernel: ACPI: Interpreter enabled
Jul 11 05:23:32.803953 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 11 05:23:32.803964 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 11 05:23:32.803975 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 11 05:23:32.803989 kernel: PCI: Using E820 reservations for host bridge windows
Jul 11 05:23:32.804000 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 11 05:23:32.804013 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 11 05:23:32.804227 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 11 05:23:32.804373 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 11 05:23:32.804545 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 11 05:23:32.804561 kernel: PCI host bridge to bus 0000:00
Jul 11 05:23:32.804717 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 11 05:23:32.804853 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 11 05:23:32.804983 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 11 05:23:32.805119 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 11 05:23:32.805265 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 11 05:23:32.805424 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 11 05:23:32.805567 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 11 05:23:32.805752 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 11 05:23:32.805917 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 11 05:23:32.806066 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jul 11 05:23:32.806215 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jul 11 05:23:32.806368 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jul 11 05:23:32.806546 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 11 05:23:32.806722 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 11 05:23:32.806877 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jul 11 05:23:32.807034 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jul 11 05:23:32.807185 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 11 05:23:32.807318 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 11 05:23:32.807484 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jul 11 05:23:32.807604 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jul 11 05:23:32.807732 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 11 05:23:32.807898 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 11 05:23:32.808049 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jul 11 05:23:32.808171 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jul 11 05:23:32.808286 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 11 05:23:32.808467 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jul 11 05:23:32.808641 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 11 05:23:32.808786 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 11 05:23:32.808948 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 11 05:23:32.809093 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jul 11 05:23:32.809238 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jul 11 05:23:32.809412 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 11 05:23:32.809565 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jul 11 05:23:32.809580 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 11 05:23:32.809592 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 11 05:23:32.809607 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 11 05:23:32.809627 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 11 05:23:32.809638 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 11 05:23:32.809649 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 11 05:23:32.809660 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 11 05:23:32.809670 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 11 05:23:32.809681 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 11 05:23:32.809692 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 11 05:23:32.809702 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 11 05:23:32.809716 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 11 05:23:32.809727 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 11 05:23:32.809737 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 11 05:23:32.809748 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 11 05:23:32.809759 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 11 05:23:32.809769 kernel: iommu: Default domain type: Translated
Jul 11 05:23:32.809780 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 11 05:23:32.809790 kernel: PCI: Using ACPI for IRQ routing
Jul 11 05:23:32.809801 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 11 05:23:32.809814 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 11 05:23:32.809825 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 11 05:23:32.809979 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 11 05:23:32.810128 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 11 05:23:32.810274 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 11 05:23:32.810289 kernel: vgaarb: loaded
Jul 11 05:23:32.810300 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 11 05:23:32.810311 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 11 05:23:32.810325 kernel: clocksource: Switched to clocksource kvm-clock
Jul 11 05:23:32.810336 kernel: VFS: Disk quotas dquot_6.6.0
Jul 11 05:23:32.810352 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 11 05:23:32.810363 kernel: pnp: PnP ACPI init
Jul 11 05:23:32.810549 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 11 05:23:32.810567 kernel: pnp: PnP ACPI: found 6 devices
Jul 11 05:23:32.810578 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 11 05:23:32.810588 kernel: NET: Registered PF_INET protocol family
Jul 11 05:23:32.810599 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 11 05:23:32.810626 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 11 05:23:32.810637 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 11 05:23:32.810647 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 11 05:23:32.810658 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 11 05:23:32.810668 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 11 05:23:32.810679 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 05:23:32.810689 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 05:23:32.810700 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 11 05:23:32.810713 kernel: NET: Registered PF_XDP protocol family
Jul 11 05:23:32.810851 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 11 05:23:32.810985 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 11 05:23:32.811118 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 11 05:23:32.811261 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 11 05:23:32.811422 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 11 05:23:32.811578 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 11 05:23:32.811593 kernel: PCI: CLS 0 bytes, default 64
Jul 11 05:23:32.811605 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jul 11 05:23:32.811629 kernel: Initialise system trusted keyrings
Jul 11 05:23:32.811640 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 11 05:23:32.811651 kernel: Key type asymmetric registered
Jul 11 05:23:32.811661 kernel: Asymmetric key parser 'x509' registered
Jul 11 05:23:32.811672 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 11 05:23:32.811683 kernel: io scheduler mq-deadline registered
Jul 11 05:23:32.811693 kernel: io scheduler kyber registered
Jul 11 05:23:32.811704 kernel: io scheduler bfq registered
Jul 11 05:23:32.811714 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 11 05:23:32.811729 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 11 05:23:32.811740 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 11 05:23:32.811750 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 11 05:23:32.811761 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 11 05:23:32.811772 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 11 05:23:32.811783 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 11 05:23:32.811793 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 11 05:23:32.811803 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 11 05:23:32.811814 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 11 05:23:32.811969 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 11 05:23:32.812108 kernel: rtc_cmos 00:04: registered as rtc0
Jul 11 05:23:32.812247 kernel: rtc_cmos 00:04: setting system clock to 2025-07-11T05:23:32 UTC (1752211412)
Jul 11 05:23:32.812386 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 11 05:23:32.812417 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 11 05:23:32.812428 kernel: NET: Registered PF_INET6 protocol family
Jul 11 05:23:32.812439 kernel: Segment Routing with IPv6
Jul 11 05:23:32.812450 kernel: In-situ OAM (IOAM) with IPv6
Jul 11 05:23:32.812465 kernel: NET: Registered PF_PACKET protocol family
Jul 11 05:23:32.812476 kernel: Key type dns_resolver registered
Jul 11 05:23:32.812486 kernel: IPI shorthand broadcast: enabled
Jul 11 05:23:32.812497 kernel: sched_clock: Marking stable (2738002020, 106741851)->(2908548907, -63805036)
Jul 11 05:23:32.812507 kernel: registered taskstats version 1
Jul 11 05:23:32.812518 kernel: Loading compiled-in X.509 certificates
Jul 11 05:23:32.812529 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 9703a4b3d6547675037b9597aa24472a5380cc2e'
Jul 11 05:23:32.812539 kernel: Demotion targets for Node 0: null
Jul 11 05:23:32.812550 kernel: Key type .fscrypt registered
Jul 11 05:23:32.812563 kernel: Key type fscrypt-provisioning registered
Jul 11 05:23:32.812573 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 11 05:23:32.812584 kernel: ima: Allocated hash algorithm: sha1
Jul 11 05:23:32.812595 kernel: ima: No architecture policies found
Jul 11 05:23:32.812605 kernel: clk: Disabling unused clocks
Jul 11 05:23:32.812625 kernel: Warning: unable to open an initial console.
Jul 11 05:23:32.812636 kernel: Freeing unused kernel image (initmem) memory: 54620K
Jul 11 05:23:32.812647 kernel: Write protecting the kernel read-only data: 24576k
Jul 11 05:23:32.812661 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 11 05:23:32.812672 kernel: Run /init as init process
Jul 11 05:23:32.812682 kernel: with arguments:
Jul 11 05:23:32.812693 kernel: /init
Jul 11 05:23:32.812703 kernel: with environment:
Jul 11 05:23:32.812714 kernel: HOME=/
Jul 11 05:23:32.812724 kernel: TERM=linux
Jul 11 05:23:32.812734 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 11 05:23:32.812746 systemd[1]: Successfully made /usr/ read-only.
Jul 11 05:23:32.812764 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 11 05:23:32.812791 systemd[1]: Detected virtualization kvm.
Jul 11 05:23:32.812802 systemd[1]: Detected architecture x86-64.
Jul 11 05:23:32.812814 systemd[1]: Running in initrd.
Jul 11 05:23:32.812825 systemd[1]: No hostname configured, using default hostname.
Jul 11 05:23:32.812840 systemd[1]: Hostname set to .
Jul 11 05:23:32.812851 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 05:23:32.812863 systemd[1]: Queued start job for default target initrd.target.
Jul 11 05:23:32.812874 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 05:23:32.812886 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 05:23:32.812899 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 11 05:23:32.812910 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 05:23:32.812922 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 11 05:23:32.812937 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 11 05:23:32.812950 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 11 05:23:32.812962 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 11 05:23:32.812974 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 05:23:32.812986 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 05:23:32.812997 systemd[1]: Reached target paths.target - Path Units.
Jul 11 05:23:32.813009 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 05:23:32.813023 systemd[1]: Reached target swap.target - Swaps.
Jul 11 05:23:32.813034 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 05:23:32.813046 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 05:23:32.813058 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 05:23:32.813069 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 11 05:23:32.813082 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 11 05:23:32.813093 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 05:23:32.813105 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 05:23:32.813117 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 05:23:32.813131 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 05:23:32.813143 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 11 05:23:32.813155 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 05:23:32.813166 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 11 05:23:32.813179 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 11 05:23:32.813196 systemd[1]: Starting systemd-fsck-usr.service...
Jul 11 05:23:32.813208 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 05:23:32.813222 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 05:23:32.813235 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 05:23:32.813248 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 11 05:23:32.813260 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 05:23:32.813274 systemd[1]: Finished systemd-fsck-usr.service.
Jul 11 05:23:32.813312 systemd-journald[220]: Collecting audit messages is disabled.
Jul 11 05:23:32.813344 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 11 05:23:32.813357 systemd-journald[220]: Journal started
Jul 11 05:23:32.813381 systemd-journald[220]: Runtime Journal (/run/log/journal/b94bd92bcb1d45fe9ffa91a10887827d) is 6M, max 48.6M, 42.5M free.
Jul 11 05:23:32.801902 systemd-modules-load[221]: Inserted module 'overlay'
Jul 11 05:23:32.844756 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 05:23:32.844783 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 11 05:23:32.844804 kernel: Bridge firewalling registered
Jul 11 05:23:32.829440 systemd-modules-load[221]: Inserted module 'br_netfilter'
Jul 11 05:23:32.845064 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 05:23:32.848643 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 05:23:32.850746 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 05:23:32.854679 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 05:23:32.858000 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 05:23:32.860570 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 05:23:32.870082 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 05:23:32.879796 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 05:23:32.880332 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 05:23:32.882148 systemd-tmpfiles[240]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 11 05:23:32.886752 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 05:23:32.888747 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 05:23:32.907509 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 05:23:32.909741 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 11 05:23:32.933191 dracut-cmdline[264]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dfe1af008de84ad21c9c6e2b52b45ca0aecff9e5872ea6ea8c4ddf6ebe77d5c1
Jul 11 05:23:32.946050 systemd-resolved[258]: Positive Trust Anchors:
Jul 11 05:23:32.946065 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 05:23:32.946102 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 05:23:32.948958 systemd-resolved[258]: Defaulting to hostname 'linux'.
Jul 11 05:23:32.949981 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 05:23:32.955323 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 05:23:33.026431 kernel: SCSI subsystem initialized
Jul 11 05:23:33.035427 kernel: Loading iSCSI transport class v2.0-870.
Jul 11 05:23:33.045418 kernel: iscsi: registered transport (tcp)
Jul 11 05:23:33.066418 kernel: iscsi: registered transport (qla4xxx)
Jul 11 05:23:33.066453 kernel: QLogic iSCSI HBA Driver
Jul 11 05:23:33.084015 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 11 05:23:33.109896 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 05:23:33.111515 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 11 05:23:33.156349 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 11 05:23:33.158939 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 11 05:23:33.213416 kernel: raid6: avx2x4 gen() 29756 MB/s
Jul 11 05:23:33.230412 kernel: raid6: avx2x2 gen() 30949 MB/s
Jul 11 05:23:33.247447 kernel: raid6: avx2x1 gen() 25869 MB/s
Jul 11 05:23:33.247467 kernel: raid6: using algorithm avx2x2 gen() 30949 MB/s
Jul 11 05:23:33.265471 kernel: raid6: .... xor() 19870 MB/s, rmw enabled
Jul 11 05:23:33.265503 kernel: raid6: using avx2x2 recovery algorithm
Jul 11 05:23:33.285423 kernel: xor: automatically using best checksumming function avx
Jul 11 05:23:33.446433 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 11 05:23:33.454106 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 05:23:33.455985 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 05:23:33.489974 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Jul 11 05:23:33.495564 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 05:23:33.496484 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 11 05:23:33.522783 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Jul 11 05:23:33.549201 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 05:23:33.552779 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 05:23:33.631924 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 05:23:33.635504 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 11 05:23:33.687482 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jul 11 05:23:33.687538 kernel: cryptd: max_cpu_qlen set to 1000
Jul 11 05:23:33.691435 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 11 05:23:33.705140 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 11 05:23:33.709175 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 05:23:33.709241 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 05:23:33.717879 kernel: libata version 3.00 loaded.
Jul 11 05:23:33.717908 kernel: AES CTR mode by8 optimization enabled
Jul 11 05:23:33.717922 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 11 05:23:33.717943 kernel: GPT:9289727 != 19775487
Jul 11 05:23:33.717956 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 11 05:23:33.717969 kernel: GPT:9289727 != 19775487
Jul 11 05:23:33.717982 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 11 05:23:33.717995 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 05:23:33.717260 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 05:23:33.724652 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 05:23:33.733425 kernel: ahci 0000:00:1f.2: version 3.0
Jul 11 05:23:33.733653 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 11 05:23:33.736850 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jul 11 05:23:33.737092 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jul 11 05:23:33.737259 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 11 05:23:33.740445 kernel: scsi host0: ahci
Jul 11 05:23:33.740658 kernel: scsi host1: ahci
Jul 11 05:23:33.741915 kernel: scsi host2: ahci
Jul 11 05:23:33.743107 kernel: scsi host3: ahci
Jul 11 05:23:33.744427 kernel: scsi host4: ahci
Jul 11 05:23:33.744647 kernel: scsi host5: ahci
Jul 11 05:23:33.749548 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0
Jul 11 05:23:33.749596 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0
Jul 11 05:23:33.749611 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0
Jul 11 05:23:33.749625 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0
Jul 11 05:23:33.749638 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0
Jul 11 05:23:33.749651 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0
Jul 11 05:23:33.778598 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 11 05:23:33.804839 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 11 05:23:33.805122 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 05:23:33.817834 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 05:23:33.831165 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 11 05:23:33.831231 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 11 05:23:33.835430 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 11 05:23:33.864436 disk-uuid[633]: Primary Header is updated.
Jul 11 05:23:33.864436 disk-uuid[633]: Secondary Entries is updated.
Jul 11 05:23:33.864436 disk-uuid[633]: Secondary Header is updated.
Jul 11 05:23:33.868152 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 05:23:33.871423 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 05:23:34.054433 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 11 05:23:34.054519 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 11 05:23:34.055418 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 11 05:23:34.055443 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 11 05:23:34.057068 kernel: ata3.00: applying bridge limits
Jul 11 05:23:34.057090 kernel: ata3.00: configured for UDMA/100
Jul 11 05:23:34.063417 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 11 05:23:34.063443 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 11 05:23:34.064413 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 11 05:23:34.066427 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 11 05:23:34.116458 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 11 05:23:34.116797 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 11 05:23:34.142457 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 11 05:23:34.533287 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 11 05:23:34.535102 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 05:23:34.536921 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 05:23:34.536986 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 05:23:34.538181 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 11 05:23:34.565199 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 05:23:34.872459 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 05:23:34.873516 disk-uuid[634]: The operation has completed successfully.
Jul 11 05:23:34.903378 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 11 05:23:34.903526 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 11 05:23:34.945442 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 11 05:23:34.973609 sh[662]: Success
Jul 11 05:23:34.991730 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 11 05:23:34.991761 kernel: device-mapper: uevent: version 1.0.3
Jul 11 05:23:34.992805 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 11 05:23:35.002424 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jul 11 05:23:35.033215 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 11 05:23:35.036284 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 11 05:23:35.055700 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 11 05:23:35.062423 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 11 05:23:35.062487 kernel: BTRFS: device fsid 5947ac9d-360e-47c3-9a17-c6b228910c06 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (674)
Jul 11 05:23:35.065442 kernel: BTRFS info (device dm-0): first mount of filesystem 5947ac9d-360e-47c3-9a17-c6b228910c06
Jul 11 05:23:35.065472 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 11 05:23:35.067414 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 11 05:23:35.072412 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 11 05:23:35.073119 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 11 05:23:35.075379 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 11 05:23:35.076463 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 11 05:23:35.079310 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 11 05:23:35.113435 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (709)
Jul 11 05:23:35.113504 kernel: BTRFS info (device vda6): first mount of filesystem da2de3c6-95dc-4a43-9a95-74c8b7ce9719
Jul 11 05:23:35.113532 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 05:23:35.114898 kernel: BTRFS info (device vda6): using free-space-tree
Jul 11 05:23:35.122459 kernel: BTRFS info (device vda6): last unmount of filesystem da2de3c6-95dc-4a43-9a95-74c8b7ce9719
Jul 11 05:23:35.123780 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 11 05:23:35.125039 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 11 05:23:35.211510 ignition[754]: Ignition 2.21.0
Jul 11 05:23:35.211866 ignition[754]: Stage: fetch-offline
Jul 11 05:23:35.211912 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Jul 11 05:23:35.211923 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 05:23:35.212246 ignition[754]: parsed url from cmdline: ""
Jul 11 05:23:35.212251 ignition[754]: no config URL provided
Jul 11 05:23:35.212259 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Jul 11 05:23:35.216235 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 05:23:35.212270 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Jul 11 05:23:35.220668 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 05:23:35.212301 ignition[754]: op(1): [started] loading QEMU firmware config module
Jul 11 05:23:35.212308 ignition[754]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 11 05:23:35.224970 ignition[754]: op(1): [finished] loading QEMU firmware config module
Jul 11 05:23:35.267419 ignition[754]: parsing config with SHA512: f2eb662e35c6402c454a6c3380b5cad6e065b89754c64d67e143b482d6994d327f32aa515d12db95b64e0f19b1cbd2cc657961c3daf71af0149ce9d3a73ff130
Jul 11 05:23:35.270999 unknown[754]: fetched base config from "system"
Jul 11 05:23:35.271783 unknown[754]: fetched user config from "qemu"
Jul 11 05:23:35.272215 ignition[754]: fetch-offline: fetch-offline passed
Jul 11 05:23:35.274863 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 05:23:35.272301 ignition[754]: Ignition finished successfully
Jul 11 05:23:35.280372 systemd-networkd[851]: lo: Link UP
Jul 11 05:23:35.280384 systemd-networkd[851]: lo: Gained carrier
Jul 11 05:23:35.281905 systemd-networkd[851]: Enumeration completed
Jul 11 05:23:35.282112 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 05:23:35.282280 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 05:23:35.282284 systemd-networkd[851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 05:23:35.282791 systemd-networkd[851]: eth0: Link UP
Jul 11 05:23:35.282796 systemd-networkd[851]: eth0: Gained carrier
Jul 11 05:23:35.282806 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 05:23:35.284429 systemd[1]: Reached target network.target - Network.
Jul 11 05:23:35.284839 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 11 05:23:35.290623 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 11 05:23:35.316469 systemd-networkd[851]: eth0: DHCPv4 address 10.0.0.94/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 05:23:35.336205 ignition[855]: Ignition 2.21.0
Jul 11 05:23:35.336220 ignition[855]: Stage: kargs
Jul 11 05:23:35.336423 ignition[855]: no configs at "/usr/lib/ignition/base.d"
Jul 11 05:23:35.336436 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 05:23:35.338087 ignition[855]: kargs: kargs passed
Jul 11 05:23:35.338137 ignition[855]: Ignition finished successfully
Jul 11 05:23:35.345463 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 11 05:23:35.347676 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 11 05:23:35.387094 ignition[864]: Ignition 2.21.0
Jul 11 05:23:35.387105 ignition[864]: Stage: disks
Jul 11 05:23:35.387252 ignition[864]: no configs at "/usr/lib/ignition/base.d"
Jul 11 05:23:35.387262 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 05:23:35.388079 ignition[864]: disks: disks passed
Jul 11 05:23:35.388128 ignition[864]: Ignition finished successfully
Jul 11 05:23:35.392008 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 11 05:23:35.393467 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 11 05:23:35.395378 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 11 05:23:35.397755 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 05:23:35.399753 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 05:23:35.401572 systemd[1]: Reached target basic.target - Basic System.
Jul 11 05:23:35.404287 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 11 05:23:35.439961 systemd-resolved[258]: Detected conflict on linux IN A 10.0.0.94
Jul 11 05:23:35.439972 systemd-resolved[258]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
Jul 11 05:23:35.443694 systemd-fsck[874]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 11 05:23:35.451219 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 11 05:23:35.452190 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 11 05:23:35.560431 kernel: EXT4-fs (vda9): mounted filesystem 68e263c6-913a-4fa8-894f-6e89b186e148 r/w with ordered data mode. Quota mode: none.
Jul 11 05:23:35.560600 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 11 05:23:35.562025 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 11 05:23:35.564675 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 05:23:35.566228 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 11 05:23:35.567515 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 11 05:23:35.567581 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 11 05:23:35.567613 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 05:23:35.583772 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 11 05:23:35.585210 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 11 05:23:35.590414 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (883)
Jul 11 05:23:35.593981 kernel: BTRFS info (device vda6): first mount of filesystem da2de3c6-95dc-4a43-9a95-74c8b7ce9719
Jul 11 05:23:35.594004 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 05:23:35.594015 kernel: BTRFS info (device vda6): using free-space-tree
Jul 11 05:23:35.598934 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 05:23:35.624684 initrd-setup-root[907]: cut: /sysroot/etc/passwd: No such file or directory
Jul 11 05:23:35.630501 initrd-setup-root[914]: cut: /sysroot/etc/group: No such file or directory
Jul 11 05:23:35.636079 initrd-setup-root[921]: cut: /sysroot/etc/shadow: No such file or directory
Jul 11 05:23:35.641086 initrd-setup-root[928]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 11 05:23:35.731077 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 11 05:23:35.733459 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 11 05:23:35.736086 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 11 05:23:35.752407 kernel: BTRFS info (device vda6): last unmount of filesystem da2de3c6-95dc-4a43-9a95-74c8b7ce9719
Jul 11 05:23:35.765511 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 11 05:23:35.780904 ignition[997]: INFO : Ignition 2.21.0
Jul 11 05:23:35.780904 ignition[997]: INFO : Stage: mount
Jul 11 05:23:35.782867 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 05:23:35.782867 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 05:23:35.785429 ignition[997]: INFO : mount: mount passed
Jul 11 05:23:35.785429 ignition[997]: INFO : Ignition finished successfully
Jul 11 05:23:35.786328 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 11 05:23:35.789377 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 11 05:23:36.063433 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 11 05:23:36.065639 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 05:23:36.087987 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1009)
Jul 11 05:23:36.088042 kernel: BTRFS info (device vda6): first mount of filesystem da2de3c6-95dc-4a43-9a95-74c8b7ce9719
Jul 11 05:23:36.088064 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 05:23:36.089472 kernel: BTRFS info (device vda6): using free-space-tree
Jul 11 05:23:36.093684 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 05:23:36.134797 ignition[1026]: INFO : Ignition 2.21.0
Jul 11 05:23:36.134797 ignition[1026]: INFO : Stage: files
Jul 11 05:23:36.136772 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 05:23:36.136772 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 05:23:36.139054 ignition[1026]: DEBUG : files: compiled without relabeling support, skipping
Jul 11 05:23:36.140343 ignition[1026]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 11 05:23:36.140343 ignition[1026]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 11 05:23:36.143491 ignition[1026]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 11 05:23:36.145026 ignition[1026]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 11 05:23:36.145026 ignition[1026]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 11 05:23:36.144268 unknown[1026]: wrote ssh authorized keys file for user: core
Jul 11 05:23:36.149288 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 11 05:23:36.149288 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 11 05:23:36.192174 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 11 05:23:36.430744 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 11 05:23:36.430744 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 11 05:23:36.434679 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 11 05:23:36.787104 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 11 05:23:36.807668 systemd-networkd[851]: eth0: Gained IPv6LL
Jul 11 05:23:36.885224 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 11 05:23:36.885224 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 11 05:23:36.889013 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 11 05:23:36.889013 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 05:23:36.889013 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 05:23:36.889013 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 05:23:36.889013 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 05:23:36.889013 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 05:23:36.889013 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 05:23:36.998596 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 05:23:37.000671 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 05:23:37.000671 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 11 05:23:37.062897 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 11 05:23:37.062897 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 11 05:23:37.067650 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 11 05:23:37.436121 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 11 05:23:37.942145 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 11 05:23:37.942145 ignition[1026]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 11 05:23:37.946029 ignition[1026]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 05:23:37.951951 ignition[1026]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 05:23:37.951951 ignition[1026]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 11 05:23:37.951951 ignition[1026]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 11 05:23:37.956816 ignition[1026]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 05:23:37.959101 ignition[1026]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 05:23:37.959101 ignition[1026]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 11 05:23:37.962281 ignition[1026]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 11 05:23:37.981305 ignition[1026]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 05:23:37.987498 ignition[1026]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 05:23:37.989166 ignition[1026]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 11 05:23:37.989166 ignition[1026]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 11 05:23:37.989166 ignition[1026]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 11 05:23:37.989166 ignition[1026]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 05:23:37.989166 ignition[1026]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 05:23:37.989166 ignition[1026]: INFO : files: files passed
Jul 11 05:23:37.989166 ignition[1026]: INFO : Ignition finished successfully
Jul 11 05:23:37.994693 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 11 05:23:37.997598 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 11 05:23:38.000140 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 11 05:23:38.020438 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 11 05:23:38.020715 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 11 05:23:38.024476 initrd-setup-root-after-ignition[1055]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 11 05:23:38.026946 initrd-setup-root-after-ignition[1057]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 05:23:38.028617 initrd-setup-root-after-ignition[1057]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 05:23:38.030095 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 05:23:38.032953 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 05:23:38.033196 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 11 05:23:38.037844 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 11 05:23:38.090902 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 11 05:23:38.092212 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 11 05:23:38.095292 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 11 05:23:38.095402 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 11 05:23:38.098757 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 11 05:23:38.099699 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 11 05:23:38.141286 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 05:23:38.143055 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 11 05:23:38.166939 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 11 05:23:38.168297 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 05:23:38.170701 systemd[1]: Stopped target timers.target - Timer Units.
Jul 11 05:23:38.173555 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 11 05:23:38.173762 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 05:23:38.176944 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 11 05:23:38.177124 systemd[1]: Stopped target basic.target - Basic System.
Jul 11 05:23:38.178959 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 11 05:23:38.180819 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 05:23:38.181172 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 11 05:23:38.181558 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 11 05:23:38.181901 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 11 05:23:38.182274 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 05:23:38.182879 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 11 05:23:38.183217 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 11 05:23:38.183711 systemd[1]: Stopped target swap.target - Swaps.
Jul 11 05:23:38.183989 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 11 05:23:38.184173 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 05:23:38.202571 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 11 05:23:38.202766 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 05:23:38.204984 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 11 05:23:38.208566 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 05:23:38.211507 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 11 05:23:38.211668 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 11 05:23:38.215130 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 11 05:23:38.215243 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 05:23:38.216892 systemd[1]: Stopped target paths.target - Path Units.
Jul 11 05:23:38.220099 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 11 05:23:38.224536 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 05:23:38.226113 systemd[1]: Stopped target slices.target - Slice Units.
Jul 11 05:23:38.227824 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 11 05:23:38.229846 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 11 05:23:38.229980 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 05:23:38.231784 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 11 05:23:38.231879 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 05:23:38.233735 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 11 05:23:38.233857 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 05:23:38.237159 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 11 05:23:38.237304 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 11 05:23:38.239250 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 11 05:23:38.242736 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 11 05:23:38.246112 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 11 05:23:38.246292 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 05:23:38.248336 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 11 05:23:38.248464 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 05:23:38.257123 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 11 05:23:38.258235 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 11 05:23:38.272413 ignition[1081]: INFO : Ignition 2.21.0
Jul 11 05:23:38.272413 ignition[1081]: INFO : Stage: umount
Jul 11 05:23:38.272413 ignition[1081]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 05:23:38.272413 ignition[1081]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 05:23:38.276598 ignition[1081]: INFO : umount: umount passed
Jul 11 05:23:38.276598 ignition[1081]: INFO : Ignition finished successfully
Jul 11 05:23:38.279820 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 11 05:23:38.280448 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 11 05:23:38.280582 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 11 05:23:38.281010 systemd[1]: Stopped target network.target - Network.
Jul 11 05:23:38.282627 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 11 05:23:38.282677 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 11 05:23:38.285333 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 11 05:23:38.285430 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 11 05:23:38.286142 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 11 05:23:38.286196 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 11 05:23:38.286772 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 11 05:23:38.286814 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 11 05:23:38.287303 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 11 05:23:38.287738 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 11 05:23:38.296371 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 11 05:23:38.296579 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 11 05:23:38.302056 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 11 05:23:38.302309 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 11 05:23:38.302442 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 11 05:23:38.307144 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 11 05:23:38.307873 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 11 05:23:38.309436 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 11 05:23:38.309490 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 05:23:38.310584 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 11 05:23:38.312712 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 11 05:23:38.312769 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 05:23:38.313087 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 11 05:23:38.313126 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 11 05:23:38.318434 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 11 05:23:38.318492 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 11 05:23:38.319050 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 11 05:23:38.319090 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 05:23:38.324645 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 05:23:38.327907 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 11 05:23:38.327993 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 11 05:23:38.338071 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 11 05:23:38.339143 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 11 05:23:38.342170 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 11 05:23:38.342359 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 05:23:38.344624 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 11 05:23:38.344669 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 11 05:23:38.345697 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 11 05:23:38.345736 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 05:23:38.345993 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 11 05:23:38.346038 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 05:23:38.346814 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 11 05:23:38.346861 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 11 05:23:38.347644 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 11 05:23:38.347696 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 05:23:38.349112 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 11 05:23:38.357451 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 11 05:23:38.357521 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 05:23:38.361682 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 11 05:23:38.361735 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 05:23:38.365006 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 11 05:23:38.365053 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 05:23:38.369879 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 11 05:23:38.369942 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 05:23:38.371185 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 05:23:38.371241 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 05:23:38.377496 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 11 05:23:38.377564 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jul 11 05:23:38.377614 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 11 05:23:38.377661 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 11 05:23:38.378027 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 11 05:23:38.378134 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 11 05:23:38.455385 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 11 05:23:38.455559 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 11 05:23:38.457237 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 11 05:23:38.458637 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 11 05:23:38.458713 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 11 05:23:38.461678 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 11 05:23:38.499302 systemd[1]: Switching root.
Jul 11 05:23:38.540737 systemd-journald[220]: Journal stopped
Jul 11 05:23:40.459696 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Jul 11 05:23:40.459783 kernel: SELinux: policy capability network_peer_controls=1
Jul 11 05:23:40.459802 kernel: SELinux: policy capability open_perms=1
Jul 11 05:23:40.459824 kernel: SELinux: policy capability extended_socket_class=1
Jul 11 05:23:40.459839 kernel: SELinux: policy capability always_check_network=0
Jul 11 05:23:40.459853 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 11 05:23:40.459868 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 11 05:23:40.459887 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 11 05:23:40.459908 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 11 05:23:40.459930 kernel: SELinux: policy capability userspace_initial_context=0
Jul 11 05:23:40.459945 kernel: audit: type=1403 audit(1752211418.950:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 11 05:23:40.459961 systemd[1]: Successfully loaded SELinux policy in 64.170ms.
Jul 11 05:23:40.459981 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.946ms.
Jul 11 05:23:40.460005 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 11 05:23:40.460021 systemd[1]: Detected virtualization kvm.
Jul 11 05:23:40.460039 systemd[1]: Detected architecture x86-64.
Jul 11 05:23:40.460058 systemd[1]: Detected first boot.
Jul 11 05:23:40.460081 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 05:23:40.460097 zram_generator::config[1126]: No configuration found.
Jul 11 05:23:40.460114 kernel: Guest personality initialized and is inactive
Jul 11 05:23:40.460129 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 11 05:23:40.460144 kernel: Initialized host personality
Jul 11 05:23:40.460159 kernel: NET: Registered PF_VSOCK protocol family
Jul 11 05:23:40.460174 systemd[1]: Populated /etc with preset unit settings.
Jul 11 05:23:40.460195 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 11 05:23:40.460210 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 11 05:23:40.460226 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 11 05:23:40.460241 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 11 05:23:40.460256 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 11 05:23:40.460271 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 11 05:23:40.460287 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 11 05:23:40.460302 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 11 05:23:40.460318 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 11 05:23:40.460337 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 11 05:23:40.460351 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 11 05:23:40.460366 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 11 05:23:40.460383 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 05:23:40.460416 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 05:23:40.460441 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 11 05:23:40.460455 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 11 05:23:40.460470 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 11 05:23:40.460490 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 05:23:40.460506 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 11 05:23:40.460521 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 05:23:40.460537 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 05:23:40.460553 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 11 05:23:40.460569 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 11 05:23:40.460585 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 11 05:23:40.460600 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 11 05:23:40.460619 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 05:23:40.460638 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 05:23:40.460656 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 05:23:40.460675 systemd[1]: Reached target swap.target - Swaps.
Jul 11 05:23:40.460692 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 11 05:23:40.460708 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 11 05:23:40.460724 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 11 05:23:40.460740 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 05:23:40.460758 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 05:23:40.460774 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 05:23:40.460794 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 11 05:23:40.460810 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 11 05:23:40.460829 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 11 05:23:40.460845 systemd[1]: Mounting media.mount - External Media Directory...
Jul 11 05:23:40.460861 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 05:23:40.460878 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 11 05:23:40.460894 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 11 05:23:40.460910 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 11 05:23:40.460929 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 11 05:23:40.460945 systemd[1]: Reached target machines.target - Containers.
Jul 11 05:23:40.460961 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 11 05:23:40.460978 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 05:23:40.460993 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 05:23:40.461009 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 11 05:23:40.461026 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 05:23:40.461042 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 05:23:40.461057 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 05:23:40.461076 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 11 05:23:40.461092 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 05:23:40.461110 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 11 05:23:40.461126 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 11 05:23:40.461142 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 11 05:23:40.461158 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 11 05:23:40.461175 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 11 05:23:40.461192 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 05:23:40.461210 kernel: loop: module loaded
Jul 11 05:23:40.461226 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 05:23:40.461241 kernel: fuse: init (API version 7.41)
Jul 11 05:23:40.461256 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 05:23:40.461272 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 11 05:23:40.461288 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 11 05:23:40.461304 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 11 05:23:40.461320 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 05:23:40.461339 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 11 05:23:40.461355 systemd[1]: Stopped verity-setup.service.
Jul 11 05:23:40.461372 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 05:23:40.461389 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 11 05:23:40.461444 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 11 05:23:40.461461 systemd[1]: Mounted media.mount - External Media Directory.
Jul 11 05:23:40.461477 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 11 05:23:40.461496 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 11 05:23:40.461511 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 11 05:23:40.461527 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 11 05:23:40.461543 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 05:23:40.461588 systemd-journald[1201]: Collecting audit messages is disabled.
Jul 11 05:23:40.461618 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 11 05:23:40.461634 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 11 05:23:40.461651 systemd-journald[1201]: Journal started
Jul 11 05:23:40.461680 systemd-journald[1201]: Runtime Journal (/run/log/journal/b94bd92bcb1d45fe9ffa91a10887827d) is 6M, max 48.6M, 42.5M free.
Jul 11 05:23:39.496026 systemd[1]: Queued start job for default target multi-user.target.
Jul 11 05:23:39.522828 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 11 05:23:39.523344 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 11 05:23:40.465524 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 05:23:40.466765 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 05:23:40.467075 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 05:23:40.468296 kernel: ACPI: bus type drm_connector registered
Jul 11 05:23:40.468702 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 05:23:40.468970 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 05:23:40.470595 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 05:23:40.470811 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 05:23:40.472187 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 11 05:23:40.472574 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 11 05:23:40.474001 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 05:23:40.474211 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 05:23:40.475821 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 05:23:40.477224 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 05:23:40.478779 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 11 05:23:40.480701 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 11 05:23:40.499608 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 11 05:23:40.502716 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 11 05:23:40.505098 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 11 05:23:40.506321 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 11 05:23:40.506357 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 05:23:40.508617 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 11 05:23:40.525171 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 11 05:23:40.526917 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 05:23:40.528671 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 11 05:23:40.531875 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 11 05:23:40.533879 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 05:23:40.536501 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 11 05:23:40.537717 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 05:23:40.539238 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 05:23:40.543604 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 11 05:23:40.548633 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 11 05:23:40.552184 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 05:23:40.555215 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 11 05:23:40.556000 systemd-journald[1201]: Time spent on flushing to /var/log/journal/b94bd92bcb1d45fe9ffa91a10887827d is 47.635ms for 987 entries.
Jul 11 05:23:40.556000 systemd-journald[1201]: System Journal (/var/log/journal/b94bd92bcb1d45fe9ffa91a10887827d) is 8M, max 195.6M, 187.6M free.
Jul 11 05:23:40.615846 systemd-journald[1201]: Received client request to flush runtime journal.
Jul 11 05:23:40.615889 kernel: loop0: detected capacity change from 0 to 114000
Jul 11 05:23:40.558841 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 11 05:23:40.574405 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 11 05:23:40.576430 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 11 05:23:40.579562 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 11 05:23:40.592620 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jul 11 05:23:40.592632 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jul 11 05:23:40.605145 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 05:23:40.624451 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 11 05:23:40.632762 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 11 05:23:40.708812 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 05:23:40.712676 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 11 05:23:40.732474 kernel: loop1: detected capacity change from 0 to 146488
Jul 11 05:23:40.735152 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 11 05:23:40.737428 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 11 05:23:40.755924 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 11 05:23:40.758997 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 05:23:40.772725 kernel: loop2: detected capacity change from 0 to 221472
Jul 11 05:23:40.789721 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Jul 11 05:23:40.789745 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Jul 11 05:23:40.796148 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 05:23:40.809442 kernel: loop3: detected capacity change from 0 to 114000
Jul 11 05:23:40.820473 kernel: loop4: detected capacity change from 0 to 146488
Jul 11 05:23:40.833487 kernel: loop5: detected capacity change from 0 to 221472
Jul 11 05:23:40.847200 (sd-merge)[1271]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 11 05:23:40.847827 (sd-merge)[1271]: Merged extensions into '/usr'.
Jul 11 05:23:40.851927 systemd[1]: Reload requested from client PID 1245 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 11 05:23:40.851944 systemd[1]: Reloading...
Jul 11 05:23:40.930446 zram_generator::config[1293]: No configuration found.
Jul 11 05:23:41.023947 ldconfig[1240]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 11 05:23:41.050713 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 05:23:41.141273 systemd[1]: Reloading finished in 288 ms.
Jul 11 05:23:41.174914 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 11 05:23:41.176563 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 11 05:23:41.178066 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 11 05:23:41.197101 systemd[1]: Starting ensure-sysext.service...
Jul 11 05:23:41.199077 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 05:23:41.201417 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 05:23:41.213595 systemd[1]: Reload requested from client PID 1335 ('systemctl') (unit ensure-sysext.service)...
Jul 11 05:23:41.213610 systemd[1]: Reloading...
Jul 11 05:23:41.227688 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 11 05:23:41.227738 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 11 05:23:41.228085 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 11 05:23:41.228388 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 11 05:23:41.229294 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 11 05:23:41.229630 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Jul 11 05:23:41.229724 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Jul 11 05:23:41.234000 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 05:23:41.234014 systemd-tmpfiles[1336]: Skipping /boot
Jul 11 05:23:41.238188 systemd-udevd[1337]: Using default interface naming scheme 'v255'.
Jul 11 05:23:41.244970 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 05:23:41.244985 systemd-tmpfiles[1336]: Skipping /boot
Jul 11 05:23:41.290443 zram_generator::config[1371]: No configuration found.
Jul 11 05:23:41.412441 kernel: mousedev: PS/2 mouse device common for all mice
Jul 11 05:23:41.421435 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 11 05:23:41.427433 kernel: ACPI: button: Power Button [PWRF]
Jul 11 05:23:41.457418 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 11 05:23:41.457690 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 11 05:23:41.444347 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 05:23:41.563879 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 11 05:23:41.564546 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 05:23:41.677962 systemd[1]: Reloading finished in 464 ms.
Jul 11 05:23:41.731075 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 05:23:41.747323 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 05:23:41.761854 kernel: kvm_amd: TSC scaling supported
Jul 11 05:23:41.761947 kernel: kvm_amd: Nested Virtualization enabled
Jul 11 05:23:41.761967 kernel: kvm_amd: Nested Paging enabled
Jul 11 05:23:41.762863 kernel: kvm_amd: LBR virtualization supported
Jul 11 05:23:41.762905 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 11 05:23:41.765088 kernel: kvm_amd: Virtual GIF supported
Jul 11 05:23:41.789448 kernel: EDAC MC: Ver: 3.0.0
Jul 11 05:23:41.801822 systemd[1]: Finished ensure-sysext.service.
Jul 11 05:23:41.805625 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 05:23:41.806891 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 11 05:23:41.809756 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 11 05:23:41.811243 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 05:23:41.812615 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 05:23:41.824544 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 05:23:41.827506 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 05:23:41.830212 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 05:23:41.831848 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 05:23:41.833601 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 11 05:23:41.837484 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 05:23:41.839015 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 11 05:23:41.844303 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 05:23:41.855499 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 05:23:41.859101 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 11 05:23:41.865749 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 11 05:23:41.867137 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 05:23:41.867203 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 05:23:41.868505 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 05:23:41.868788 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 05:23:41.869213 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 05:23:41.873841 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 05:23:41.875655 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 05:23:41.875867 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 05:23:41.877849 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 05:23:41.878120 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 05:23:41.882260 augenrules[1490]: No rules
Jul 11 05:23:41.883448 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 11 05:23:41.883818 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 11 05:23:41.885928 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 11 05:23:41.893036 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 11 05:23:41.904429 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 11 05:23:41.905318 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 05:23:41.905439 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 05:23:41.907057 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 11 05:23:41.908804 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 11 05:23:41.916775 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 11 05:23:41.917125 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 11 05:23:41.925993 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 11 05:23:41.962497 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 11 05:23:41.978929 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 05:23:42.036118 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 11 05:23:42.037823 systemd[1]: Reached target time-set.target - System Time Set.
Jul 11 05:23:42.048187 systemd-networkd[1473]: lo: Link UP
Jul 11 05:23:42.048196 systemd-networkd[1473]: lo: Gained carrier
Jul 11 05:23:42.050177 systemd-networkd[1473]: Enumeration completed
Jul 11 05:23:42.050274 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 05:23:42.053096 systemd-networkd[1473]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 05:23:42.053105 systemd-networkd[1473]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 05:23:42.053436 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 11 05:23:42.054057 systemd-networkd[1473]: eth0: Link UP
Jul 11 05:23:42.054271 systemd-networkd[1473]: eth0: Gained carrier
Jul 11 05:23:42.054291 systemd-networkd[1473]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 05:23:42.056305 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 11 05:23:42.058199 systemd-resolved[1481]: Positive Trust Anchors:
Jul 11 05:23:42.058220 systemd-resolved[1481]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 05:23:42.058251 systemd-resolved[1481]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 05:23:42.062040 systemd-resolved[1481]: Defaulting to hostname 'linux'.
Jul 11 05:23:42.063695 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 05:23:42.065101 systemd[1]: Reached target network.target - Network.
Jul 11 05:23:42.066193 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 05:23:42.067578 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 05:23:42.068946 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 11 05:23:42.069447 systemd-networkd[1473]: eth0: DHCPv4 address 10.0.0.94/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 05:23:42.070035 systemd-timesyncd[1482]: Network configuration changed, trying to establish connection.
Jul 11 05:23:42.070331 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 11 05:23:42.071665 systemd-timesyncd[1482]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 11 05:23:42.071714 systemd-timesyncd[1482]: Initial clock synchronization to Fri 2025-07-11 05:23:42.054725 UTC.
Jul 11 05:23:42.071992 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 11 05:23:42.073500 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 11 05:23:42.074866 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 11 05:23:42.076454 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 11 05:23:42.077855 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 11 05:23:42.077898 systemd[1]: Reached target paths.target - Path Units.
Jul 11 05:23:42.078941 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 05:23:42.080995 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 11 05:23:42.084479 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 11 05:23:42.088416 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 11 05:23:42.090001 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 11 05:23:42.091404 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 11 05:23:42.097667 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 11 05:23:42.099307 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 11 05:23:42.102012 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 11 05:23:42.103677 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 11 05:23:42.106881 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 05:23:42.108003 systemd[1]: Reached target basic.target - Basic System.
Jul 11 05:23:42.109175 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 11 05:23:42.109212 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 11 05:23:42.110723 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 11 05:23:42.113911 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 11 05:23:42.116141 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 11 05:23:42.123704 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 11 05:23:42.126752 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 11 05:23:42.127983 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 11 05:23:42.129489 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 11 05:23:42.132187 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 11 05:23:42.133650 jq[1530]: false
Jul 11 05:23:42.135186 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 11 05:23:42.138778 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 11 05:23:42.144644 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 11 05:23:42.148444 extend-filesystems[1531]: Found /dev/vda6
Jul 11 05:23:42.154504 extend-filesystems[1531]: Found /dev/vda9
Jul 11 05:23:42.154504 extend-filesystems[1531]: Checking size of /dev/vda9
Jul 11 05:23:42.151438 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 11 05:23:42.157688 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Refreshing passwd entry cache
Jul 11 05:23:42.149522 oslogin_cache_refresh[1532]: Refreshing passwd entry cache
Jul 11 05:23:42.155051 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 11 05:23:42.159751 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Failure getting users, quitting
Jul 11 05:23:42.159751 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 11 05:23:42.159710 oslogin_cache_refresh[1532]: Failure getting users, quitting
Jul 11 05:23:42.159879 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Refreshing group entry cache
Jul 11 05:23:42.159734 oslogin_cache_refresh[1532]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 11 05:23:42.159795 oslogin_cache_refresh[1532]: Refreshing group entry cache
Jul 11 05:23:42.160217 extend-filesystems[1531]: Resized partition /dev/vda9
Jul 11 05:23:42.162250 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 11 05:23:42.163823 systemd[1]: Starting update-engine.service - Update Engine...
Jul 11 05:23:42.168206 extend-filesystems[1552]: resize2fs 1.47.2 (1-Jan-2025)
Jul 11 05:23:42.168488 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 11 05:23:42.169257 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Failure getting groups, quitting
Jul 11 05:23:42.169257 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 11 05:23:42.168905 oslogin_cache_refresh[1532]: Failure getting groups, quitting
Jul 11 05:23:42.168918 oslogin_cache_refresh[1532]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 11 05:23:42.174632 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 11 05:23:42.175476 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 11 05:23:42.175557 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 11 05:23:42.175805 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 11 05:23:42.176129 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 11 05:23:42.176365 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 11 05:23:42.179029 systemd[1]: motdgen.service: Deactivated successfully.
Jul 11 05:23:42.179296 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 11 05:23:42.185989 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 11 05:23:42.186239 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 11 05:23:42.206222 update_engine[1553]: I20250711 05:23:42.206135 1553 main.cc:92] Flatcar Update Engine starting
Jul 11 05:23:42.209701 jq[1554]: true
Jul 11 05:23:42.242824 tar[1557]: linux-amd64/helm
Jul 11 05:23:42.216823 (ntainerd)[1570]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 11 05:23:42.251982 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 11 05:23:42.270897 extend-filesystems[1552]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 11 05:23:42.270897 extend-filesystems[1552]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 11 05:23:42.270897 extend-filesystems[1552]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 11 05:23:42.276136 extend-filesystems[1531]: Resized filesystem in /dev/vda9
Jul 11 05:23:42.274695 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 11 05:23:42.296780 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 11 05:23:42.303050 jq[1572]: true
Jul 11 05:23:42.306537 dbus-daemon[1528]: [system] SELinux support is enabled
Jul 11 05:23:42.306683 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 11 05:23:42.312617 update_engine[1553]: I20250711 05:23:42.312546 1553 update_check_scheduler.cc:74] Next update check in 8m19s
Jul 11 05:23:42.313053 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 11 05:23:42.313086 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 11 05:23:42.314568 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 11 05:23:42.314592 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 11 05:23:42.318216 systemd-logind[1546]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 11 05:23:42.318523 systemd-logind[1546]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 11 05:23:42.318771 systemd[1]: Started update-engine.service - Update Engine.
Jul 11 05:23:42.319647 systemd-logind[1546]: New seat seat0.
Jul 11 05:23:42.322501 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 11 05:23:42.323978 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 11 05:23:42.359022 bash[1592]: Updated "/home/core/.ssh/authorized_keys"
Jul 11 05:23:42.360917 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 11 05:23:42.363117 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 11 05:23:42.425298 locksmithd[1578]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 11 05:23:42.745338 sshd_keygen[1569]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 11 05:23:42.790906 tar[1557]: linux-amd64/LICENSE
Jul 11 05:23:42.790906 tar[1557]: linux-amd64/README.md
Jul 11 05:23:42.792212 containerd[1570]: time="2025-07-11T05:23:42Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 11 05:23:42.794916 containerd[1570]: time="2025-07-11T05:23:42.794876819Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Jul 11 05:23:42.810850 containerd[1570]: time="2025-07-11T05:23:42.810779812Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="21.4µs"
Jul 11 05:23:42.810850 containerd[1570]: time="2025-07-11T05:23:42.810826580Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 11 05:23:42.810850 containerd[1570]: time="2025-07-11T05:23:42.810848702Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 11 05:23:42.811088 containerd[1570]: time="2025-07-11T05:23:42.811057493Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 11 05:23:42.811088 containerd[1570]: time="2025-07-11T05:23:42.811078052Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 11 05:23:42.811130 containerd[1570]: time="2025-07-11T05:23:42.811110453Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 11 05:23:42.811207 containerd[1570]: time="2025-07-11T05:23:42.811177017Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 11 05:23:42.811207 containerd[1570]: time="2025-07-11T05:23:42.811194510Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 11 05:23:42.811602 containerd[1570]: time="2025-07-11T05:23:42.811511875Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 11 05:23:42.811602 containerd[1570]: time="2025-07-11T05:23:42.811525942Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 11 05:23:42.811602 containerd[1570]: time="2025-07-11T05:23:42.811537032Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 11 05:23:42.811602 containerd[1570]: time="2025-07-11T05:23:42.811545558Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 11 05:23:42.811684 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 11 05:23:42.813609 containerd[1570]: time="2025-07-11T05:23:42.811634495Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 11 05:23:42.813609 containerd[1570]: time="2025-07-11T05:23:42.811937514Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 11 05:23:42.813609 containerd[1570]: time="2025-07-11T05:23:42.811970465Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 11 05:23:42.813609 containerd[1570]: time="2025-07-11T05:23:42.811979452Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 11 05:23:42.813609 containerd[1570]: time="2025-07-11T05:23:42.812011342Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 11 05:23:42.813609 containerd[1570]: time="2025-07-11T05:23:42.812241043Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 11 05:23:42.813609 containerd[1570]: time="2025-07-11T05:23:42.812316204Z" level=info msg="metadata content store policy set" policy=shared
Jul 11 05:23:42.815442 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 11 05:23:42.817217 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 11 05:23:42.821150 containerd[1570]: time="2025-07-11T05:23:42.821087623Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 11 05:23:42.821233 containerd[1570]: time="2025-07-11T05:23:42.821183273Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 11 05:23:42.821339 containerd[1570]: time="2025-07-11T05:23:42.821309079Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 11 05:23:42.821339 containerd[1570]: time="2025-07-11T05:23:42.821330920Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 11 05:23:42.821451 containerd[1570]: time="2025-07-11T05:23:42.821346128Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 11 05:23:42.821451 containerd[1570]: time="2025-07-11T05:23:42.821359643Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 11 05:23:42.821451 containerd[1570]: time="2025-07-11T05:23:42.821383799Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 11 05:23:42.821451 containerd[1570]: time="2025-07-11T05:23:42.821408545Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 11 05:23:42.821451 containerd[1570]: time="2025-07-11T05:23:42.821419746Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 11 05:23:42.821451 containerd[1570]: time="2025-07-11T05:23:42.821430085Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 11 05:23:42.821451 containerd[1570]: time="2025-07-11T05:23:42.821450003Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 11 05:23:42.821588 containerd[1570]: time="2025-07-11T05:23:42.821467446Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 11 05:23:42.821629 containerd[1570]: time="2025-07-11T05:23:42.821609161Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 11 05:23:42.821650 containerd[1570]: time="2025-07-11T05:23:42.821639839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 11 05:23:42.821670 containerd[1570]: time="2025-07-11T05:23:42.821654266Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 11 05:23:42.821670 containerd[1570]: time="2025-07-11T05:23:42.821667992Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 11 05:23:42.821713 containerd[1570]: time="2025-07-11T05:23:42.821679684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 11 05:23:42.821713 containerd[1570]: time="2025-07-11T05:23:42.821692147Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 11 05:23:42.821713 containerd[1570]: time="2025-07-11T05:23:42.821705071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 11 05:23:42.821772 containerd[1570]: time="2025-07-11T05:23:42.821717064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 11 05:23:42.821772 containerd[1570]: time="2025-07-11T05:23:42.821733454Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 11 05:23:42.821772 containerd[1570]: time="2025-07-11T05:23:42.821743603Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 11 05:23:42.821772 containerd[1570]: time="2025-07-11T05:23:42.821753041Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 11 05:23:42.821894 containerd[1570]: time="2025-07-11T05:23:42.821866634Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 11 05:23:42.821894 containerd[1570]: time="2025-07-11T05:23:42.821886511Z" level=info msg="Start snapshots syncer"
Jul 11 05:23:42.821950 containerd[1570]: time="2025-07-11T05:23:42.821930654Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 11 05:23:42.822355 containerd[1570]: time="2025-07-11T05:23:42.822306479Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 11 05:23:42.822479 containerd[1570]: time="2025-07-11T05:23:42.822375488Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 11 05:23:42.822577 containerd[1570]: time="2025-07-11T05:23:42.822485745Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 11 05:23:42.822683 containerd[1570]: time="2025-07-11T05:23:42.822654241Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 11 05:23:42.822683 containerd[1570]: time="2025-07-11T05:23:42.822679659Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 11 05:23:42.822734 containerd[1570]: time="2025-07-11T05:23:42.822690169Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 11 05:23:42.822734 containerd[1570]: time="2025-07-11T05:23:42.822700909Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 11 05:23:42.822734 containerd[1570]: time="2025-07-11T05:23:42.822732548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 11 05:23:42.822793 containerd[1570]: time="2025-07-11T05:23:42.822753557Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 11 05:23:42.822793 containerd[1570]: time="2025-07-11T05:23:42.822766903Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 11 05:23:42.822830 containerd[1570]: time="2025-07-11T05:23:42.822793302Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 11 05:23:42.822830 containerd[1570]: time="2025-07-11T05:23:42.822803972Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 11 05:23:42.822830 containerd[1570]: time="2025-07-11T05:23:42.822813941Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 11 05:23:42.822889 containerd[1570]: time="2025-07-11T05:23:42.822865698Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 11 05:23:42.822889 containerd[1570]: time="2025-07-11T05:23:42.822881207Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 11 05:23:42.822928 containerd[1570]: time="2025-07-11T05:23:42.822889593Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 11 05:23:42.822928 containerd[1570]: time="2025-07-11T05:23:42.822899191Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 11 05:23:42.823027 containerd[1570]: time="2025-07-11T05:23:42.823002745Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 11 05:23:42.823027 containerd[1570]: time="2025-07-11T05:23:42.823018464Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 11 05:23:42.823079 containerd[1570]: time="2025-07-11T05:23:42.823028453Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 11 05:23:42.823079 containerd[1570]: time="2025-07-11T05:23:42.823050304Z" level=info msg="runtime interface created"
Jul 11 05:23:42.823079 containerd[1570]: time="2025-07-11T05:23:42.823055624Z" level=info msg="created NRI interface"
Jul 11 05:23:42.823079 containerd[1570]: time="2025-07-11T05:23:42.823063148Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 11 05:23:42.823079 containerd[1570]: time="2025-07-11T05:23:42.823073387Z" level=info msg="Connect containerd service"
Jul 11 05:23:42.823167 containerd[1570]: time="2025-07-11T05:23:42.823114554Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 11 05:23:42.824283 containerd[1570]: time="2025-07-11T05:23:42.824251787Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 11 05:23:42.886059 systemd[1]: issuegen.service: Deactivated successfully.
Jul 11 05:23:42.886387 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 11 05:23:42.889476 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 11 05:23:42.918214 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 11 05:23:42.922630 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 11 05:23:42.926312 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 11 05:23:42.927782 systemd[1]: Reached target getty.target - Login Prompts.
Jul 11 05:23:43.064817 containerd[1570]: time="2025-07-11T05:23:43.064683307Z" level=info msg="Start subscribing containerd event" Jul 11 05:23:43.064938 containerd[1570]: time="2025-07-11T05:23:43.064793856Z" level=info msg="Start recovering state" Jul 11 05:23:43.065068 containerd[1570]: time="2025-07-11T05:23:43.065051484Z" level=info msg="Start event monitor" Jul 11 05:23:43.065106 containerd[1570]: time="2025-07-11T05:23:43.065082976Z" level=info msg="Start cni network conf syncer for default" Jul 11 05:23:43.065106 containerd[1570]: time="2025-07-11T05:23:43.065092068Z" level=info msg="Start streaming server" Jul 11 05:23:43.065154 containerd[1570]: time="2025-07-11T05:23:43.065111043Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 11 05:23:43.065154 containerd[1570]: time="2025-07-11T05:23:43.065119856Z" level=info msg="runtime interface starting up..." Jul 11 05:23:43.065154 containerd[1570]: time="2025-07-11T05:23:43.065127476Z" level=info msg="starting plugins..." Jul 11 05:23:43.065154 containerd[1570]: time="2025-07-11T05:23:43.065147823Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 11 05:23:43.067076 containerd[1570]: time="2025-07-11T05:23:43.067037602Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 05:23:43.067140 containerd[1570]: time="2025-07-11T05:23:43.067122817Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 11 05:23:43.067420 containerd[1570]: time="2025-07-11T05:23:43.067205058Z" level=info msg="containerd successfully booted in 0.276000s" Jul 11 05:23:43.067296 systemd[1]: Started containerd.service - containerd container runtime. Jul 11 05:23:43.399583 systemd-networkd[1473]: eth0: Gained IPv6LL Jul 11 05:23:43.402750 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 11 05:23:43.404584 systemd[1]: Reached target network-online.target - Network is Online. 
Jul 11 05:23:43.406981 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 11 05:23:43.409325 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 05:23:43.411652 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 11 05:23:43.444629 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 11 05:23:43.446787 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 11 05:23:43.447078 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 11 05:23:43.449581 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 11 05:23:44.977361 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 05:23:44.979086 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 11 05:23:44.980442 systemd[1]: Startup finished in 2.791s (kernel) + 6.324s (initrd) + 6.091s (userspace) = 15.207s. Jul 11 05:23:45.017968 (kubelet)[1662]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 05:23:45.629266 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 11 05:23:45.630505 systemd[1]: Started sshd@0-10.0.0.94:22-10.0.0.1:45278.service - OpenSSH per-connection server daemon (10.0.0.1:45278). Jul 11 05:23:45.686294 kubelet[1662]: E0711 05:23:45.686226 1662 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 05:23:45.689766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 05:23:45.689959 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 11 05:23:45.690316 systemd[1]: kubelet.service: Consumed 2.063s CPU time, 265.3M memory peak. Jul 11 05:23:45.698738 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 45278 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk Jul 11 05:23:45.700767 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:23:45.707097 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 11 05:23:45.708189 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 11 05:23:45.714077 systemd-logind[1546]: New session 1 of user core. Jul 11 05:23:45.735328 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 11 05:23:45.738332 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 11 05:23:45.754651 (systemd)[1680]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 05:23:45.757173 systemd-logind[1546]: New session c1 of user core. Jul 11 05:23:45.908230 systemd[1680]: Queued start job for default target default.target. Jul 11 05:23:45.926582 systemd[1680]: Created slice app.slice - User Application Slice. Jul 11 05:23:45.926605 systemd[1680]: Reached target paths.target - Paths. Jul 11 05:23:45.926643 systemd[1680]: Reached target timers.target - Timers. Jul 11 05:23:45.928097 systemd[1680]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 11 05:23:45.938584 systemd[1680]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 11 05:23:45.938649 systemd[1680]: Reached target sockets.target - Sockets. Jul 11 05:23:45.938685 systemd[1680]: Reached target basic.target - Basic System. Jul 11 05:23:45.938728 systemd[1680]: Reached target default.target - Main User Target. Jul 11 05:23:45.938757 systemd[1680]: Startup finished in 175ms. Jul 11 05:23:45.939116 systemd[1]: Started user@500.service - User Manager for UID 500. 
Jul 11 05:23:45.940684 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 11 05:23:46.002095 systemd[1]: Started sshd@1-10.0.0.94:22-10.0.0.1:45282.service - OpenSSH per-connection server daemon (10.0.0.1:45282). Jul 11 05:23:46.058135 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 45282 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk Jul 11 05:23:46.059469 sshd-session[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:23:46.063981 systemd-logind[1546]: New session 2 of user core. Jul 11 05:23:46.073544 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 11 05:23:46.128468 sshd[1694]: Connection closed by 10.0.0.1 port 45282 Jul 11 05:23:46.128922 sshd-session[1691]: pam_unix(sshd:session): session closed for user core Jul 11 05:23:46.138966 systemd[1]: sshd@1-10.0.0.94:22-10.0.0.1:45282.service: Deactivated successfully. Jul 11 05:23:46.140788 systemd[1]: session-2.scope: Deactivated successfully. Jul 11 05:23:46.141483 systemd-logind[1546]: Session 2 logged out. Waiting for processes to exit. Jul 11 05:23:46.144097 systemd[1]: Started sshd@2-10.0.0.94:22-10.0.0.1:45284.service - OpenSSH per-connection server daemon (10.0.0.1:45284). Jul 11 05:23:46.144686 systemd-logind[1546]: Removed session 2. Jul 11 05:23:46.196061 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 45284 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk Jul 11 05:23:46.197335 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:23:46.201368 systemd-logind[1546]: New session 3 of user core. Jul 11 05:23:46.216534 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jul 11 05:23:46.265036 sshd[1703]: Connection closed by 10.0.0.1 port 45284 Jul 11 05:23:46.265416 sshd-session[1700]: pam_unix(sshd:session): session closed for user core Jul 11 05:23:46.279135 systemd[1]: sshd@2-10.0.0.94:22-10.0.0.1:45284.service: Deactivated successfully. Jul 11 05:23:46.280748 systemd[1]: session-3.scope: Deactivated successfully. Jul 11 05:23:46.281376 systemd-logind[1546]: Session 3 logged out. Waiting for processes to exit. Jul 11 05:23:46.283832 systemd[1]: Started sshd@3-10.0.0.94:22-10.0.0.1:45300.service - OpenSSH per-connection server daemon (10.0.0.1:45300). Jul 11 05:23:46.284362 systemd-logind[1546]: Removed session 3. Jul 11 05:23:46.345575 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 45300 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk Jul 11 05:23:46.346973 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:23:46.351729 systemd-logind[1546]: New session 4 of user core. Jul 11 05:23:46.365550 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 11 05:23:46.419500 sshd[1713]: Connection closed by 10.0.0.1 port 45300 Jul 11 05:23:46.419897 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Jul 11 05:23:46.433734 systemd[1]: sshd@3-10.0.0.94:22-10.0.0.1:45300.service: Deactivated successfully. Jul 11 05:23:46.435820 systemd[1]: session-4.scope: Deactivated successfully. Jul 11 05:23:46.436680 systemd-logind[1546]: Session 4 logged out. Waiting for processes to exit. Jul 11 05:23:46.439566 systemd[1]: Started sshd@4-10.0.0.94:22-10.0.0.1:45306.service - OpenSSH per-connection server daemon (10.0.0.1:45306). Jul 11 05:23:46.440078 systemd-logind[1546]: Removed session 4. 
Jul 11 05:23:46.495537 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 45306 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk Jul 11 05:23:46.497054 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:23:46.502134 systemd-logind[1546]: New session 5 of user core. Jul 11 05:23:46.515594 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 11 05:23:46.576265 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 11 05:23:46.576654 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 05:23:46.595529 sudo[1723]: pam_unix(sudo:session): session closed for user root Jul 11 05:23:46.597692 sshd[1722]: Connection closed by 10.0.0.1 port 45306 Jul 11 05:23:46.598178 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Jul 11 05:23:46.620175 systemd[1]: sshd@4-10.0.0.94:22-10.0.0.1:45306.service: Deactivated successfully. Jul 11 05:23:46.622468 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 05:23:46.623315 systemd-logind[1546]: Session 5 logged out. Waiting for processes to exit. Jul 11 05:23:46.626667 systemd[1]: Started sshd@5-10.0.0.94:22-10.0.0.1:45318.service - OpenSSH per-connection server daemon (10.0.0.1:45318). Jul 11 05:23:46.627254 systemd-logind[1546]: Removed session 5. Jul 11 05:23:46.682830 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 45318 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk Jul 11 05:23:46.684235 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:23:46.688316 systemd-logind[1546]: New session 6 of user core. Jul 11 05:23:46.701531 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 11 05:23:46.755156 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 11 05:23:46.755480 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 05:23:46.762982 sudo[1734]: pam_unix(sudo:session): session closed for user root Jul 11 05:23:46.769423 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 11 05:23:46.769769 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 05:23:46.779634 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 11 05:23:46.837269 augenrules[1756]: No rules Jul 11 05:23:46.839125 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 05:23:46.839488 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 11 05:23:46.840600 sudo[1733]: pam_unix(sudo:session): session closed for user root Jul 11 05:23:46.842206 sshd[1732]: Connection closed by 10.0.0.1 port 45318 Jul 11 05:23:46.842634 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Jul 11 05:23:46.854960 systemd[1]: sshd@5-10.0.0.94:22-10.0.0.1:45318.service: Deactivated successfully. Jul 11 05:23:46.856561 systemd[1]: session-6.scope: Deactivated successfully. Jul 11 05:23:46.857434 systemd-logind[1546]: Session 6 logged out. Waiting for processes to exit. Jul 11 05:23:46.860020 systemd[1]: Started sshd@6-10.0.0.94:22-10.0.0.1:45324.service - OpenSSH per-connection server daemon (10.0.0.1:45324). Jul 11 05:23:46.860833 systemd-logind[1546]: Removed session 6. Jul 11 05:23:46.922367 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 45324 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk Jul 11 05:23:46.923649 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:23:46.928028 systemd-logind[1546]: New session 7 of user core. 
Jul 11 05:23:46.937507 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 11 05:23:46.989717 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 05:23:46.990051 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 05:23:47.839926 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 11 05:23:47.853753 (dockerd)[1790]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 11 05:23:48.239142 dockerd[1790]: time="2025-07-11T05:23:48.239071744Z" level=info msg="Starting up" Jul 11 05:23:48.240015 dockerd[1790]: time="2025-07-11T05:23:48.239986157Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 11 05:23:48.253171 dockerd[1790]: time="2025-07-11T05:23:48.253124316Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 11 05:23:48.705338 dockerd[1790]: time="2025-07-11T05:23:48.705280872Z" level=info msg="Loading containers: start." Jul 11 05:23:48.716410 kernel: Initializing XFRM netlink socket Jul 11 05:23:49.402590 systemd-networkd[1473]: docker0: Link UP Jul 11 05:23:49.474753 dockerd[1790]: time="2025-07-11T05:23:49.474684190Z" level=info msg="Loading containers: done." Jul 11 05:23:49.488150 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2079730225-merged.mount: Deactivated successfully. 
Jul 11 05:23:49.490241 dockerd[1790]: time="2025-07-11T05:23:49.490188811Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 11 05:23:49.490325 dockerd[1790]: time="2025-07-11T05:23:49.490287764Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 11 05:23:49.490432 dockerd[1790]: time="2025-07-11T05:23:49.490413264Z" level=info msg="Initializing buildkit" Jul 11 05:23:49.518371 dockerd[1790]: time="2025-07-11T05:23:49.518310934Z" level=info msg="Completed buildkit initialization" Jul 11 05:23:49.525454 dockerd[1790]: time="2025-07-11T05:23:49.525421075Z" level=info msg="Daemon has completed initialization" Jul 11 05:23:49.525554 dockerd[1790]: time="2025-07-11T05:23:49.525490134Z" level=info msg="API listen on /run/docker.sock" Jul 11 05:23:49.525738 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 11 05:23:50.209680 containerd[1570]: time="2025-07-11T05:23:50.209611749Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 11 05:23:50.856732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1041037488.mount: Deactivated successfully. 
Jul 11 05:23:52.911993 containerd[1570]: time="2025-07-11T05:23:52.911928963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:52.912549 containerd[1570]: time="2025-07-11T05:23:52.912515692Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 11 05:23:52.913735 containerd[1570]: time="2025-07-11T05:23:52.913667927Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:52.916090 containerd[1570]: time="2025-07-11T05:23:52.916060056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:52.916909 containerd[1570]: time="2025-07-11T05:23:52.916856215Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 2.707182475s" Jul 11 05:23:52.916909 containerd[1570]: time="2025-07-11T05:23:52.916906309Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 11 05:23:52.917490 containerd[1570]: time="2025-07-11T05:23:52.917462553Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 11 05:23:54.062845 containerd[1570]: time="2025-07-11T05:23:54.062781021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:54.063661 containerd[1570]: time="2025-07-11T05:23:54.063595235Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 11 05:23:54.064972 containerd[1570]: time="2025-07-11T05:23:54.064914259Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:54.067478 containerd[1570]: time="2025-07-11T05:23:54.067440629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:54.068438 containerd[1570]: time="2025-07-11T05:23:54.068383638Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.150893806s" Jul 11 05:23:54.068487 containerd[1570]: time="2025-07-11T05:23:54.068442045Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 11 05:23:54.069039 containerd[1570]: time="2025-07-11T05:23:54.068898663Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 11 05:23:55.848204 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 11 05:23:55.849698 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 11 05:23:55.887979 containerd[1570]: time="2025-07-11T05:23:55.887935031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:55.888712 containerd[1570]: time="2025-07-11T05:23:55.888688552Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 11 05:23:55.890032 containerd[1570]: time="2025-07-11T05:23:55.890003666Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:55.894091 containerd[1570]: time="2025-07-11T05:23:55.894067616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:55.894831 containerd[1570]: time="2025-07-11T05:23:55.894804163Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.825880062s" Jul 11 05:23:55.894883 containerd[1570]: time="2025-07-11T05:23:55.894833326Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 11 05:23:55.895516 containerd[1570]: time="2025-07-11T05:23:55.895433105Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 11 05:23:56.120949 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 11 05:23:56.125800 (kubelet)[2080]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 05:23:56.195541 kubelet[2080]: E0711 05:23:56.195490 2080 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 05:23:56.201528 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 05:23:56.201720 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 05:23:56.202058 systemd[1]: kubelet.service: Consumed 304ms CPU time, 111.5M memory peak. Jul 11 05:23:57.038649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1026518202.mount: Deactivated successfully. Jul 11 05:23:57.787083 containerd[1570]: time="2025-07-11T05:23:57.787003755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:57.788250 containerd[1570]: time="2025-07-11T05:23:57.788180364Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 11 05:23:57.790087 containerd[1570]: time="2025-07-11T05:23:57.790060736Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:57.793272 containerd[1570]: time="2025-07-11T05:23:57.793208026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:57.793605 containerd[1570]: time="2025-07-11T05:23:57.793567259Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.898106711s" Jul 11 05:23:57.793605 containerd[1570]: time="2025-07-11T05:23:57.793601882Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 11 05:23:57.794238 containerd[1570]: time="2025-07-11T05:23:57.794189265Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 11 05:23:58.270482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3082466145.mount: Deactivated successfully. Jul 11 05:23:58.917990 containerd[1570]: time="2025-07-11T05:23:58.917927898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:58.918700 containerd[1570]: time="2025-07-11T05:23:58.918636465Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 11 05:23:58.920042 containerd[1570]: time="2025-07-11T05:23:58.919997433Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:58.922794 containerd[1570]: time="2025-07-11T05:23:58.922761102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:23:58.923993 containerd[1570]: time="2025-07-11T05:23:58.923946287Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with 
image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.12970964s" Jul 11 05:23:58.923993 containerd[1570]: time="2025-07-11T05:23:58.923993961Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 11 05:23:58.924528 containerd[1570]: time="2025-07-11T05:23:58.924491262Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 11 05:23:59.347695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount705830730.mount: Deactivated successfully. Jul 11 05:23:59.353038 containerd[1570]: time="2025-07-11T05:23:59.352983723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 05:23:59.353670 containerd[1570]: time="2025-07-11T05:23:59.353628698Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 11 05:23:59.354889 containerd[1570]: time="2025-07-11T05:23:59.354846105Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 05:23:59.356933 containerd[1570]: time="2025-07-11T05:23:59.356861837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 05:23:59.357445 containerd[1570]: time="2025-07-11T05:23:59.357381106Z" level=info 
msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 432.853759ms" Jul 11 05:23:59.357445 containerd[1570]: time="2025-07-11T05:23:59.357438926Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 11 05:23:59.357985 containerd[1570]: time="2025-07-11T05:23:59.357955421Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 11 05:23:59.861897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3113422645.mount: Deactivated successfully. Jul 11 05:24:01.357267 containerd[1570]: time="2025-07-11T05:24:01.357214596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:24:01.358112 containerd[1570]: time="2025-07-11T05:24:01.358054401Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 11 05:24:01.359354 containerd[1570]: time="2025-07-11T05:24:01.359313719Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:24:01.361831 containerd[1570]: time="2025-07-11T05:24:01.361792911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:24:01.362703 containerd[1570]: time="2025-07-11T05:24:01.362656404Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag 
\"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.004671947s" Jul 11 05:24:01.362703 containerd[1570]: time="2025-07-11T05:24:01.362686722Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 11 05:24:03.646835 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 05:24:03.647043 systemd[1]: kubelet.service: Consumed 304ms CPU time, 111.5M memory peak. Jul 11 05:24:03.649176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 05:24:03.671366 systemd[1]: Reload requested from client PID 2237 ('systemctl') (unit session-7.scope)... Jul 11 05:24:03.671379 systemd[1]: Reloading... Jul 11 05:24:03.754481 zram_generator::config[2280]: No configuration found. Jul 11 05:24:03.859458 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 05:24:03.974994 systemd[1]: Reloading finished in 303 ms. Jul 11 05:24:04.047364 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 11 05:24:04.047485 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 11 05:24:04.047798 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 05:24:04.047843 systemd[1]: kubelet.service: Consumed 150ms CPU time, 98.2M memory peak. Jul 11 05:24:04.049474 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 05:24:04.212078 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 11 05:24:04.215597 (kubelet)[2327]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 05:24:04.249545 kubelet[2327]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 05:24:04.249545 kubelet[2327]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 11 05:24:04.249545 kubelet[2327]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 05:24:04.249974 kubelet[2327]: I0711 05:24:04.249542 2327 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 05:24:04.698142 kubelet[2327]: I0711 05:24:04.698089 2327 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 11 05:24:04.698142 kubelet[2327]: I0711 05:24:04.698118 2327 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 05:24:04.698363 kubelet[2327]: I0711 05:24:04.698348 2327 server.go:934] "Client rotation is on, will bootstrap in background" Jul 11 05:24:04.722150 kubelet[2327]: E0711 05:24:04.722089 2327 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.94:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:24:04.724142 kubelet[2327]: I0711 
05:24:04.724100 2327 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 05:24:04.732054 kubelet[2327]: I0711 05:24:04.732028 2327 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 11 05:24:04.737702 kubelet[2327]: I0711 05:24:04.737684 2327 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 11 05:24:04.738223 kubelet[2327]: I0711 05:24:04.738195 2327 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 11 05:24:04.738375 kubelet[2327]: I0711 05:24:04.738339 2327 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 05:24:04.738569 kubelet[2327]: I0711 05:24:04.738363 2327 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":
{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 05:24:04.738662 kubelet[2327]: I0711 05:24:04.738577 2327 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 05:24:04.738662 kubelet[2327]: I0711 05:24:04.738586 2327 container_manager_linux.go:300] "Creating device plugin manager" Jul 11 05:24:04.738706 kubelet[2327]: I0711 05:24:04.738702 2327 state_mem.go:36] "Initialized new in-memory state store" Jul 11 05:24:04.740570 kubelet[2327]: I0711 05:24:04.740528 2327 kubelet.go:408] "Attempting to sync node with API server" Jul 11 05:24:04.740612 kubelet[2327]: I0711 05:24:04.740585 2327 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 05:24:04.740657 kubelet[2327]: I0711 05:24:04.740640 2327 kubelet.go:314] "Adding apiserver pod source" Jul 11 05:24:04.740695 kubelet[2327]: I0711 05:24:04.740680 2327 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 05:24:04.743221 kubelet[2327]: W0711 05:24:04.743094 2327 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Jul 11 05:24:04.743221 kubelet[2327]: E0711 05:24:04.743166 2327 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial 
tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:24:04.744154 kubelet[2327]: W0711 05:24:04.744116 2327 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Jul 11 05:24:04.744245 kubelet[2327]: E0711 05:24:04.744228 2327 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:24:04.745362 kubelet[2327]: I0711 05:24:04.745341 2327 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 11 05:24:04.746104 kubelet[2327]: I0711 05:24:04.746081 2327 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 05:24:04.746188 kubelet[2327]: W0711 05:24:04.746156 2327 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 11 05:24:04.748561 kubelet[2327]: I0711 05:24:04.748301 2327 server.go:1274] "Started kubelet" Jul 11 05:24:04.749269 kubelet[2327]: I0711 05:24:04.748794 2327 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 05:24:04.749269 kubelet[2327]: I0711 05:24:04.748797 2327 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 05:24:04.750063 kubelet[2327]: I0711 05:24:04.749495 2327 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 05:24:04.750063 kubelet[2327]: I0711 05:24:04.749588 2327 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 05:24:04.750063 kubelet[2327]: I0711 05:24:04.749855 2327 server.go:449] "Adding debug handlers to kubelet server" Jul 11 05:24:04.751800 kubelet[2327]: I0711 05:24:04.751775 2327 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 05:24:04.753746 kubelet[2327]: E0711 05:24:04.753647 2327 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:24:04.753746 kubelet[2327]: E0711 05:24:04.753704 2327 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 05:24:04.753746 kubelet[2327]: I0711 05:24:04.753707 2327 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 11 05:24:04.753746 kubelet[2327]: I0711 05:24:04.753726 2327 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 11 05:24:04.753893 kubelet[2327]: I0711 05:24:04.753886 2327 reconciler.go:26] "Reconciler: start to sync state" Jul 11 05:24:04.754101 kubelet[2327]: W0711 05:24:04.754063 2327 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Jul 11 05:24:04.754174 kubelet[2327]: E0711 05:24:04.754104 2327 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:24:04.754375 kubelet[2327]: E0711 05:24:04.754289 2327 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="200ms" Jul 11 05:24:04.754375 kubelet[2327]: I0711 05:24:04.754355 2327 factory.go:221] Registration of the systemd container factory successfully Jul 11 05:24:04.754499 kubelet[2327]: I0711 05:24:04.754465 2327 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 05:24:04.755674 kubelet[2327]: E0711 05:24:04.754695 2327 event.go:368] "Unable to write event (may retry after 
sleeping)" err="Post \"https://10.0.0.94:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.94:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18511b067f985487 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 05:24:04.748276871 +0000 UTC m=+0.529107538,LastTimestamp:2025-07-11 05:24:04.748276871 +0000 UTC m=+0.529107538,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 05:24:04.756074 kubelet[2327]: I0711 05:24:04.756037 2327 factory.go:221] Registration of the containerd container factory successfully Jul 11 05:24:04.766838 kubelet[2327]: I0711 05:24:04.766793 2327 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 05:24:04.768107 kubelet[2327]: I0711 05:24:04.768086 2327 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 11 05:24:04.768164 kubelet[2327]: I0711 05:24:04.768117 2327 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 11 05:24:04.768164 kubelet[2327]: I0711 05:24:04.768138 2327 kubelet.go:2321] "Starting kubelet main sync loop" Jul 11 05:24:04.768215 kubelet[2327]: E0711 05:24:04.768175 2327 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 05:24:04.768795 kubelet[2327]: W0711 05:24:04.768681 2327 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Jul 11 05:24:04.768795 kubelet[2327]: E0711 05:24:04.768723 2327 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:24:04.771093 kubelet[2327]: I0711 05:24:04.771023 2327 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 11 05:24:04.771093 kubelet[2327]: I0711 05:24:04.771035 2327 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 11 05:24:04.771093 kubelet[2327]: I0711 05:24:04.771053 2327 state_mem.go:36] "Initialized new in-memory state store" Jul 11 05:24:04.854579 kubelet[2327]: E0711 05:24:04.854537 2327 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:24:04.868840 kubelet[2327]: E0711 05:24:04.868778 2327 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 05:24:04.955241 kubelet[2327]: E0711 05:24:04.955140 2327 kubelet_node_status.go:453] 
"Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:24:04.955603 kubelet[2327]: E0711 05:24:04.955555 2327 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="400ms" Jul 11 05:24:05.055964 kubelet[2327]: E0711 05:24:05.055909 2327 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:24:05.069127 kubelet[2327]: E0711 05:24:05.069084 2327 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 05:24:05.076660 kubelet[2327]: I0711 05:24:05.076632 2327 policy_none.go:49] "None policy: Start" Jul 11 05:24:05.077477 kubelet[2327]: I0711 05:24:05.077453 2327 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 11 05:24:05.077516 kubelet[2327]: I0711 05:24:05.077494 2327 state_mem.go:35] "Initializing new in-memory state store" Jul 11 05:24:05.085189 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 11 05:24:05.096244 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 11 05:24:05.099584 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 11 05:24:05.117440 kubelet[2327]: I0711 05:24:05.117382 2327 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 05:24:05.117719 kubelet[2327]: I0711 05:24:05.117678 2327 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 05:24:05.117847 kubelet[2327]: I0711 05:24:05.117700 2327 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 05:24:05.118089 kubelet[2327]: I0711 05:24:05.118010 2327 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 05:24:05.119100 kubelet[2327]: E0711 05:24:05.119068 2327 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 11 05:24:05.221354 kubelet[2327]: I0711 05:24:05.221241 2327 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 05:24:05.221744 kubelet[2327]: E0711 05:24:05.221689 2327 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Jul 11 05:24:05.356488 kubelet[2327]: E0711 05:24:05.356447 2327 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="800ms" Jul 11 05:24:05.424056 kubelet[2327]: I0711 05:24:05.423945 2327 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 05:24:05.424417 kubelet[2327]: E0711 05:24:05.424320 2327 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Jul 11 05:24:05.478435 systemd[1]: Created slice 
kubepods-burstable-podceb1da8274a4204c91c39526c2a3aff5.slice - libcontainer container kubepods-burstable-podceb1da8274a4204c91c39526c2a3aff5.slice. Jul 11 05:24:05.506865 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. Jul 11 05:24:05.536221 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 11 05:24:05.557857 kubelet[2327]: I0711 05:24:05.557789 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:24:05.557857 kubelet[2327]: I0711 05:24:05.557833 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 11 05:24:05.557857 kubelet[2327]: I0711 05:24:05.557847 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ceb1da8274a4204c91c39526c2a3aff5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ceb1da8274a4204c91c39526c2a3aff5\") " pod="kube-system/kube-apiserver-localhost" Jul 11 05:24:05.557857 kubelet[2327]: I0711 05:24:05.557864 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:24:05.558111 kubelet[2327]: I0711 05:24:05.557882 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:24:05.558111 kubelet[2327]: I0711 05:24:05.557897 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:24:05.558111 kubelet[2327]: I0711 05:24:05.557990 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ceb1da8274a4204c91c39526c2a3aff5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ceb1da8274a4204c91c39526c2a3aff5\") " pod="kube-system/kube-apiserver-localhost" Jul 11 05:24:05.558111 kubelet[2327]: I0711 05:24:05.558038 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ceb1da8274a4204c91c39526c2a3aff5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ceb1da8274a4204c91c39526c2a3aff5\") " pod="kube-system/kube-apiserver-localhost" Jul 11 05:24:05.558111 kubelet[2327]: I0711 05:24:05.558079 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:24:05.804572 kubelet[2327]: E0711 05:24:05.804432 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:05.805221 containerd[1570]: time="2025-07-11T05:24:05.805174704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ceb1da8274a4204c91c39526c2a3aff5,Namespace:kube-system,Attempt:0,}" Jul 11 05:24:05.809352 kubelet[2327]: E0711 05:24:05.809318 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:05.809736 containerd[1570]: time="2025-07-11T05:24:05.809691952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 11 05:24:05.825907 kubelet[2327]: I0711 05:24:05.825884 2327 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 05:24:05.826309 kubelet[2327]: E0711 05:24:05.826272 2327 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Jul 11 05:24:05.839609 kubelet[2327]: E0711 05:24:05.839576 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:05.839992 containerd[1570]: time="2025-07-11T05:24:05.839935614Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 11 05:24:05.872329 containerd[1570]: time="2025-07-11T05:24:05.872264390Z" level=info msg="connecting to shim aff03e3a659a78d23264523012c2796d72d86f584f3a7975f5d7c7b671153b0c" address="unix:///run/containerd/s/53610486717d0778a37b5afdbd1311e143f7f9f214fa801ad1ecdcc32f6a1d31" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:24:05.875251 containerd[1570]: time="2025-07-11T05:24:05.875178312Z" level=info msg="connecting to shim 0ed7e5c63438186cead02b581347554d1ff518d785ba5b5b06c85e9273cb1d67" address="unix:///run/containerd/s/ac37cd4a2b8131393aae33039b1d9e0db04b087e9a55a2d365ef877442a271c5" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:24:05.883937 kubelet[2327]: W0711 05:24:05.883880 2327 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Jul 11 05:24:05.884087 kubelet[2327]: E0711 05:24:05.884070 2327 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:24:05.886184 containerd[1570]: time="2025-07-11T05:24:05.885700851Z" level=info msg="connecting to shim aa4c43402aceafa0b04c46de376fddd6dda234318e200cc06f2ff636afc3d2cf" address="unix:///run/containerd/s/ccd51d95f1f0ac9a85579a73ea349e4aa58e5512cac76b1cc1a269bb13e65105" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:24:05.911554 systemd[1]: Started cri-containerd-0ed7e5c63438186cead02b581347554d1ff518d785ba5b5b06c85e9273cb1d67.scope - libcontainer container 
0ed7e5c63438186cead02b581347554d1ff518d785ba5b5b06c85e9273cb1d67. Jul 11 05:24:05.916475 systemd[1]: Started cri-containerd-aa4c43402aceafa0b04c46de376fddd6dda234318e200cc06f2ff636afc3d2cf.scope - libcontainer container aa4c43402aceafa0b04c46de376fddd6dda234318e200cc06f2ff636afc3d2cf. Jul 11 05:24:05.917340 kubelet[2327]: W0711 05:24:05.917261 2327 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Jul 11 05:24:05.917447 kubelet[2327]: E0711 05:24:05.917356 2327 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:24:05.917771 systemd[1]: Started cri-containerd-aff03e3a659a78d23264523012c2796d72d86f584f3a7975f5d7c7b671153b0c.scope - libcontainer container aff03e3a659a78d23264523012c2796d72d86f584f3a7975f5d7c7b671153b0c. 
Jul 11 05:24:05.963328 containerd[1570]: time="2025-07-11T05:24:05.963274716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ceb1da8274a4204c91c39526c2a3aff5,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ed7e5c63438186cead02b581347554d1ff518d785ba5b5b06c85e9273cb1d67\"" Jul 11 05:24:05.964522 kubelet[2327]: E0711 05:24:05.964496 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:05.966842 containerd[1570]: time="2025-07-11T05:24:05.966789758Z" level=info msg="CreateContainer within sandbox \"0ed7e5c63438186cead02b581347554d1ff518d785ba5b5b06c85e9273cb1d67\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 11 05:24:05.967366 kubelet[2327]: W0711 05:24:05.967286 2327 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Jul 11 05:24:05.967366 kubelet[2327]: E0711 05:24:05.967359 2327 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:24:05.976310 containerd[1570]: time="2025-07-11T05:24:05.976261201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"aff03e3a659a78d23264523012c2796d72d86f584f3a7975f5d7c7b671153b0c\"" Jul 11 05:24:05.976959 kubelet[2327]: E0711 05:24:05.976927 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:05.978983 containerd[1570]: time="2025-07-11T05:24:05.978950130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa4c43402aceafa0b04c46de376fddd6dda234318e200cc06f2ff636afc3d2cf\"" Jul 11 05:24:05.979139 containerd[1570]: time="2025-07-11T05:24:05.979112452Z" level=info msg="CreateContainer within sandbox \"aff03e3a659a78d23264523012c2796d72d86f584f3a7975f5d7c7b671153b0c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 11 05:24:05.980400 kubelet[2327]: E0711 05:24:05.980360 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:05.982270 containerd[1570]: time="2025-07-11T05:24:05.982218213Z" level=info msg="Container b43ac858074df6a8cda259b455a3ea9760238923816cf6eac104bc5a059799cf: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:24:05.983322 containerd[1570]: time="2025-07-11T05:24:05.983294100Z" level=info msg="CreateContainer within sandbox \"aa4c43402aceafa0b04c46de376fddd6dda234318e200cc06f2ff636afc3d2cf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 11 05:24:05.987552 containerd[1570]: time="2025-07-11T05:24:05.987449334Z" level=info msg="Container 2cdfe082bff3ef34b95b9a576af93b8ca5dd23009baa4d2089b3ec6d0d1974dd: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:24:05.993230 containerd[1570]: time="2025-07-11T05:24:05.993114526Z" level=info msg="CreateContainer within sandbox \"0ed7e5c63438186cead02b581347554d1ff518d785ba5b5b06c85e9273cb1d67\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b43ac858074df6a8cda259b455a3ea9760238923816cf6eac104bc5a059799cf\"" Jul 11 05:24:05.993760 containerd[1570]: 
time="2025-07-11T05:24:05.993729879Z" level=info msg="StartContainer for \"b43ac858074df6a8cda259b455a3ea9760238923816cf6eac104bc5a059799cf\"" Jul 11 05:24:05.994824 containerd[1570]: time="2025-07-11T05:24:05.994765821Z" level=info msg="connecting to shim b43ac858074df6a8cda259b455a3ea9760238923816cf6eac104bc5a059799cf" address="unix:///run/containerd/s/ac37cd4a2b8131393aae33039b1d9e0db04b087e9a55a2d365ef877442a271c5" protocol=ttrpc version=3 Jul 11 05:24:05.995087 containerd[1570]: time="2025-07-11T05:24:05.995045071Z" level=info msg="Container 426c3e5e937c3dee3bc15b36d178d710d40f9897cd78747171a01f73da3de9c4: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:24:06.000899 containerd[1570]: time="2025-07-11T05:24:06.000859867Z" level=info msg="CreateContainer within sandbox \"aff03e3a659a78d23264523012c2796d72d86f584f3a7975f5d7c7b671153b0c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2cdfe082bff3ef34b95b9a576af93b8ca5dd23009baa4d2089b3ec6d0d1974dd\"" Jul 11 05:24:06.001498 containerd[1570]: time="2025-07-11T05:24:06.001371527Z" level=info msg="StartContainer for \"2cdfe082bff3ef34b95b9a576af93b8ca5dd23009baa4d2089b3ec6d0d1974dd\"" Jul 11 05:24:06.002410 containerd[1570]: time="2025-07-11T05:24:06.002354232Z" level=info msg="connecting to shim 2cdfe082bff3ef34b95b9a576af93b8ca5dd23009baa4d2089b3ec6d0d1974dd" address="unix:///run/containerd/s/53610486717d0778a37b5afdbd1311e143f7f9f214fa801ad1ecdcc32f6a1d31" protocol=ttrpc version=3 Jul 11 05:24:06.003514 containerd[1570]: time="2025-07-11T05:24:06.003457791Z" level=info msg="CreateContainer within sandbox \"aa4c43402aceafa0b04c46de376fddd6dda234318e200cc06f2ff636afc3d2cf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"426c3e5e937c3dee3bc15b36d178d710d40f9897cd78747171a01f73da3de9c4\"" Jul 11 05:24:06.004220 containerd[1570]: time="2025-07-11T05:24:06.004163485Z" level=info msg="StartContainer for 
\"426c3e5e937c3dee3bc15b36d178d710d40f9897cd78747171a01f73da3de9c4\"" Jul 11 05:24:06.005334 containerd[1570]: time="2025-07-11T05:24:06.005309664Z" level=info msg="connecting to shim 426c3e5e937c3dee3bc15b36d178d710d40f9897cd78747171a01f73da3de9c4" address="unix:///run/containerd/s/ccd51d95f1f0ac9a85579a73ea349e4aa58e5512cac76b1cc1a269bb13e65105" protocol=ttrpc version=3 Jul 11 05:24:06.023576 systemd[1]: Started cri-containerd-b43ac858074df6a8cda259b455a3ea9760238923816cf6eac104bc5a059799cf.scope - libcontainer container b43ac858074df6a8cda259b455a3ea9760238923816cf6eac104bc5a059799cf. Jul 11 05:24:06.028524 systemd[1]: Started cri-containerd-2cdfe082bff3ef34b95b9a576af93b8ca5dd23009baa4d2089b3ec6d0d1974dd.scope - libcontainer container 2cdfe082bff3ef34b95b9a576af93b8ca5dd23009baa4d2089b3ec6d0d1974dd. Jul 11 05:24:06.029981 systemd[1]: Started cri-containerd-426c3e5e937c3dee3bc15b36d178d710d40f9897cd78747171a01f73da3de9c4.scope - libcontainer container 426c3e5e937c3dee3bc15b36d178d710d40f9897cd78747171a01f73da3de9c4. 
Jul 11 05:24:06.073217 containerd[1570]: time="2025-07-11T05:24:06.073100867Z" level=info msg="StartContainer for \"b43ac858074df6a8cda259b455a3ea9760238923816cf6eac104bc5a059799cf\" returns successfully" Jul 11 05:24:06.086686 containerd[1570]: time="2025-07-11T05:24:06.086616878Z" level=info msg="StartContainer for \"426c3e5e937c3dee3bc15b36d178d710d40f9897cd78747171a01f73da3de9c4\" returns successfully" Jul 11 05:24:06.090693 containerd[1570]: time="2025-07-11T05:24:06.090663912Z" level=info msg="StartContainer for \"2cdfe082bff3ef34b95b9a576af93b8ca5dd23009baa4d2089b3ec6d0d1974dd\" returns successfully" Jul 11 05:24:06.629615 kubelet[2327]: I0711 05:24:06.629572 2327 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 05:24:06.777911 kubelet[2327]: E0711 05:24:06.777848 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:06.780704 kubelet[2327]: E0711 05:24:06.780674 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:06.785843 kubelet[2327]: E0711 05:24:06.785812 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:06.953424 kubelet[2327]: E0711 05:24:06.951745 2327 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 11 05:24:07.058104 kubelet[2327]: I0711 05:24:07.057996 2327 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 11 05:24:07.058104 kubelet[2327]: E0711 05:24:07.058035 2327 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" 
not found" Jul 11 05:24:07.078603 kubelet[2327]: E0711 05:24:07.078560 2327 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:24:07.178790 kubelet[2327]: E0711 05:24:07.178724 2327 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:24:07.279978 kubelet[2327]: E0711 05:24:07.279843 2327 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:24:07.380616 kubelet[2327]: E0711 05:24:07.380556 2327 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:24:07.481302 kubelet[2327]: E0711 05:24:07.481239 2327 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:24:07.582164 kubelet[2327]: E0711 05:24:07.582045 2327 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:24:07.682718 kubelet[2327]: E0711 05:24:07.682660 2327 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:24:07.789477 kubelet[2327]: E0711 05:24:07.789442 2327 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 11 05:24:07.789705 kubelet[2327]: E0711 05:24:07.789604 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:08.571821 kubelet[2327]: E0711 05:24:08.571771 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:08.744367 
kubelet[2327]: I0711 05:24:08.744307 2327 apiserver.go:52] "Watching apiserver" Jul 11 05:24:08.754419 kubelet[2327]: I0711 05:24:08.754367 2327 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 11 05:24:08.786554 kubelet[2327]: E0711 05:24:08.786469 2327 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:09.685574 systemd[1]: Reload requested from client PID 2601 ('systemctl') (unit session-7.scope)... Jul 11 05:24:09.685589 systemd[1]: Reloading... Jul 11 05:24:09.771433 zram_generator::config[2647]: No configuration found. Jul 11 05:24:10.088367 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 05:24:10.226703 systemd[1]: Reloading finished in 540 ms. Jul 11 05:24:10.254061 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 05:24:10.266768 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 05:24:10.267117 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 05:24:10.267171 systemd[1]: kubelet.service: Consumed 982ms CPU time, 131.6M memory peak. Jul 11 05:24:10.269083 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 05:24:10.496046 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 05:24:10.504759 (kubelet)[2689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 05:24:10.542972 kubelet[2689]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 05:24:10.542972 kubelet[2689]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 11 05:24:10.542972 kubelet[2689]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 05:24:10.543783 kubelet[2689]: I0711 05:24:10.543034 2689 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 05:24:10.551847 kubelet[2689]: I0711 05:24:10.551810 2689 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 11 05:24:10.552527 kubelet[2689]: I0711 05:24:10.552002 2689 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 05:24:10.552527 kubelet[2689]: I0711 05:24:10.552273 2689 server.go:934] "Client rotation is on, will bootstrap in background" Jul 11 05:24:10.553817 kubelet[2689]: I0711 05:24:10.553800 2689 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 11 05:24:10.556040 kubelet[2689]: I0711 05:24:10.555926 2689 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 05:24:10.561906 kubelet[2689]: I0711 05:24:10.561880 2689 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 11 05:24:10.566101 kubelet[2689]: I0711 05:24:10.566081 2689 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 05:24:10.566251 kubelet[2689]: I0711 05:24:10.566240 2689 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 11 05:24:10.566441 kubelet[2689]: I0711 05:24:10.566413 2689 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 05:24:10.566654 kubelet[2689]: I0711 05:24:10.566492 2689 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Jul 11 05:24:10.566778 kubelet[2689]: I0711 05:24:10.566767 2689 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 05:24:10.566824 kubelet[2689]: I0711 05:24:10.566816 2689 container_manager_linux.go:300] "Creating device plugin manager" Jul 11 05:24:10.566897 kubelet[2689]: I0711 05:24:10.566888 2689 state_mem.go:36] "Initialized new in-memory state store" Jul 11 05:24:10.567037 kubelet[2689]: I0711 05:24:10.567026 2689 kubelet.go:408] "Attempting to sync node with API server" Jul 11 05:24:10.567098 kubelet[2689]: I0711 05:24:10.567088 2689 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 05:24:10.567168 kubelet[2689]: I0711 05:24:10.567159 2689 kubelet.go:314] "Adding apiserver pod source" Jul 11 05:24:10.567217 kubelet[2689]: I0711 05:24:10.567209 2689 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 05:24:10.571247 kubelet[2689]: I0711 05:24:10.571222 2689 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 11 05:24:10.571647 kubelet[2689]: I0711 05:24:10.571631 2689 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 05:24:10.572547 kubelet[2689]: I0711 05:24:10.572524 2689 server.go:1274] "Started kubelet" Jul 11 05:24:10.572716 kubelet[2689]: I0711 05:24:10.572683 2689 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 05:24:10.572872 kubelet[2689]: I0711 05:24:10.572842 2689 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 05:24:10.573197 kubelet[2689]: I0711 05:24:10.573176 2689 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 05:24:10.575077 kubelet[2689]: I0711 05:24:10.574871 2689 server.go:449] "Adding debug handlers to kubelet server" Jul 11 05:24:10.578335 
kubelet[2689]: I0711 05:24:10.578273 2689 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 05:24:10.579610 kubelet[2689]: I0711 05:24:10.579589 2689 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 05:24:10.579950 kubelet[2689]: I0711 05:24:10.579936 2689 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 11 05:24:10.580179 kubelet[2689]: I0711 05:24:10.580148 2689 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 11 05:24:10.580385 kubelet[2689]: I0711 05:24:10.580374 2689 reconciler.go:26] "Reconciler: start to sync state" Jul 11 05:24:10.581258 kubelet[2689]: I0711 05:24:10.581238 2689 factory.go:221] Registration of the systemd container factory successfully Jul 11 05:24:10.581353 kubelet[2689]: I0711 05:24:10.581334 2689 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 05:24:10.581927 kubelet[2689]: E0711 05:24:10.581598 2689 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 05:24:10.582481 kubelet[2689]: I0711 05:24:10.582462 2689 factory.go:221] Registration of the containerd container factory successfully Jul 11 05:24:10.591287 kubelet[2689]: I0711 05:24:10.591239 2689 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 05:24:10.592479 kubelet[2689]: I0711 05:24:10.592458 2689 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 11 05:24:10.592479 kubelet[2689]: I0711 05:24:10.592479 2689 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 11 05:24:10.592550 kubelet[2689]: I0711 05:24:10.592497 2689 kubelet.go:2321] "Starting kubelet main sync loop" Jul 11 05:24:10.592550 kubelet[2689]: E0711 05:24:10.592537 2689 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 05:24:10.616681 kubelet[2689]: I0711 05:24:10.616639 2689 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 11 05:24:10.616681 kubelet[2689]: I0711 05:24:10.616659 2689 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 11 05:24:10.616681 kubelet[2689]: I0711 05:24:10.616677 2689 state_mem.go:36] "Initialized new in-memory state store" Jul 11 05:24:10.616906 kubelet[2689]: I0711 05:24:10.616816 2689 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 11 05:24:10.616906 kubelet[2689]: I0711 05:24:10.616826 2689 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 11 05:24:10.616906 kubelet[2689]: I0711 05:24:10.616846 2689 policy_none.go:49] "None policy: Start" Jul 11 05:24:10.617376 kubelet[2689]: I0711 05:24:10.617358 2689 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 11 05:24:10.617376 kubelet[2689]: I0711 05:24:10.617376 2689 state_mem.go:35] "Initializing new in-memory state store" Jul 11 05:24:10.617537 kubelet[2689]: I0711 05:24:10.617519 2689 state_mem.go:75] "Updated machine memory state" Jul 11 05:24:10.621513 kubelet[2689]: I0711 05:24:10.621487 2689 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 05:24:10.621696 kubelet[2689]: I0711 05:24:10.621664 2689 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 05:24:10.621696 kubelet[2689]: I0711 05:24:10.621681 2689 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 05:24:10.621925 kubelet[2689]: I0711 05:24:10.621910 2689 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 05:24:10.685714 sudo[2727]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 11 05:24:10.686033 sudo[2727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 11 05:24:10.701015 kubelet[2689]: E0711 05:24:10.700976 2689 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 11 05:24:10.727696 kubelet[2689]: I0711 05:24:10.727667 2689 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 05:24:10.734125 kubelet[2689]: I0711 05:24:10.734094 2689 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 11 05:24:10.734218 kubelet[2689]: I0711 05:24:10.734170 2689 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 11 05:24:10.781820 kubelet[2689]: I0711 05:24:10.781603 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ceb1da8274a4204c91c39526c2a3aff5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ceb1da8274a4204c91c39526c2a3aff5\") " pod="kube-system/kube-apiserver-localhost" Jul 11 05:24:10.781820 kubelet[2689]: I0711 05:24:10.781642 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:24:10.781820 kubelet[2689]: I0711 05:24:10.781660 2689 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 11 05:24:10.781820 kubelet[2689]: I0711 05:24:10.781673 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ceb1da8274a4204c91c39526c2a3aff5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ceb1da8274a4204c91c39526c2a3aff5\") " pod="kube-system/kube-apiserver-localhost" Jul 11 05:24:10.781820 kubelet[2689]: I0711 05:24:10.781686 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ceb1da8274a4204c91c39526c2a3aff5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ceb1da8274a4204c91c39526c2a3aff5\") " pod="kube-system/kube-apiserver-localhost" Jul 11 05:24:10.783008 kubelet[2689]: I0711 05:24:10.781699 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:24:10.783008 kubelet[2689]: I0711 05:24:10.781774 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:24:10.783008 kubelet[2689]: I0711 05:24:10.781857 2689 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:24:10.783008 kubelet[2689]: I0711 05:24:10.781878 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:24:10.986079 sudo[2727]: pam_unix(sudo:session): session closed for user root Jul 11 05:24:10.999472 kubelet[2689]: E0711 05:24:10.999424 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:11.002084 kubelet[2689]: E0711 05:24:11.002042 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:11.002226 kubelet[2689]: E0711 05:24:11.002058 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:11.568703 kubelet[2689]: I0711 05:24:11.568647 2689 apiserver.go:52] "Watching apiserver" Jul 11 05:24:11.581123 kubelet[2689]: I0711 05:24:11.581079 2689 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 11 05:24:11.604310 kubelet[2689]: E0711 05:24:11.604250 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jul 11 05:24:11.604869 kubelet[2689]: E0711 05:24:11.604809 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:11.609492 kubelet[2689]: E0711 05:24:11.609448 2689 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 11 05:24:11.609689 kubelet[2689]: E0711 05:24:11.609668 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:11.637424 kubelet[2689]: I0711 05:24:11.637256 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.637229048 podStartE2EDuration="3.637229048s" podCreationTimestamp="2025-07-11 05:24:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:24:11.624966795 +0000 UTC m=+1.116146453" watchObservedRunningTime="2025-07-11 05:24:11.637229048 +0000 UTC m=+1.128408706" Jul 11 05:24:11.648710 kubelet[2689]: I0711 05:24:11.648643 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.648619244 podStartE2EDuration="1.648619244s" podCreationTimestamp="2025-07-11 05:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:24:11.639570012 +0000 UTC m=+1.130749670" watchObservedRunningTime="2025-07-11 05:24:11.648619244 +0000 UTC m=+1.139798903" Jul 11 05:24:11.648914 kubelet[2689]: I0711 05:24:11.648771 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.6487661390000001 podStartE2EDuration="1.648766139s" podCreationTimestamp="2025-07-11 05:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:24:11.648735067 +0000 UTC m=+1.139914715" watchObservedRunningTime="2025-07-11 05:24:11.648766139 +0000 UTC m=+1.139945787" Jul 11 05:24:12.453192 sudo[1769]: pam_unix(sudo:session): session closed for user root Jul 11 05:24:12.454956 sshd[1768]: Connection closed by 10.0.0.1 port 45324 Jul 11 05:24:12.455385 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Jul 11 05:24:12.460818 systemd[1]: sshd@6-10.0.0.94:22-10.0.0.1:45324.service: Deactivated successfully. Jul 11 05:24:12.463284 systemd[1]: session-7.scope: Deactivated successfully. Jul 11 05:24:12.463526 systemd[1]: session-7.scope: Consumed 4.571s CPU time, 262.4M memory peak. Jul 11 05:24:12.465179 systemd-logind[1546]: Session 7 logged out. Waiting for processes to exit. Jul 11 05:24:12.466623 systemd-logind[1546]: Removed session 7. 
Jul 11 05:24:12.605634 kubelet[2689]: E0711 05:24:12.605599 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:12.606045 kubelet[2689]: E0711 05:24:12.605603 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:14.074850 kubelet[2689]: E0711 05:24:14.074811 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:14.327700 kubelet[2689]: I0711 05:24:14.327584 2689 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 11 05:24:14.327942 containerd[1570]: time="2025-07-11T05:24:14.327905610Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 11 05:24:14.328325 kubelet[2689]: I0711 05:24:14.328162 2689 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 11 05:24:15.263891 systemd[1]: Created slice kubepods-besteffort-pod0185b84e_5523_408b_88c8_32ddd08f6832.slice - libcontainer container kubepods-besteffort-pod0185b84e_5523_408b_88c8_32ddd08f6832.slice. Jul 11 05:24:15.277977 systemd[1]: Created slice kubepods-burstable-podd9030cb4_58cf_4b84_b64a_69e9ba0e2a87.slice - libcontainer container kubepods-burstable-podd9030cb4_58cf_4b84_b64a_69e9ba0e2a87.slice. 
Jul 11 05:24:15.310819 kubelet[2689]: I0711 05:24:15.310782 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-cni-path\") pod \"cilium-nxwks\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") " pod="kube-system/cilium-nxwks" Jul 11 05:24:15.310819 kubelet[2689]: I0711 05:24:15.310824 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-host-proc-sys-kernel\") pod \"cilium-nxwks\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") " pod="kube-system/cilium-nxwks" Jul 11 05:24:15.311237 kubelet[2689]: I0711 05:24:15.310845 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsmtn\" (UniqueName: \"kubernetes.io/projected/0185b84e-5523-408b-88c8-32ddd08f6832-kube-api-access-hsmtn\") pod \"kube-proxy-t774l\" (UID: \"0185b84e-5523-408b-88c8-32ddd08f6832\") " pod="kube-system/kube-proxy-t774l" Jul 11 05:24:15.311237 kubelet[2689]: I0711 05:24:15.310866 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-lib-modules\") pod \"cilium-nxwks\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") " pod="kube-system/cilium-nxwks" Jul 11 05:24:15.311237 kubelet[2689]: I0711 05:24:15.310886 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-cilium-cgroup\") pod \"cilium-nxwks\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") " pod="kube-system/cilium-nxwks" Jul 11 05:24:15.311237 kubelet[2689]: I0711 05:24:15.310907 2689 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0185b84e-5523-408b-88c8-32ddd08f6832-kube-proxy\") pod \"kube-proxy-t774l\" (UID: \"0185b84e-5523-408b-88c8-32ddd08f6832\") " pod="kube-system/kube-proxy-t774l" Jul 11 05:24:15.311237 kubelet[2689]: I0711 05:24:15.310951 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-bpf-maps\") pod \"cilium-nxwks\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") " pod="kube-system/cilium-nxwks" Jul 11 05:24:15.311237 kubelet[2689]: I0711 05:24:15.310998 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-xtables-lock\") pod \"cilium-nxwks\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") " pod="kube-system/cilium-nxwks" Jul 11 05:24:15.311476 kubelet[2689]: I0711 05:24:15.311039 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-host-proc-sys-net\") pod \"cilium-nxwks\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") " pod="kube-system/cilium-nxwks" Jul 11 05:24:15.311476 kubelet[2689]: I0711 05:24:15.311074 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8phd\" (UniqueName: \"kubernetes.io/projected/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-kube-api-access-q8phd\") pod \"cilium-nxwks\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") " pod="kube-system/cilium-nxwks" Jul 11 05:24:15.311476 kubelet[2689]: I0711 05:24:15.311101 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/0185b84e-5523-408b-88c8-32ddd08f6832-xtables-lock\") pod \"kube-proxy-t774l\" (UID: \"0185b84e-5523-408b-88c8-32ddd08f6832\") " pod="kube-system/kube-proxy-t774l" Jul 11 05:24:15.311476 kubelet[2689]: I0711 05:24:15.311135 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-clustermesh-secrets\") pod \"cilium-nxwks\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") " pod="kube-system/cilium-nxwks" Jul 11 05:24:15.311476 kubelet[2689]: I0711 05:24:15.311207 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-etc-cni-netd\") pod \"cilium-nxwks\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") " pod="kube-system/cilium-nxwks" Jul 11 05:24:15.311595 kubelet[2689]: I0711 05:24:15.311240 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-cilium-config-path\") pod \"cilium-nxwks\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") " pod="kube-system/cilium-nxwks" Jul 11 05:24:15.311595 kubelet[2689]: I0711 05:24:15.311263 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-hubble-tls\") pod \"cilium-nxwks\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") " pod="kube-system/cilium-nxwks" Jul 11 05:24:15.311595 kubelet[2689]: I0711 05:24:15.311282 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-cilium-run\") pod \"cilium-nxwks\" (UID: 
\"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") " pod="kube-system/cilium-nxwks" Jul 11 05:24:15.311595 kubelet[2689]: I0711 05:24:15.311303 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-hostproc\") pod \"cilium-nxwks\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") " pod="kube-system/cilium-nxwks" Jul 11 05:24:15.311595 kubelet[2689]: I0711 05:24:15.311336 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0185b84e-5523-408b-88c8-32ddd08f6832-lib-modules\") pod \"kube-proxy-t774l\" (UID: \"0185b84e-5523-408b-88c8-32ddd08f6832\") " pod="kube-system/kube-proxy-t774l" Jul 11 05:24:15.525232 systemd[1]: Created slice kubepods-besteffort-pod1b2eaae6_b54a_4e1b_857d_10bc190f4db7.slice - libcontainer container kubepods-besteffort-pod1b2eaae6_b54a_4e1b_857d_10bc190f4db7.slice. 
Jul 11 05:24:15.573827 kubelet[2689]: E0711 05:24:15.573774 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:15.574373 containerd[1570]: time="2025-07-11T05:24:15.574324292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t774l,Uid:0185b84e-5523-408b-88c8-32ddd08f6832,Namespace:kube-system,Attempt:0,}" Jul 11 05:24:15.581670 kubelet[2689]: E0711 05:24:15.581646 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:15.582136 containerd[1570]: time="2025-07-11T05:24:15.582098570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nxwks,Uid:d9030cb4-58cf-4b84-b64a-69e9ba0e2a87,Namespace:kube-system,Attempt:0,}" Jul 11 05:24:15.612505 kubelet[2689]: I0711 05:24:15.612471 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b2eaae6-b54a-4e1b-857d-10bc190f4db7-cilium-config-path\") pod \"cilium-operator-5d85765b45-wzs4j\" (UID: \"1b2eaae6-b54a-4e1b-857d-10bc190f4db7\") " pod="kube-system/cilium-operator-5d85765b45-wzs4j" Jul 11 05:24:15.612622 kubelet[2689]: I0711 05:24:15.612509 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt8r2\" (UniqueName: \"kubernetes.io/projected/1b2eaae6-b54a-4e1b-857d-10bc190f4db7-kube-api-access-gt8r2\") pod \"cilium-operator-5d85765b45-wzs4j\" (UID: \"1b2eaae6-b54a-4e1b-857d-10bc190f4db7\") " pod="kube-system/cilium-operator-5d85765b45-wzs4j" Jul 11 05:24:15.712028 containerd[1570]: time="2025-07-11T05:24:15.711965344Z" level=info msg="connecting to shim 6553d490fe3657b2eb16b1acde15c19a595e99b9fa49e3889654a0d194ac0499" 
address="unix:///run/containerd/s/0892a7bc119fc4275dc08989af4e88b0846bb5f767b4eeb33931300b405a6865" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:24:15.716679 containerd[1570]: time="2025-07-11T05:24:15.716580881Z" level=info msg="connecting to shim 484647bdf4e1c2c0f276653c45115b23413ee8c185a288344a941ebdeebaa62b" address="unix:///run/containerd/s/c079f9b0d7fcc250b50bc29751d5637b4eaf40d250f41d7788f4a330920771e1" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:24:15.759626 systemd[1]: Started cri-containerd-484647bdf4e1c2c0f276653c45115b23413ee8c185a288344a941ebdeebaa62b.scope - libcontainer container 484647bdf4e1c2c0f276653c45115b23413ee8c185a288344a941ebdeebaa62b. Jul 11 05:24:15.764052 systemd[1]: Started cri-containerd-6553d490fe3657b2eb16b1acde15c19a595e99b9fa49e3889654a0d194ac0499.scope - libcontainer container 6553d490fe3657b2eb16b1acde15c19a595e99b9fa49e3889654a0d194ac0499. Jul 11 05:24:15.792444 containerd[1570]: time="2025-07-11T05:24:15.792269253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nxwks,Uid:d9030cb4-58cf-4b84-b64a-69e9ba0e2a87,Namespace:kube-system,Attempt:0,} returns sandbox id \"484647bdf4e1c2c0f276653c45115b23413ee8c185a288344a941ebdeebaa62b\"" Jul 11 05:24:15.793684 kubelet[2689]: E0711 05:24:15.793626 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:15.795752 containerd[1570]: time="2025-07-11T05:24:15.795666877Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 11 05:24:15.806019 containerd[1570]: time="2025-07-11T05:24:15.805965117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t774l,Uid:0185b84e-5523-408b-88c8-32ddd08f6832,Namespace:kube-system,Attempt:0,} returns sandbox id \"6553d490fe3657b2eb16b1acde15c19a595e99b9fa49e3889654a0d194ac0499\"" Jul 11 
05:24:15.806614 kubelet[2689]: E0711 05:24:15.806585 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:15.808533 containerd[1570]: time="2025-07-11T05:24:15.808502312Z" level=info msg="CreateContainer within sandbox \"6553d490fe3657b2eb16b1acde15c19a595e99b9fa49e3889654a0d194ac0499\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 11 05:24:15.820503 containerd[1570]: time="2025-07-11T05:24:15.820451224Z" level=info msg="Container c9180a3ae8a6f709acf4ba4605fe843922f6aabef782bbf3f7007435403d31d8: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:24:15.829153 kubelet[2689]: E0711 05:24:15.829105 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:15.829290 containerd[1570]: time="2025-07-11T05:24:15.829211874Z" level=info msg="CreateContainer within sandbox \"6553d490fe3657b2eb16b1acde15c19a595e99b9fa49e3889654a0d194ac0499\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c9180a3ae8a6f709acf4ba4605fe843922f6aabef782bbf3f7007435403d31d8\"" Jul 11 05:24:15.829981 containerd[1570]: time="2025-07-11T05:24:15.829952002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-wzs4j,Uid:1b2eaae6-b54a-4e1b-857d-10bc190f4db7,Namespace:kube-system,Attempt:0,}" Jul 11 05:24:15.830289 containerd[1570]: time="2025-07-11T05:24:15.830229710Z" level=info msg="StartContainer for \"c9180a3ae8a6f709acf4ba4605fe843922f6aabef782bbf3f7007435403d31d8\"" Jul 11 05:24:15.832330 containerd[1570]: time="2025-07-11T05:24:15.832277119Z" level=info msg="connecting to shim c9180a3ae8a6f709acf4ba4605fe843922f6aabef782bbf3f7007435403d31d8" address="unix:///run/containerd/s/0892a7bc119fc4275dc08989af4e88b0846bb5f767b4eeb33931300b405a6865" 
protocol=ttrpc version=3 Jul 11 05:24:15.855141 containerd[1570]: time="2025-07-11T05:24:15.855077974Z" level=info msg="connecting to shim 999e4bb15bf9af11f43188211a775d192c870a2eb40c442674dac02505faaf5d" address="unix:///run/containerd/s/41adb6e659c33fd66cd30952c124ec708f696b6a04e12ab486284ddb4b0f3e34" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:24:15.855604 systemd[1]: Started cri-containerd-c9180a3ae8a6f709acf4ba4605fe843922f6aabef782bbf3f7007435403d31d8.scope - libcontainer container c9180a3ae8a6f709acf4ba4605fe843922f6aabef782bbf3f7007435403d31d8. Jul 11 05:24:15.883534 systemd[1]: Started cri-containerd-999e4bb15bf9af11f43188211a775d192c870a2eb40c442674dac02505faaf5d.scope - libcontainer container 999e4bb15bf9af11f43188211a775d192c870a2eb40c442674dac02505faaf5d. Jul 11 05:24:16.041749 containerd[1570]: time="2025-07-11T05:24:16.041693926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-wzs4j,Uid:1b2eaae6-b54a-4e1b-857d-10bc190f4db7,Namespace:kube-system,Attempt:0,} returns sandbox id \"999e4bb15bf9af11f43188211a775d192c870a2eb40c442674dac02505faaf5d\"" Jul 11 05:24:16.042706 containerd[1570]: time="2025-07-11T05:24:16.042632684Z" level=info msg="StartContainer for \"c9180a3ae8a6f709acf4ba4605fe843922f6aabef782bbf3f7007435403d31d8\" returns successfully" Jul 11 05:24:16.042746 kubelet[2689]: E0711 05:24:16.042724 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:16.614482 kubelet[2689]: E0711 05:24:16.614379 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:19.132334 kubelet[2689]: E0711 05:24:19.132259 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:19.145653 kubelet[2689]: I0711 05:24:19.145603 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t774l" podStartSLOduration=4.145582557 podStartE2EDuration="4.145582557s" podCreationTimestamp="2025-07-11 05:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:24:16.961560502 +0000 UTC m=+6.452740160" watchObservedRunningTime="2025-07-11 05:24:19.145582557 +0000 UTC m=+8.636762215" Jul 11 05:24:19.619220 kubelet[2689]: E0711 05:24:19.619188 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:20.419830 kubelet[2689]: E0711 05:24:20.419781 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:20.620964 kubelet[2689]: E0711 05:24:20.620935 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:21.945195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2171479586.mount: Deactivated successfully. 
Jul 11 05:24:24.084931 kubelet[2689]: E0711 05:24:24.084871 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:24.626921 kubelet[2689]: E0711 05:24:24.626884 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:26.850189 containerd[1570]: time="2025-07-11T05:24:26.850124013Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:24:26.850990 containerd[1570]: time="2025-07-11T05:24:26.850905795Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 11 05:24:26.852039 containerd[1570]: time="2025-07-11T05:24:26.851992777Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:24:26.853572 containerd[1570]: time="2025-07-11T05:24:26.853521439Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.057815305s" Jul 11 05:24:26.853572 containerd[1570]: time="2025-07-11T05:24:26.853562169Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 11 05:24:26.854897 containerd[1570]: time="2025-07-11T05:24:26.854843420Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 11 05:24:26.856246 containerd[1570]: time="2025-07-11T05:24:26.856203187Z" level=info msg="CreateContainer within sandbox \"484647bdf4e1c2c0f276653c45115b23413ee8c185a288344a941ebdeebaa62b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 11 05:24:26.865250 containerd[1570]: time="2025-07-11T05:24:26.865214737Z" level=info msg="Container e86875f6a09e780d9265156ad81f383771d739b6f8d9b5f45c57cece1bca9f03: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:24:26.871791 containerd[1570]: time="2025-07-11T05:24:26.871738074Z" level=info msg="CreateContainer within sandbox \"484647bdf4e1c2c0f276653c45115b23413ee8c185a288344a941ebdeebaa62b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e86875f6a09e780d9265156ad81f383771d739b6f8d9b5f45c57cece1bca9f03\"" Jul 11 05:24:26.872416 containerd[1570]: time="2025-07-11T05:24:26.872364363Z" level=info msg="StartContainer for \"e86875f6a09e780d9265156ad81f383771d739b6f8d9b5f45c57cece1bca9f03\"" Jul 11 05:24:26.873332 containerd[1570]: time="2025-07-11T05:24:26.873290396Z" level=info msg="connecting to shim e86875f6a09e780d9265156ad81f383771d739b6f8d9b5f45c57cece1bca9f03" address="unix:///run/containerd/s/c079f9b0d7fcc250b50bc29751d5637b4eaf40d250f41d7788f4a330920771e1" protocol=ttrpc version=3 Jul 11 05:24:26.922520 systemd[1]: Started cri-containerd-e86875f6a09e780d9265156ad81f383771d739b6f8d9b5f45c57cece1bca9f03.scope - libcontainer container e86875f6a09e780d9265156ad81f383771d739b6f8d9b5f45c57cece1bca9f03. 
Jul 11 05:24:26.955101 containerd[1570]: time="2025-07-11T05:24:26.955054438Z" level=info msg="StartContainer for \"e86875f6a09e780d9265156ad81f383771d739b6f8d9b5f45c57cece1bca9f03\" returns successfully" Jul 11 05:24:26.963828 systemd[1]: cri-containerd-e86875f6a09e780d9265156ad81f383771d739b6f8d9b5f45c57cece1bca9f03.scope: Deactivated successfully. Jul 11 05:24:26.965428 containerd[1570]: time="2025-07-11T05:24:26.965314831Z" level=info msg="received exit event container_id:\"e86875f6a09e780d9265156ad81f383771d739b6f8d9b5f45c57cece1bca9f03\" id:\"e86875f6a09e780d9265156ad81f383771d739b6f8d9b5f45c57cece1bca9f03\" pid:3115 exited_at:{seconds:1752211466 nanos:964772497}" Jul 11 05:24:26.965599 containerd[1570]: time="2025-07-11T05:24:26.965547696Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e86875f6a09e780d9265156ad81f383771d739b6f8d9b5f45c57cece1bca9f03\" id:\"e86875f6a09e780d9265156ad81f383771d739b6f8d9b5f45c57cece1bca9f03\" pid:3115 exited_at:{seconds:1752211466 nanos:964772497}" Jul 11 05:24:26.985386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e86875f6a09e780d9265156ad81f383771d739b6f8d9b5f45c57cece1bca9f03-rootfs.mount: Deactivated successfully. 
Jul 11 05:24:27.633990 kubelet[2689]: E0711 05:24:27.633897 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:27.636295 containerd[1570]: time="2025-07-11T05:24:27.636244579Z" level=info msg="CreateContainer within sandbox \"484647bdf4e1c2c0f276653c45115b23413ee8c185a288344a941ebdeebaa62b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 11 05:24:27.648953 containerd[1570]: time="2025-07-11T05:24:27.648888956Z" level=info msg="Container 72a423c9b406ffb84a8ee246881bd1e7873b23cafbac01074867db558679097f: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:24:27.656070 containerd[1570]: time="2025-07-11T05:24:27.656017567Z" level=info msg="CreateContainer within sandbox \"484647bdf4e1c2c0f276653c45115b23413ee8c185a288344a941ebdeebaa62b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"72a423c9b406ffb84a8ee246881bd1e7873b23cafbac01074867db558679097f\"" Jul 11 05:24:27.656633 containerd[1570]: time="2025-07-11T05:24:27.656564081Z" level=info msg="StartContainer for \"72a423c9b406ffb84a8ee246881bd1e7873b23cafbac01074867db558679097f\"" Jul 11 05:24:27.657522 containerd[1570]: time="2025-07-11T05:24:27.657471264Z" level=info msg="connecting to shim 72a423c9b406ffb84a8ee246881bd1e7873b23cafbac01074867db558679097f" address="unix:///run/containerd/s/c079f9b0d7fcc250b50bc29751d5637b4eaf40d250f41d7788f4a330920771e1" protocol=ttrpc version=3 Jul 11 05:24:27.680557 systemd[1]: Started cri-containerd-72a423c9b406ffb84a8ee246881bd1e7873b23cafbac01074867db558679097f.scope - libcontainer container 72a423c9b406ffb84a8ee246881bd1e7873b23cafbac01074867db558679097f. 
Jul 11 05:24:27.715264 containerd[1570]: time="2025-07-11T05:24:27.715219316Z" level=info msg="StartContainer for \"72a423c9b406ffb84a8ee246881bd1e7873b23cafbac01074867db558679097f\" returns successfully" Jul 11 05:24:27.730553 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 05:24:27.731202 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 11 05:24:27.731513 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 11 05:24:27.733018 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 05:24:27.734438 systemd[1]: cri-containerd-72a423c9b406ffb84a8ee246881bd1e7873b23cafbac01074867db558679097f.scope: Deactivated successfully. Jul 11 05:24:27.734630 containerd[1570]: time="2025-07-11T05:24:27.734584542Z" level=info msg="received exit event container_id:\"72a423c9b406ffb84a8ee246881bd1e7873b23cafbac01074867db558679097f\" id:\"72a423c9b406ffb84a8ee246881bd1e7873b23cafbac01074867db558679097f\" pid:3159 exited_at:{seconds:1752211467 nanos:734327934}" Jul 11 05:24:27.735080 containerd[1570]: time="2025-07-11T05:24:27.735032564Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72a423c9b406ffb84a8ee246881bd1e7873b23cafbac01074867db558679097f\" id:\"72a423c9b406ffb84a8ee246881bd1e7873b23cafbac01074867db558679097f\" pid:3159 exited_at:{seconds:1752211467 nanos:734327934}" Jul 11 05:24:27.763279 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 11 05:24:27.772526 update_engine[1553]: I20250711 05:24:27.772441 1553 update_attempter.cc:509] Updating boot flags... 
Jul 11 05:24:28.637941 kubelet[2689]: E0711 05:24:28.637827 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:28.639688 containerd[1570]: time="2025-07-11T05:24:28.639636998Z" level=info msg="CreateContainer within sandbox \"484647bdf4e1c2c0f276653c45115b23413ee8c185a288344a941ebdeebaa62b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 11 05:24:28.657360 containerd[1570]: time="2025-07-11T05:24:28.657303651Z" level=info msg="Container df36638937db709930c27c48c8ca45e8c118ce0f311fa8c143865b5cf37e66b3: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:24:28.662199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4051971637.mount: Deactivated successfully. Jul 11 05:24:28.667089 containerd[1570]: time="2025-07-11T05:24:28.667042608Z" level=info msg="CreateContainer within sandbox \"484647bdf4e1c2c0f276653c45115b23413ee8c185a288344a941ebdeebaa62b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"df36638937db709930c27c48c8ca45e8c118ce0f311fa8c143865b5cf37e66b3\"" Jul 11 05:24:28.667614 containerd[1570]: time="2025-07-11T05:24:28.667574588Z" level=info msg="StartContainer for \"df36638937db709930c27c48c8ca45e8c118ce0f311fa8c143865b5cf37e66b3\"" Jul 11 05:24:28.668934 containerd[1570]: time="2025-07-11T05:24:28.668893956Z" level=info msg="connecting to shim df36638937db709930c27c48c8ca45e8c118ce0f311fa8c143865b5cf37e66b3" address="unix:///run/containerd/s/c079f9b0d7fcc250b50bc29751d5637b4eaf40d250f41d7788f4a330920771e1" protocol=ttrpc version=3 Jul 11 05:24:28.700575 systemd[1]: Started cri-containerd-df36638937db709930c27c48c8ca45e8c118ce0f311fa8c143865b5cf37e66b3.scope - libcontainer container df36638937db709930c27c48c8ca45e8c118ce0f311fa8c143865b5cf37e66b3. 
Jul 11 05:24:28.741284 systemd[1]: cri-containerd-df36638937db709930c27c48c8ca45e8c118ce0f311fa8c143865b5cf37e66b3.scope: Deactivated successfully. Jul 11 05:24:28.743326 containerd[1570]: time="2025-07-11T05:24:28.743292640Z" level=info msg="StartContainer for \"df36638937db709930c27c48c8ca45e8c118ce0f311fa8c143865b5cf37e66b3\" returns successfully" Jul 11 05:24:28.743597 containerd[1570]: time="2025-07-11T05:24:28.743552895Z" level=info msg="received exit event container_id:\"df36638937db709930c27c48c8ca45e8c118ce0f311fa8c143865b5cf37e66b3\" id:\"df36638937db709930c27c48c8ca45e8c118ce0f311fa8c143865b5cf37e66b3\" pid:3223 exited_at:{seconds:1752211468 nanos:743224660}" Jul 11 05:24:28.744063 containerd[1570]: time="2025-07-11T05:24:28.743571447Z" level=info msg="TaskExit event in podsandbox handler container_id:\"df36638937db709930c27c48c8ca45e8c118ce0f311fa8c143865b5cf37e66b3\" id:\"df36638937db709930c27c48c8ca45e8c118ce0f311fa8c143865b5cf37e66b3\" pid:3223 exited_at:{seconds:1752211468 nanos:743224660}" Jul 11 05:24:28.766589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df36638937db709930c27c48c8ca45e8c118ce0f311fa8c143865b5cf37e66b3-rootfs.mount: Deactivated successfully. 
Jul 11 05:24:29.645510 kubelet[2689]: E0711 05:24:29.645452 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:29.649071 containerd[1570]: time="2025-07-11T05:24:29.649012061Z" level=info msg="CreateContainer within sandbox \"484647bdf4e1c2c0f276653c45115b23413ee8c185a288344a941ebdeebaa62b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 11 05:24:29.661950 containerd[1570]: time="2025-07-11T05:24:29.661887976Z" level=info msg="Container 8754bf4f91923c14c58f5f5316d210b202ad080cf34abe688f4e29eb108015f9: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:24:29.669866 containerd[1570]: time="2025-07-11T05:24:29.669809673Z" level=info msg="CreateContainer within sandbox \"484647bdf4e1c2c0f276653c45115b23413ee8c185a288344a941ebdeebaa62b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8754bf4f91923c14c58f5f5316d210b202ad080cf34abe688f4e29eb108015f9\"" Jul 11 05:24:29.670426 containerd[1570]: time="2025-07-11T05:24:29.670363995Z" level=info msg="StartContainer for \"8754bf4f91923c14c58f5f5316d210b202ad080cf34abe688f4e29eb108015f9\"" Jul 11 05:24:29.671315 containerd[1570]: time="2025-07-11T05:24:29.671255969Z" level=info msg="connecting to shim 8754bf4f91923c14c58f5f5316d210b202ad080cf34abe688f4e29eb108015f9" address="unix:///run/containerd/s/c079f9b0d7fcc250b50bc29751d5637b4eaf40d250f41d7788f4a330920771e1" protocol=ttrpc version=3 Jul 11 05:24:29.692582 systemd[1]: Started cri-containerd-8754bf4f91923c14c58f5f5316d210b202ad080cf34abe688f4e29eb108015f9.scope - libcontainer container 8754bf4f91923c14c58f5f5316d210b202ad080cf34abe688f4e29eb108015f9. Jul 11 05:24:29.722231 systemd[1]: cri-containerd-8754bf4f91923c14c58f5f5316d210b202ad080cf34abe688f4e29eb108015f9.scope: Deactivated successfully. 
Jul 11 05:24:29.722988 containerd[1570]: time="2025-07-11T05:24:29.722946453Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8754bf4f91923c14c58f5f5316d210b202ad080cf34abe688f4e29eb108015f9\" id:\"8754bf4f91923c14c58f5f5316d210b202ad080cf34abe688f4e29eb108015f9\" pid:3262 exited_at:{seconds:1752211469 nanos:722477001}" Jul 11 05:24:29.724770 containerd[1570]: time="2025-07-11T05:24:29.724735069Z" level=info msg="received exit event container_id:\"8754bf4f91923c14c58f5f5316d210b202ad080cf34abe688f4e29eb108015f9\" id:\"8754bf4f91923c14c58f5f5316d210b202ad080cf34abe688f4e29eb108015f9\" pid:3262 exited_at:{seconds:1752211469 nanos:722477001}" Jul 11 05:24:29.726775 containerd[1570]: time="2025-07-11T05:24:29.726740695Z" level=info msg="StartContainer for \"8754bf4f91923c14c58f5f5316d210b202ad080cf34abe688f4e29eb108015f9\" returns successfully" Jul 11 05:24:29.745783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8754bf4f91923c14c58f5f5316d210b202ad080cf34abe688f4e29eb108015f9-rootfs.mount: Deactivated successfully. 
Jul 11 05:24:30.431936 containerd[1570]: time="2025-07-11T05:24:30.431885163Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:24:30.432655 containerd[1570]: time="2025-07-11T05:24:30.432615866Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 11 05:24:30.433858 containerd[1570]: time="2025-07-11T05:24:30.433835420Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:24:30.434945 containerd[1570]: time="2025-07-11T05:24:30.434919694Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.580042015s" Jul 11 05:24:30.434991 containerd[1570]: time="2025-07-11T05:24:30.434949066Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 11 05:24:30.436773 containerd[1570]: time="2025-07-11T05:24:30.436749389Z" level=info msg="CreateContainer within sandbox \"999e4bb15bf9af11f43188211a775d192c870a2eb40c442674dac02505faaf5d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 11 05:24:30.446997 containerd[1570]: time="2025-07-11T05:24:30.446947337Z" level=info msg="Container 
eeb510243d6677730c6a2d06c42ee9af1f6ae76d8db3b994bc9b63cdd8501f0e: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:24:30.453689 containerd[1570]: time="2025-07-11T05:24:30.453642431Z" level=info msg="CreateContainer within sandbox \"999e4bb15bf9af11f43188211a775d192c870a2eb40c442674dac02505faaf5d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"eeb510243d6677730c6a2d06c42ee9af1f6ae76d8db3b994bc9b63cdd8501f0e\"" Jul 11 05:24:30.454430 containerd[1570]: time="2025-07-11T05:24:30.454031184Z" level=info msg="StartContainer for \"eeb510243d6677730c6a2d06c42ee9af1f6ae76d8db3b994bc9b63cdd8501f0e\"" Jul 11 05:24:30.454973 containerd[1570]: time="2025-07-11T05:24:30.454926386Z" level=info msg="connecting to shim eeb510243d6677730c6a2d06c42ee9af1f6ae76d8db3b994bc9b63cdd8501f0e" address="unix:///run/containerd/s/41adb6e659c33fd66cd30952c124ec708f696b6a04e12ab486284ddb4b0f3e34" protocol=ttrpc version=3 Jul 11 05:24:30.476527 systemd[1]: Started cri-containerd-eeb510243d6677730c6a2d06c42ee9af1f6ae76d8db3b994bc9b63cdd8501f0e.scope - libcontainer container eeb510243d6677730c6a2d06c42ee9af1f6ae76d8db3b994bc9b63cdd8501f0e. 
Jul 11 05:24:30.505756 containerd[1570]: time="2025-07-11T05:24:30.505700198Z" level=info msg="StartContainer for \"eeb510243d6677730c6a2d06c42ee9af1f6ae76d8db3b994bc9b63cdd8501f0e\" returns successfully" Jul 11 05:24:30.655134 kubelet[2689]: E0711 05:24:30.654564 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:30.661933 kubelet[2689]: E0711 05:24:30.661722 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:24:30.662721 containerd[1570]: time="2025-07-11T05:24:30.662501071Z" level=info msg="CreateContainer within sandbox \"484647bdf4e1c2c0f276653c45115b23413ee8c185a288344a941ebdeebaa62b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 11 05:24:30.663915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount819541511.mount: Deactivated successfully. 
Jul 11 05:24:30.680404 containerd[1570]: time="2025-07-11T05:24:30.680336007Z" level=info msg="Container 738bb7b78cb8917259fa1034797c5f487bdba29f01123e8f04a41d174f210715: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:24:30.689916 containerd[1570]: time="2025-07-11T05:24:30.689816555Z" level=info msg="CreateContainer within sandbox \"484647bdf4e1c2c0f276653c45115b23413ee8c185a288344a941ebdeebaa62b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"738bb7b78cb8917259fa1034797c5f487bdba29f01123e8f04a41d174f210715\"" Jul 11 05:24:30.690663 containerd[1570]: time="2025-07-11T05:24:30.690640141Z" level=info msg="StartContainer for \"738bb7b78cb8917259fa1034797c5f487bdba29f01123e8f04a41d174f210715\"" Jul 11 05:24:30.691552 kubelet[2689]: I0711 05:24:30.691439 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-wzs4j" podStartSLOduration=1.299047638 podStartE2EDuration="15.691419611s" podCreationTimestamp="2025-07-11 05:24:15 +0000 UTC" firstStartedPulling="2025-07-11 05:24:16.043268869 +0000 UTC m=+5.534448527" lastFinishedPulling="2025-07-11 05:24:30.435640842 +0000 UTC m=+19.926820500" observedRunningTime="2025-07-11 05:24:30.691040816 +0000 UTC m=+20.182220474" watchObservedRunningTime="2025-07-11 05:24:30.691419611 +0000 UTC m=+20.182599269" Jul 11 05:24:30.692509 containerd[1570]: time="2025-07-11T05:24:30.692446155Z" level=info msg="connecting to shim 738bb7b78cb8917259fa1034797c5f487bdba29f01123e8f04a41d174f210715" address="unix:///run/containerd/s/c079f9b0d7fcc250b50bc29751d5637b4eaf40d250f41d7788f4a330920771e1" protocol=ttrpc version=3 Jul 11 05:24:30.717645 systemd[1]: Started cri-containerd-738bb7b78cb8917259fa1034797c5f487bdba29f01123e8f04a41d174f210715.scope - libcontainer container 738bb7b78cb8917259fa1034797c5f487bdba29f01123e8f04a41d174f210715. 
Jul 11 05:24:30.757015 containerd[1570]: time="2025-07-11T05:24:30.756966699Z" level=info msg="StartContainer for \"738bb7b78cb8917259fa1034797c5f487bdba29f01123e8f04a41d174f210715\" returns successfully"
Jul 11 05:24:30.868442 containerd[1570]: time="2025-07-11T05:24:30.868046689Z" level=info msg="TaskExit event in podsandbox handler container_id:\"738bb7b78cb8917259fa1034797c5f487bdba29f01123e8f04a41d174f210715\" id:\"65736dee385a76db0d52e8aeaa58197bac5fc9ba94290463c18afd801d521099\" pid:3383 exited_at:{seconds:1752211470 nanos:867505258}"
Jul 11 05:24:30.968137 kubelet[2689]: I0711 05:24:30.968004 2689 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 11 05:24:31.002920 systemd[1]: Created slice kubepods-burstable-pod67024fe5_de7c_4751_99ee_8c331db5f494.slice - libcontainer container kubepods-burstable-pod67024fe5_de7c_4751_99ee_8c331db5f494.slice.
Jul 11 05:24:31.007431 kubelet[2689]: I0711 05:24:31.007184 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4tjm\" (UniqueName: \"kubernetes.io/projected/67024fe5-de7c-4751-99ee-8c331db5f494-kube-api-access-h4tjm\") pod \"coredns-7c65d6cfc9-lqxml\" (UID: \"67024fe5-de7c-4751-99ee-8c331db5f494\") " pod="kube-system/coredns-7c65d6cfc9-lqxml"
Jul 11 05:24:31.007431 kubelet[2689]: I0711 05:24:31.007217 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67024fe5-de7c-4751-99ee-8c331db5f494-config-volume\") pod \"coredns-7c65d6cfc9-lqxml\" (UID: \"67024fe5-de7c-4751-99ee-8c331db5f494\") " pod="kube-system/coredns-7c65d6cfc9-lqxml"
Jul 11 05:24:31.007431 kubelet[2689]: I0711 05:24:31.007233 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvm5h\" (UniqueName: \"kubernetes.io/projected/6d641e7c-0f93-421e-a94f-7a245108c110-kube-api-access-qvm5h\") pod \"coredns-7c65d6cfc9-r69s7\" (UID: \"6d641e7c-0f93-421e-a94f-7a245108c110\") " pod="kube-system/coredns-7c65d6cfc9-r69s7"
Jul 11 05:24:31.007431 kubelet[2689]: I0711 05:24:31.007246 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d641e7c-0f93-421e-a94f-7a245108c110-config-volume\") pod \"coredns-7c65d6cfc9-r69s7\" (UID: \"6d641e7c-0f93-421e-a94f-7a245108c110\") " pod="kube-system/coredns-7c65d6cfc9-r69s7"
Jul 11 05:24:31.013091 systemd[1]: Created slice kubepods-burstable-pod6d641e7c_0f93_421e_a94f_7a245108c110.slice - libcontainer container kubepods-burstable-pod6d641e7c_0f93_421e_a94f_7a245108c110.slice.
Jul 11 05:24:31.307476 kubelet[2689]: E0711 05:24:31.307080 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:31.308168 containerd[1570]: time="2025-07-11T05:24:31.308098906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lqxml,Uid:67024fe5-de7c-4751-99ee-8c331db5f494,Namespace:kube-system,Attempt:0,}"
Jul 11 05:24:31.317832 kubelet[2689]: E0711 05:24:31.317802 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:31.318243 containerd[1570]: time="2025-07-11T05:24:31.318190925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r69s7,Uid:6d641e7c-0f93-421e-a94f-7a245108c110,Namespace:kube-system,Attempt:0,}"
Jul 11 05:24:31.670462 kubelet[2689]: E0711 05:24:31.670328 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:31.670462 kubelet[2689]: E0711 05:24:31.670363 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:31.688129 kubelet[2689]: I0711 05:24:31.688057 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nxwks" podStartSLOduration=5.628529959 podStartE2EDuration="16.688033315s" podCreationTimestamp="2025-07-11 05:24:15 +0000 UTC" firstStartedPulling="2025-07-11 05:24:15.795034582 +0000 UTC m=+5.286214240" lastFinishedPulling="2025-07-11 05:24:26.854537938 +0000 UTC m=+16.345717596" observedRunningTime="2025-07-11 05:24:31.686983077 +0000 UTC m=+21.178162745" watchObservedRunningTime="2025-07-11 05:24:31.688033315 +0000 UTC m=+21.179212974"
Jul 11 05:24:32.672483 kubelet[2689]: E0711 05:24:32.672442 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:33.674168 kubelet[2689]: E0711 05:24:33.674119 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:34.045842 systemd-networkd[1473]: cilium_host: Link UP
Jul 11 05:24:34.046008 systemd-networkd[1473]: cilium_net: Link UP
Jul 11 05:24:34.046176 systemd-networkd[1473]: cilium_net: Gained carrier
Jul 11 05:24:34.046334 systemd-networkd[1473]: cilium_host: Gained carrier
Jul 11 05:24:34.077522 systemd-networkd[1473]: cilium_host: Gained IPv6LL
Jul 11 05:24:34.144468 systemd-networkd[1473]: cilium_vxlan: Link UP
Jul 11 05:24:34.144480 systemd-networkd[1473]: cilium_vxlan: Gained carrier
Jul 11 05:24:34.354462 kernel: NET: Registered PF_ALG protocol family
Jul 11 05:24:34.479687 systemd-networkd[1473]: cilium_net: Gained IPv6LL
Jul 11 05:24:35.029460 systemd-networkd[1473]: lxc_health: Link UP
Jul 11 05:24:35.030674 systemd-networkd[1473]: lxc_health: Gained carrier
Jul 11 05:24:35.512463 kernel: eth0: renamed from tmp185ed
Jul 11 05:24:35.528696 kernel: eth0: renamed from tmpe5b33
Jul 11 05:24:35.532163 systemd-networkd[1473]: lxc21c8b519992f: Link UP
Jul 11 05:24:35.535696 systemd-networkd[1473]: lxcf821e6cf4bc2: Link UP
Jul 11 05:24:35.538048 systemd-networkd[1473]: lxc21c8b519992f: Gained carrier
Jul 11 05:24:35.538814 systemd-networkd[1473]: lxcf821e6cf4bc2: Gained carrier
Jul 11 05:24:35.584811 kubelet[2689]: E0711 05:24:35.584757 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:36.136373 systemd-networkd[1473]: cilium_vxlan: Gained IPv6LL
Jul 11 05:24:36.199719 systemd-networkd[1473]: lxc_health: Gained IPv6LL
Jul 11 05:24:37.159598 systemd-networkd[1473]: lxc21c8b519992f: Gained IPv6LL
Jul 11 05:24:37.223730 systemd-networkd[1473]: lxcf821e6cf4bc2: Gained IPv6LL
Jul 11 05:24:39.111119 containerd[1570]: time="2025-07-11T05:24:39.111064529Z" level=info msg="connecting to shim 185ed202d9340cc54660e4530161f5fe4623f1c2294d88b6dbe32706635a8a95" address="unix:///run/containerd/s/4e14c230d39538d8fae38a455c01c94a529806d3784c95375048ab1fea15f61d" namespace=k8s.io protocol=ttrpc version=3
Jul 11 05:24:39.113149 containerd[1570]: time="2025-07-11T05:24:39.113114631Z" level=info msg="connecting to shim e5b33ac499ece278d323cfa7726153614051e54e79eb839846436a50fd6c535d" address="unix:///run/containerd/s/1ae199447ba72d234f2d5d877cfeb9243e9b545ed0aba8495c1161ad173bd5d3" namespace=k8s.io protocol=ttrpc version=3
Jul 11 05:24:39.143583 systemd[1]: Started cri-containerd-e5b33ac499ece278d323cfa7726153614051e54e79eb839846436a50fd6c535d.scope - libcontainer container e5b33ac499ece278d323cfa7726153614051e54e79eb839846436a50fd6c535d.
Jul 11 05:24:39.147290 systemd[1]: Started cri-containerd-185ed202d9340cc54660e4530161f5fe4623f1c2294d88b6dbe32706635a8a95.scope - libcontainer container 185ed202d9340cc54660e4530161f5fe4623f1c2294d88b6dbe32706635a8a95.
Jul 11 05:24:39.159334 systemd-resolved[1481]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 11 05:24:39.164907 systemd-resolved[1481]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 11 05:24:39.306679 containerd[1570]: time="2025-07-11T05:24:39.306617021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lqxml,Uid:67024fe5-de7c-4751-99ee-8c331db5f494,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5b33ac499ece278d323cfa7726153614051e54e79eb839846436a50fd6c535d\""
Jul 11 05:24:39.310725 kubelet[2689]: E0711 05:24:39.310682 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:39.312024 containerd[1570]: time="2025-07-11T05:24:39.311992124Z" level=info msg="CreateContainer within sandbox \"e5b33ac499ece278d323cfa7726153614051e54e79eb839846436a50fd6c535d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 11 05:24:39.351764 containerd[1570]: time="2025-07-11T05:24:39.351707272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r69s7,Uid:6d641e7c-0f93-421e-a94f-7a245108c110,Namespace:kube-system,Attempt:0,} returns sandbox id \"185ed202d9340cc54660e4530161f5fe4623f1c2294d88b6dbe32706635a8a95\""
Jul 11 05:24:39.352421 kubelet[2689]: E0711 05:24:39.352382 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:39.354027 containerd[1570]: time="2025-07-11T05:24:39.354002272Z" level=info msg="CreateContainer within sandbox \"185ed202d9340cc54660e4530161f5fe4623f1c2294d88b6dbe32706635a8a95\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 11 05:24:39.633166 containerd[1570]: time="2025-07-11T05:24:39.633109249Z" level=info msg="Container 9d30f83f1136262a1593a03baba879b334f863f926656e9b151bb988e6eb084a: CDI devices from CRI Config.CDIDevices: []"
Jul 11 05:24:39.670509 systemd[1]: Started sshd@7-10.0.0.94:22-10.0.0.1:53128.service - OpenSSH per-connection server daemon (10.0.0.1:53128).
Jul 11 05:24:39.677491 containerd[1570]: time="2025-07-11T05:24:39.677439062Z" level=info msg="Container ac2e27952502d114c505f35f5c8d463c190cf4a689fe7eb7659c5c8e78445598: CDI devices from CRI Config.CDIDevices: []"
Jul 11 05:24:39.737480 sshd[3946]: Accepted publickey for core from 10.0.0.1 port 53128 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:24:39.738978 sshd-session[3946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:24:39.742205 containerd[1570]: time="2025-07-11T05:24:39.742167194Z" level=info msg="CreateContainer within sandbox \"e5b33ac499ece278d323cfa7726153614051e54e79eb839846436a50fd6c535d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d30f83f1136262a1593a03baba879b334f863f926656e9b151bb988e6eb084a\""
Jul 11 05:24:39.742755 containerd[1570]: time="2025-07-11T05:24:39.742719611Z" level=info msg="StartContainer for \"9d30f83f1136262a1593a03baba879b334f863f926656e9b151bb988e6eb084a\""
Jul 11 05:24:39.744225 containerd[1570]: time="2025-07-11T05:24:39.744188686Z" level=info msg="connecting to shim 9d30f83f1136262a1593a03baba879b334f863f926656e9b151bb988e6eb084a" address="unix:///run/containerd/s/1ae199447ba72d234f2d5d877cfeb9243e9b545ed0aba8495c1161ad173bd5d3" protocol=ttrpc version=3
Jul 11 05:24:39.744268 systemd-logind[1546]: New session 8 of user core.
Jul 11 05:24:39.750040 containerd[1570]: time="2025-07-11T05:24:39.750006219Z" level=info msg="CreateContainer within sandbox \"185ed202d9340cc54660e4530161f5fe4623f1c2294d88b6dbe32706635a8a95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ac2e27952502d114c505f35f5c8d463c190cf4a689fe7eb7659c5c8e78445598\""
Jul 11 05:24:39.750770 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 11 05:24:39.751066 containerd[1570]: time="2025-07-11T05:24:39.750769562Z" level=info msg="StartContainer for \"ac2e27952502d114c505f35f5c8d463c190cf4a689fe7eb7659c5c8e78445598\""
Jul 11 05:24:39.751894 containerd[1570]: time="2025-07-11T05:24:39.751866361Z" level=info msg="connecting to shim ac2e27952502d114c505f35f5c8d463c190cf4a689fe7eb7659c5c8e78445598" address="unix:///run/containerd/s/4e14c230d39538d8fae38a455c01c94a529806d3784c95375048ab1fea15f61d" protocol=ttrpc version=3
Jul 11 05:24:39.784607 systemd[1]: Started cri-containerd-9d30f83f1136262a1593a03baba879b334f863f926656e9b151bb988e6eb084a.scope - libcontainer container 9d30f83f1136262a1593a03baba879b334f863f926656e9b151bb988e6eb084a.
Jul 11 05:24:39.786645 systemd[1]: Started cri-containerd-ac2e27952502d114c505f35f5c8d463c190cf4a689fe7eb7659c5c8e78445598.scope - libcontainer container ac2e27952502d114c505f35f5c8d463c190cf4a689fe7eb7659c5c8e78445598.
Jul 11 05:24:39.834863 containerd[1570]: time="2025-07-11T05:24:39.834781024Z" level=info msg="StartContainer for \"9d30f83f1136262a1593a03baba879b334f863f926656e9b151bb988e6eb084a\" returns successfully"
Jul 11 05:24:39.842588 containerd[1570]: time="2025-07-11T05:24:39.842534414Z" level=info msg="StartContainer for \"ac2e27952502d114c505f35f5c8d463c190cf4a689fe7eb7659c5c8e78445598\" returns successfully"
Jul 11 05:24:39.905630 sshd[3949]: Connection closed by 10.0.0.1 port 53128
Jul 11 05:24:39.905320 sshd-session[3946]: pam_unix(sshd:session): session closed for user core
Jul 11 05:24:39.910576 systemd[1]: sshd@7-10.0.0.94:22-10.0.0.1:53128.service: Deactivated successfully.
Jul 11 05:24:39.913096 systemd[1]: session-8.scope: Deactivated successfully.
Jul 11 05:24:39.914091 systemd-logind[1546]: Session 8 logged out. Waiting for processes to exit.
Jul 11 05:24:39.915440 systemd-logind[1546]: Removed session 8.
Jul 11 05:24:40.694337 kubelet[2689]: E0711 05:24:40.694277 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:40.698479 kubelet[2689]: E0711 05:24:40.698441 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:40.707805 kubelet[2689]: I0711 05:24:40.707711 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-r69s7" podStartSLOduration=25.707690031 podStartE2EDuration="25.707690031s" podCreationTimestamp="2025-07-11 05:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:24:40.706499781 +0000 UTC m=+30.197679440" watchObservedRunningTime="2025-07-11 05:24:40.707690031 +0000 UTC m=+30.198869689"
Jul 11 05:24:40.718815 kubelet[2689]: I0711 05:24:40.718629 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-lqxml" podStartSLOduration=25.718610269 podStartE2EDuration="25.718610269s" podCreationTimestamp="2025-07-11 05:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:24:40.718608336 +0000 UTC m=+30.209787994" watchObservedRunningTime="2025-07-11 05:24:40.718610269 +0000 UTC m=+30.209789927"
Jul 11 05:24:41.218525 kubelet[2689]: I0711 05:24:41.218479 2689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 11 05:24:41.218992 kubelet[2689]: E0711 05:24:41.218971 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:41.699504 kubelet[2689]: E0711 05:24:41.699463 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:41.700023 kubelet[2689]: E0711 05:24:41.699572 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:41.700023 kubelet[2689]: E0711 05:24:41.699778 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:42.701611 kubelet[2689]: E0711 05:24:42.701556 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:42.702099 kubelet[2689]: E0711 05:24:42.701667 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:24:44.918167 systemd[1]: Started sshd@8-10.0.0.94:22-10.0.0.1:53144.service - OpenSSH per-connection server daemon (10.0.0.1:53144).
Jul 11 05:24:44.968636 sshd[4038]: Accepted publickey for core from 10.0.0.1 port 53144 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:24:44.970200 sshd-session[4038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:24:44.974886 systemd-logind[1546]: New session 9 of user core.
Jul 11 05:24:44.985530 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 11 05:24:45.099298 sshd[4041]: Connection closed by 10.0.0.1 port 53144
Jul 11 05:24:45.099661 sshd-session[4038]: pam_unix(sshd:session): session closed for user core
Jul 11 05:24:45.103508 systemd[1]: sshd@8-10.0.0.94:22-10.0.0.1:53144.service: Deactivated successfully.
Jul 11 05:24:45.105431 systemd[1]: session-9.scope: Deactivated successfully.
Jul 11 05:24:45.106134 systemd-logind[1546]: Session 9 logged out. Waiting for processes to exit.
Jul 11 05:24:45.107247 systemd-logind[1546]: Removed session 9.
Jul 11 05:24:50.116124 systemd[1]: Started sshd@9-10.0.0.94:22-10.0.0.1:55138.service - OpenSSH per-connection server daemon (10.0.0.1:55138).
Jul 11 05:24:50.182220 sshd[4057]: Accepted publickey for core from 10.0.0.1 port 55138 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:24:50.183707 sshd-session[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:24:50.188145 systemd-logind[1546]: New session 10 of user core.
Jul 11 05:24:50.198575 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 11 05:24:50.312053 sshd[4060]: Connection closed by 10.0.0.1 port 55138
Jul 11 05:24:50.312447 sshd-session[4057]: pam_unix(sshd:session): session closed for user core
Jul 11 05:24:50.318289 systemd[1]: sshd@9-10.0.0.94:22-10.0.0.1:55138.service: Deactivated successfully.
Jul 11 05:24:50.320348 systemd[1]: session-10.scope: Deactivated successfully.
Jul 11 05:24:50.321245 systemd-logind[1546]: Session 10 logged out. Waiting for processes to exit.
Jul 11 05:24:50.322752 systemd-logind[1546]: Removed session 10.
Jul 11 05:24:55.330213 systemd[1]: Started sshd@10-10.0.0.94:22-10.0.0.1:55150.service - OpenSSH per-connection server daemon (10.0.0.1:55150).
Jul 11 05:24:55.397772 sshd[4074]: Accepted publickey for core from 10.0.0.1 port 55150 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:24:55.399660 sshd-session[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:24:55.404057 systemd-logind[1546]: New session 11 of user core.
Jul 11 05:24:55.412764 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 11 05:24:55.522953 sshd[4077]: Connection closed by 10.0.0.1 port 55150
Jul 11 05:24:55.523452 sshd-session[4074]: pam_unix(sshd:session): session closed for user core
Jul 11 05:24:55.533937 systemd[1]: sshd@10-10.0.0.94:22-10.0.0.1:55150.service: Deactivated successfully.
Jul 11 05:24:55.535797 systemd[1]: session-11.scope: Deactivated successfully.
Jul 11 05:24:55.536633 systemd-logind[1546]: Session 11 logged out. Waiting for processes to exit.
Jul 11 05:24:55.539364 systemd[1]: Started sshd@11-10.0.0.94:22-10.0.0.1:55156.service - OpenSSH per-connection server daemon (10.0.0.1:55156).
Jul 11 05:24:55.540284 systemd-logind[1546]: Removed session 11.
Jul 11 05:24:55.597903 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 55156 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:24:55.599876 sshd-session[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:24:55.608446 systemd-logind[1546]: New session 12 of user core.
Jul 11 05:24:55.616676 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 11 05:24:55.762142 sshd[4094]: Connection closed by 10.0.0.1 port 55156
Jul 11 05:24:55.763504 sshd-session[4091]: pam_unix(sshd:session): session closed for user core
Jul 11 05:24:55.773639 systemd[1]: sshd@11-10.0.0.94:22-10.0.0.1:55156.service: Deactivated successfully.
Jul 11 05:24:55.776127 systemd[1]: session-12.scope: Deactivated successfully.
Jul 11 05:24:55.777753 systemd-logind[1546]: Session 12 logged out. Waiting for processes to exit.
Jul 11 05:24:55.782665 systemd[1]: Started sshd@12-10.0.0.94:22-10.0.0.1:55170.service - OpenSSH per-connection server daemon (10.0.0.1:55170).
Jul 11 05:24:55.784606 systemd-logind[1546]: Removed session 12.
Jul 11 05:24:55.841325 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 55170 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:24:55.843101 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:24:55.847504 systemd-logind[1546]: New session 13 of user core.
Jul 11 05:24:55.862545 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 11 05:24:55.971051 sshd[4109]: Connection closed by 10.0.0.1 port 55170
Jul 11 05:24:55.971358 sshd-session[4106]: pam_unix(sshd:session): session closed for user core
Jul 11 05:24:55.975899 systemd[1]: sshd@12-10.0.0.94:22-10.0.0.1:55170.service: Deactivated successfully.
Jul 11 05:24:55.977864 systemd[1]: session-13.scope: Deactivated successfully.
Jul 11 05:24:55.978675 systemd-logind[1546]: Session 13 logged out. Waiting for processes to exit.
Jul 11 05:24:55.979812 systemd-logind[1546]: Removed session 13.
Jul 11 05:25:00.987856 systemd[1]: Started sshd@13-10.0.0.94:22-10.0.0.1:42522.service - OpenSSH per-connection server daemon (10.0.0.1:42522).
Jul 11 05:25:01.046529 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 42522 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:25:01.048377 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:25:01.052906 systemd-logind[1546]: New session 14 of user core.
Jul 11 05:25:01.061633 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 11 05:25:01.176277 sshd[4125]: Connection closed by 10.0.0.1 port 42522
Jul 11 05:25:01.176679 sshd-session[4122]: pam_unix(sshd:session): session closed for user core
Jul 11 05:25:01.182027 systemd[1]: sshd@13-10.0.0.94:22-10.0.0.1:42522.service: Deactivated successfully.
Jul 11 05:25:01.184060 systemd[1]: session-14.scope: Deactivated successfully.
Jul 11 05:25:01.185012 systemd-logind[1546]: Session 14 logged out. Waiting for processes to exit.
Jul 11 05:25:01.186487 systemd-logind[1546]: Removed session 14.
Jul 11 05:25:06.191917 systemd[1]: Started sshd@14-10.0.0.94:22-10.0.0.1:42534.service - OpenSSH per-connection server daemon (10.0.0.1:42534).
Jul 11 05:25:06.245474 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 42534 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:25:06.246784 sshd-session[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:25:06.250699 systemd-logind[1546]: New session 15 of user core.
Jul 11 05:25:06.260508 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 11 05:25:06.370785 sshd[4142]: Connection closed by 10.0.0.1 port 42534
Jul 11 05:25:06.371239 sshd-session[4139]: pam_unix(sshd:session): session closed for user core
Jul 11 05:25:06.378905 systemd[1]: sshd@14-10.0.0.94:22-10.0.0.1:42534.service: Deactivated successfully.
Jul 11 05:25:06.380845 systemd[1]: session-15.scope: Deactivated successfully.
Jul 11 05:25:06.381672 systemd-logind[1546]: Session 15 logged out. Waiting for processes to exit.
Jul 11 05:25:06.384188 systemd[1]: Started sshd@15-10.0.0.94:22-10.0.0.1:42540.service - OpenSSH per-connection server daemon (10.0.0.1:42540).
Jul 11 05:25:06.385248 systemd-logind[1546]: Removed session 15.
Jul 11 05:25:06.439976 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 42540 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:25:06.441667 sshd-session[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:25:06.446320 systemd-logind[1546]: New session 16 of user core.
Jul 11 05:25:06.454497 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 11 05:25:06.677971 sshd[4158]: Connection closed by 10.0.0.1 port 42540
Jul 11 05:25:06.678292 sshd-session[4155]: pam_unix(sshd:session): session closed for user core
Jul 11 05:25:06.693939 systemd[1]: sshd@15-10.0.0.94:22-10.0.0.1:42540.service: Deactivated successfully.
Jul 11 05:25:06.695600 systemd[1]: session-16.scope: Deactivated successfully.
Jul 11 05:25:06.696359 systemd-logind[1546]: Session 16 logged out. Waiting for processes to exit.
Jul 11 05:25:06.698604 systemd[1]: Started sshd@16-10.0.0.94:22-10.0.0.1:42550.service - OpenSSH per-connection server daemon (10.0.0.1:42550).
Jul 11 05:25:06.699619 systemd-logind[1546]: Removed session 16.
Jul 11 05:25:06.749749 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 42550 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:25:06.751114 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:25:06.755028 systemd-logind[1546]: New session 17 of user core.
Jul 11 05:25:06.766541 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 11 05:25:08.196484 sshd[4172]: Connection closed by 10.0.0.1 port 42550
Jul 11 05:25:08.197782 sshd-session[4169]: pam_unix(sshd:session): session closed for user core
Jul 11 05:25:08.205436 systemd[1]: sshd@16-10.0.0.94:22-10.0.0.1:42550.service: Deactivated successfully.
Jul 11 05:25:08.207464 systemd[1]: session-17.scope: Deactivated successfully.
Jul 11 05:25:08.208662 systemd-logind[1546]: Session 17 logged out. Waiting for processes to exit.
Jul 11 05:25:08.212477 systemd-logind[1546]: Removed session 17.
Jul 11 05:25:08.213682 systemd[1]: Started sshd@17-10.0.0.94:22-10.0.0.1:42560.service - OpenSSH per-connection server daemon (10.0.0.1:42560).
Jul 11 05:25:08.267959 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 42560 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:25:08.269150 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:25:08.274253 systemd-logind[1546]: New session 18 of user core.
Jul 11 05:25:08.287526 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 11 05:25:08.503145 sshd[4194]: Connection closed by 10.0.0.1 port 42560
Jul 11 05:25:08.503916 sshd-session[4191]: pam_unix(sshd:session): session closed for user core
Jul 11 05:25:08.515297 systemd[1]: sshd@17-10.0.0.94:22-10.0.0.1:42560.service: Deactivated successfully.
Jul 11 05:25:08.517230 systemd[1]: session-18.scope: Deactivated successfully.
Jul 11 05:25:08.519024 systemd-logind[1546]: Session 18 logged out. Waiting for processes to exit.
Jul 11 05:25:08.521359 systemd[1]: Started sshd@18-10.0.0.94:22-10.0.0.1:42564.service - OpenSSH per-connection server daemon (10.0.0.1:42564).
Jul 11 05:25:08.522045 systemd-logind[1546]: Removed session 18.
Jul 11 05:25:08.579361 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 42564 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:25:08.581017 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:25:08.585478 systemd-logind[1546]: New session 19 of user core.
Jul 11 05:25:08.593603 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 11 05:25:08.702819 sshd[4208]: Connection closed by 10.0.0.1 port 42564
Jul 11 05:25:08.703149 sshd-session[4205]: pam_unix(sshd:session): session closed for user core
Jul 11 05:25:08.708067 systemd[1]: sshd@18-10.0.0.94:22-10.0.0.1:42564.service: Deactivated successfully.
Jul 11 05:25:08.710049 systemd[1]: session-19.scope: Deactivated successfully.
Jul 11 05:25:08.710939 systemd-logind[1546]: Session 19 logged out. Waiting for processes to exit.
Jul 11 05:25:08.712069 systemd-logind[1546]: Removed session 19.
Jul 11 05:25:13.717984 systemd[1]: Started sshd@19-10.0.0.94:22-10.0.0.1:39324.service - OpenSSH per-connection server daemon (10.0.0.1:39324).
Jul 11 05:25:13.770724 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 39324 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:25:13.771946 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:25:13.776026 systemd-logind[1546]: New session 20 of user core.
Jul 11 05:25:13.784524 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 11 05:25:13.891486 sshd[4229]: Connection closed by 10.0.0.1 port 39324
Jul 11 05:25:13.891838 sshd-session[4226]: pam_unix(sshd:session): session closed for user core
Jul 11 05:25:13.896314 systemd[1]: sshd@19-10.0.0.94:22-10.0.0.1:39324.service: Deactivated successfully.
Jul 11 05:25:13.898167 systemd[1]: session-20.scope: Deactivated successfully.
Jul 11 05:25:13.898917 systemd-logind[1546]: Session 20 logged out. Waiting for processes to exit.
Jul 11 05:25:13.899887 systemd-logind[1546]: Removed session 20.
Jul 11 05:25:18.908480 systemd[1]: Started sshd@20-10.0.0.94:22-10.0.0.1:39334.service - OpenSSH per-connection server daemon (10.0.0.1:39334).
Jul 11 05:25:18.953662 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 39334 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:25:18.955267 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:25:18.959686 systemd-logind[1546]: New session 21 of user core.
Jul 11 05:25:18.970521 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 11 05:25:19.081165 sshd[4247]: Connection closed by 10.0.0.1 port 39334
Jul 11 05:25:19.081551 sshd-session[4244]: pam_unix(sshd:session): session closed for user core
Jul 11 05:25:19.086386 systemd[1]: sshd@20-10.0.0.94:22-10.0.0.1:39334.service: Deactivated successfully.
Jul 11 05:25:19.088443 systemd[1]: session-21.scope: Deactivated successfully.
Jul 11 05:25:19.089219 systemd-logind[1546]: Session 21 logged out. Waiting for processes to exit.
Jul 11 05:25:19.090549 systemd-logind[1546]: Removed session 21.
Jul 11 05:25:20.593951 kubelet[2689]: E0711 05:25:20.593898 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:25:22.593498 kubelet[2689]: E0711 05:25:22.593453 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 05:25:24.092903 systemd[1]: Started sshd@21-10.0.0.94:22-10.0.0.1:39012.service - OpenSSH per-connection server daemon (10.0.0.1:39012).
Jul 11 05:25:24.148770 sshd[4260]: Accepted publickey for core from 10.0.0.1 port 39012 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:25:24.150266 sshd-session[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:25:24.154522 systemd-logind[1546]: New session 22 of user core.
Jul 11 05:25:24.164524 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 11 05:25:24.268725 sshd[4263]: Connection closed by 10.0.0.1 port 39012
Jul 11 05:25:24.269060 sshd-session[4260]: pam_unix(sshd:session): session closed for user core
Jul 11 05:25:24.283496 systemd[1]: sshd@21-10.0.0.94:22-10.0.0.1:39012.service: Deactivated successfully.
Jul 11 05:25:24.285585 systemd[1]: session-22.scope: Deactivated successfully.
Jul 11 05:25:24.286463 systemd-logind[1546]: Session 22 logged out. Waiting for processes to exit.
Jul 11 05:25:24.289309 systemd[1]: Started sshd@22-10.0.0.94:22-10.0.0.1:39026.service - OpenSSH per-connection server daemon (10.0.0.1:39026).
Jul 11 05:25:24.290079 systemd-logind[1546]: Removed session 22.
Jul 11 05:25:24.348220 sshd[4277]: Accepted publickey for core from 10.0.0.1 port 39026 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk
Jul 11 05:25:24.349888 sshd-session[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:25:24.354537 systemd-logind[1546]: New session 23 of user core.
Jul 11 05:25:24.371544 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 11 05:25:25.696211 containerd[1570]: time="2025-07-11T05:25:25.695701729Z" level=info msg="StopContainer for \"eeb510243d6677730c6a2d06c42ee9af1f6ae76d8db3b994bc9b63cdd8501f0e\" with timeout 30 (s)"
Jul 11 05:25:25.697705 containerd[1570]: time="2025-07-11T05:25:25.697681892Z" level=info msg="Stop container \"eeb510243d6677730c6a2d06c42ee9af1f6ae76d8db3b994bc9b63cdd8501f0e\" with signal terminated"
Jul 11 05:25:25.710766 systemd[1]: cri-containerd-eeb510243d6677730c6a2d06c42ee9af1f6ae76d8db3b994bc9b63cdd8501f0e.scope: Deactivated successfully.
Jul 11 05:25:25.735826 containerd[1570]: time="2025-07-11T05:25:25.713354750Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eeb510243d6677730c6a2d06c42ee9af1f6ae76d8db3b994bc9b63cdd8501f0e\" id:\"eeb510243d6677730c6a2d06c42ee9af1f6ae76d8db3b994bc9b63cdd8501f0e\" pid:3316 exited_at:{seconds:1752211525 nanos:712692703}"
Jul 11 05:25:25.735826 containerd[1570]: time="2025-07-11T05:25:25.713456519Z" level=info msg="received exit event container_id:\"eeb510243d6677730c6a2d06c42ee9af1f6ae76d8db3b994bc9b63cdd8501f0e\" id:\"eeb510243d6677730c6a2d06c42ee9af1f6ae76d8db3b994bc9b63cdd8501f0e\" pid:3316 exited_at:{seconds:1752211525 nanos:712692703}"
Jul 11 05:25:25.735999 containerd[1570]: time="2025-07-11T05:25:25.735836157Z" level=info msg="TaskExit event in podsandbox handler container_id:\"738bb7b78cb8917259fa1034797c5f487bdba29f01123e8f04a41d174f210715\" id:\"c4edfeabd480aee87c70a9754e97bae0d88336feb23256b27d0006a2a987005d\" pid:4307 exited_at:{seconds:1752211525 nanos:722413151}"
Jul 11 05:25:25.735999 containerd[1570]: time="2025-07-11T05:25:25.734761724Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 11 05:25:25.735999 containerd[1570]: time="2025-07-11T05:25:25.724257282Z" level=info msg="StopContainer for \"738bb7b78cb8917259fa1034797c5f487bdba29f01123e8f04a41d174f210715\" with timeout 2 (s)"
Jul 11 05:25:25.736184 containerd[1570]: time="2025-07-11T05:25:25.736148366Z" level=info msg="Stop container \"738bb7b78cb8917259fa1034797c5f487bdba29f01123e8f04a41d174f210715\" with signal terminated"
Jul 11 05:25:25.744606 systemd-networkd[1473]: lxc_health: Link DOWN
Jul 11 05:25:25.744616 systemd-networkd[1473]: lxc_health: Lost carrier
Jul 11 05:25:25.758625 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eeb510243d6677730c6a2d06c42ee9af1f6ae76d8db3b994bc9b63cdd8501f0e-rootfs.mount: Deactivated successfully.
Jul 11 05:25:25.760351 systemd[1]: cri-containerd-738bb7b78cb8917259fa1034797c5f487bdba29f01123e8f04a41d174f210715.scope: Deactivated successfully.
Jul 11 05:25:25.760755 systemd[1]: cri-containerd-738bb7b78cb8917259fa1034797c5f487bdba29f01123e8f04a41d174f210715.scope: Consumed 6.953s CPU time, 127.3M memory peak, 276K read from disk, 13.3M written to disk.
Jul 11 05:25:25.761572 containerd[1570]: time="2025-07-11T05:25:25.761516928Z" level=info msg="received exit event container_id:\"738bb7b78cb8917259fa1034797c5f487bdba29f01123e8f04a41d174f210715\" id:\"738bb7b78cb8917259fa1034797c5f487bdba29f01123e8f04a41d174f210715\" pid:3352 exited_at:{seconds:1752211525 nanos:761256615}"
Jul 11 05:25:25.761737 containerd[1570]: time="2025-07-11T05:25:25.761657398Z" level=info msg="TaskExit event in podsandbox handler container_id:\"738bb7b78cb8917259fa1034797c5f487bdba29f01123e8f04a41d174f210715\" id:\"738bb7b78cb8917259fa1034797c5f487bdba29f01123e8f04a41d174f210715\" pid:3352 exited_at:{seconds:1752211525 nanos:761256615}"
Jul 11 05:25:25.780963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-738bb7b78cb8917259fa1034797c5f487bdba29f01123e8f04a41d174f210715-rootfs.mount: Deactivated successfully.
Jul 11 05:25:25.785200 containerd[1570]: time="2025-07-11T05:25:25.785158536Z" level=info msg="StopContainer for \"eeb510243d6677730c6a2d06c42ee9af1f6ae76d8db3b994bc9b63cdd8501f0e\" returns successfully"
Jul 11 05:25:25.786950 containerd[1570]: time="2025-07-11T05:25:25.786918641Z" level=info msg="StopContainer for \"738bb7b78cb8917259fa1034797c5f487bdba29f01123e8f04a41d174f210715\" returns successfully"
Jul 11 05:25:25.787722 containerd[1570]: time="2025-07-11T05:25:25.787697586Z" level=info msg="StopPodSandbox for \"484647bdf4e1c2c0f276653c45115b23413ee8c185a288344a941ebdeebaa62b\""
Jul 11 05:25:25.788355 containerd[1570]: time="2025-07-11T05:25:25.788308068Z" level=info msg="StopPodSandbox for \"999e4bb15bf9af11f43188211a775d192c870a2eb40c442674dac02505faaf5d\""
Jul 11 05:25:25.792177 containerd[1570]: time="2025-07-11T05:25:25.792127331Z" level=info msg="Container to stop \"8754bf4f91923c14c58f5f5316d210b202ad080cf34abe688f4e29eb108015f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 05:25:25.792177 containerd[1570]: time="2025-07-11T05:25:25.792153550Z" level=info msg="Container to stop \"738bb7b78cb8917259fa1034797c5f487bdba29f01123e8f04a41d174f210715\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 05:25:25.792177 containerd[1570]: time="2025-07-11T05:25:25.792172224Z" level=info msg="Container to stop \"e86875f6a09e780d9265156ad81f383771d739b6f8d9b5f45c57cece1bca9f03\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 05:25:25.792297 containerd[1570]: time="2025-07-11T05:25:25.792183666Z" level=info msg="Container to stop \"df36638937db709930c27c48c8ca45e8c118ce0f311fa8c143865b5cf37e66b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 05:25:25.792297 containerd[1570]: time="2025-07-11T05:25:25.792194847Z" level=info msg="Container to stop \"72a423c9b406ffb84a8ee246881bd1e7873b23cafbac01074867db558679097f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 05:25:25.793342 containerd[1570]: time="2025-07-11T05:25:25.793305046Z" level=info msg="Container to stop \"eeb510243d6677730c6a2d06c42ee9af1f6ae76d8db3b994bc9b63cdd8501f0e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 05:25:25.799550 systemd[1]: cri-containerd-484647bdf4e1c2c0f276653c45115b23413ee8c185a288344a941ebdeebaa62b.scope: Deactivated successfully.
Jul 11 05:25:25.800823 systemd[1]: cri-containerd-999e4bb15bf9af11f43188211a775d192c870a2eb40c442674dac02505faaf5d.scope: Deactivated successfully.
Jul 11 05:25:25.801377 containerd[1570]: time="2025-07-11T05:25:25.801317096Z" level=info msg="TaskExit event in podsandbox handler container_id:\"484647bdf4e1c2c0f276653c45115b23413ee8c185a288344a941ebdeebaa62b\" id:\"484647bdf4e1c2c0f276653c45115b23413ee8c185a288344a941ebdeebaa62b\" pid:2841 exit_status:137 exited_at:{seconds:1752211525 nanos:800291363}"
Jul 11 05:25:25.828985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-999e4bb15bf9af11f43188211a775d192c870a2eb40c442674dac02505faaf5d-rootfs.mount: Deactivated successfully.
Jul 11 05:25:25.829112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-484647bdf4e1c2c0f276653c45115b23413ee8c185a288344a941ebdeebaa62b-rootfs.mount: Deactivated successfully.
Jul 11 05:25:25.833598 containerd[1570]: time="2025-07-11T05:25:25.833541864Z" level=info msg="shim disconnected" id=484647bdf4e1c2c0f276653c45115b23413ee8c185a288344a941ebdeebaa62b namespace=k8s.io
Jul 11 05:25:25.833708 containerd[1570]: time="2025-07-11T05:25:25.833651578Z" level=warning msg="cleaning up after shim disconnected" id=484647bdf4e1c2c0f276653c45115b23413ee8c185a288344a941ebdeebaa62b namespace=k8s.io
Jul 11 05:25:25.846322 containerd[1570]: time="2025-07-11T05:25:25.833666254Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 05:25:25.846424 containerd[1570]: time="2025-07-11T05:25:25.834536899Z" level=info msg="shim disconnected" id=999e4bb15bf9af11f43188211a775d192c870a2eb40c442674dac02505faaf5d namespace=k8s.io
Jul 11 05:25:25.846424 containerd[1570]: time="2025-07-11T05:25:25.846380275Z" level=warning msg="cleaning up after shim disconnected" id=999e4bb15bf9af11f43188211a775d192c870a2eb40c442674dac02505faaf5d namespace=k8s.io
Jul 11 05:25:25.846488 containerd[1570]: time="2025-07-11T05:25:25.846406204Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 05:25:25.872612 containerd[1570]: time="2025-07-11T05:25:25.872546857Z" level=info msg="TaskExit event in podsandbox handler container_id:\"999e4bb15bf9af11f43188211a775d192c870a2eb40c442674dac02505faaf5d\" id:\"999e4bb15bf9af11f43188211a775d192c870a2eb40c442674dac02505faaf5d\" pid:2916 exit_status:137 exited_at:{seconds:1752211525 nanos:804630351}"
Jul 11 05:25:25.874359 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-999e4bb15bf9af11f43188211a775d192c870a2eb40c442674dac02505faaf5d-shm.mount: Deactivated successfully.
Jul 11 05:25:25.874489 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-484647bdf4e1c2c0f276653c45115b23413ee8c185a288344a941ebdeebaa62b-shm.mount: Deactivated successfully.
Jul 11 05:25:25.875675 containerd[1570]: time="2025-07-11T05:25:25.874916151Z" level=info msg="TearDown network for sandbox \"999e4bb15bf9af11f43188211a775d192c870a2eb40c442674dac02505faaf5d\" successfully"
Jul 11 05:25:25.875675 containerd[1570]: time="2025-07-11T05:25:25.875092298Z" level=info msg="StopPodSandbox for \"999e4bb15bf9af11f43188211a775d192c870a2eb40c442674dac02505faaf5d\" returns successfully"
Jul 11 05:25:25.882710 containerd[1570]: time="2025-07-11T05:25:25.882470773Z" level=info msg="received exit event sandbox_id:\"999e4bb15bf9af11f43188211a775d192c870a2eb40c442674dac02505faaf5d\" exit_status:137 exited_at:{seconds:1752211525 nanos:804630351}"
Jul 11 05:25:25.882851 containerd[1570]: time="2025-07-11T05:25:25.882814360Z" level=info msg="received exit event sandbox_id:\"484647bdf4e1c2c0f276653c45115b23413ee8c185a288344a941ebdeebaa62b\" exit_status:137 exited_at:{seconds:1752211525 nanos:800291363}"
Jul 11 05:25:25.884741 containerd[1570]: time="2025-07-11T05:25:25.884712481Z" level=info msg="TearDown network for sandbox \"484647bdf4e1c2c0f276653c45115b23413ee8c185a288344a941ebdeebaa62b\" successfully"
Jul 11 05:25:25.884741 containerd[1570]: time="2025-07-11T05:25:25.884740062Z" level=info msg="StopPodSandbox for \"484647bdf4e1c2c0f276653c45115b23413ee8c185a288344a941ebdeebaa62b\" returns successfully"
Jul 11 05:25:26.000546 kubelet[2689]: I0711 05:25:25.999978 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-cilium-run\") pod \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") "
Jul 11 05:25:26.000546 kubelet[2689]: I0711 05:25:26.000028 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8phd\" (UniqueName: \"kubernetes.io/projected/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-kube-api-access-q8phd\") pod \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") "
Jul 11 05:25:26.000546 kubelet[2689]: I0711 05:25:26.000046 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-etc-cni-netd\") pod \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") "
Jul 11 05:25:26.000546 kubelet[2689]: I0711 05:25:26.000063 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b2eaae6-b54a-4e1b-857d-10bc190f4db7-cilium-config-path\") pod \"1b2eaae6-b54a-4e1b-857d-10bc190f4db7\" (UID: \"1b2eaae6-b54a-4e1b-857d-10bc190f4db7\") "
Jul 11 05:25:26.000546 kubelet[2689]: I0711 05:25:26.000076 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-cni-path\") pod \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") "
Jul 11 05:25:26.000546 kubelet[2689]: I0711 05:25:26.000088 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-lib-modules\") pod \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") "
Jul 11 05:25:26.001913 kubelet[2689]: I0711 05:25:26.000103 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-bpf-maps\") pod \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") "
Jul 11 05:25:26.001913 kubelet[2689]: I0711 05:25:26.000115 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-xtables-lock\") pod \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") "
Jul 11 05:25:26.001913 kubelet[2689]: I0711 05:25:26.000131 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-clustermesh-secrets\") pod \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") "
Jul 11 05:25:26.001913 kubelet[2689]: I0711 05:25:26.000145 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-hubble-tls\") pod \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") "
Jul 11 05:25:26.001913 kubelet[2689]: I0711 05:25:26.000162 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-host-proc-sys-net\") pod \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") "
Jul 11 05:25:26.001913 kubelet[2689]: I0711 05:25:26.000151 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87" (UID: "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 05:25:26.002708 kubelet[2689]: I0711 05:25:26.000178 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt8r2\" (UniqueName: \"kubernetes.io/projected/1b2eaae6-b54a-4e1b-857d-10bc190f4db7-kube-api-access-gt8r2\") pod \"1b2eaae6-b54a-4e1b-857d-10bc190f4db7\" (UID: \"1b2eaae6-b54a-4e1b-857d-10bc190f4db7\") "
Jul 11 05:25:26.002708 kubelet[2689]: I0711 05:25:26.000185 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87" (UID: "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 05:25:26.002708 kubelet[2689]: I0711 05:25:26.000143 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87" (UID: "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 05:25:26.003260 kubelet[2689]: I0711 05:25:26.003241 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87" (UID: "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 05:25:26.003438 kubelet[2689]: I0711 05:25:26.003421 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87" (UID: "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 05:25:26.003722 kubelet[2689]: I0711 05:25:26.003493 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-cni-path" (OuterVolumeSpecName: "cni-path") pod "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87" (UID: "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 05:25:26.003722 kubelet[2689]: I0711 05:25:26.003511 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87" (UID: "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 05:25:26.003722 kubelet[2689]: I0711 05:25:26.000194 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-cilium-config-path\") pod \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") "
Jul 11 05:25:26.003722 kubelet[2689]: I0711 05:25:26.003539 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-hostproc\") pod \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") "
Jul 11 05:25:26.003722 kubelet[2689]: I0711 05:25:26.003558 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-host-proc-sys-kernel\") pod \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") "
Jul 11 05:25:26.003866 kubelet[2689]: I0711 05:25:26.003573 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-cilium-cgroup\") pod \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\" (UID: \"d9030cb4-58cf-4b84-b64a-69e9ba0e2a87\") "
Jul 11 05:25:26.003866 kubelet[2689]: I0711 05:25:26.003603 2689 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 11 05:25:26.003866 kubelet[2689]: I0711 05:25:26.003612 2689 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 11 05:25:26.003866 kubelet[2689]: I0711 05:25:26.003623 2689 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 11 05:25:26.003866 kubelet[2689]: I0711 05:25:26.003643 2689 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 11 05:25:26.003866 kubelet[2689]: I0711 05:25:26.003650 2689 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 11 05:25:26.003866 kubelet[2689]: I0711 05:25:26.003658 2689 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 11 05:25:26.003866 kubelet[2689]: I0711 05:25:26.003666 2689 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 11 05:25:26.004034 kubelet[2689]: I0711 05:25:26.003618 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b2eaae6-b54a-4e1b-857d-10bc190f4db7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1b2eaae6-b54a-4e1b-857d-10bc190f4db7" (UID: "1b2eaae6-b54a-4e1b-857d-10bc190f4db7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 11 05:25:26.004034 kubelet[2689]: I0711 05:25:26.003649 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87" (UID: "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 05:25:26.004034 kubelet[2689]: I0711 05:25:26.003664 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-hostproc" (OuterVolumeSpecName: "hostproc") pod "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87" (UID: "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 05:25:26.004034 kubelet[2689]: I0711 05:25:26.003706 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87" (UID: "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 05:25:26.004034 kubelet[2689]: I0711 05:25:26.003766 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87" (UID: "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 11 05:25:26.005121 kubelet[2689]: I0711 05:25:26.005086 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87" (UID: "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 11 05:25:26.005543 kubelet[2689]: I0711 05:25:26.005517 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-kube-api-access-q8phd" (OuterVolumeSpecName: "kube-api-access-q8phd") pod "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87" (UID: "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87"). InnerVolumeSpecName "kube-api-access-q8phd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 11 05:25:26.007261 kubelet[2689]: I0711 05:25:26.007193 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b2eaae6-b54a-4e1b-857d-10bc190f4db7-kube-api-access-gt8r2" (OuterVolumeSpecName: "kube-api-access-gt8r2") pod "1b2eaae6-b54a-4e1b-857d-10bc190f4db7" (UID: "1b2eaae6-b54a-4e1b-857d-10bc190f4db7"). InnerVolumeSpecName "kube-api-access-gt8r2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 11 05:25:26.008166 kubelet[2689]: I0711 05:25:26.008127 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87" (UID: "d9030cb4-58cf-4b84-b64a-69e9ba0e2a87"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 11 05:25:26.104291 kubelet[2689]: I0711 05:25:26.104236 2689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt8r2\" (UniqueName: \"kubernetes.io/projected/1b2eaae6-b54a-4e1b-857d-10bc190f4db7-kube-api-access-gt8r2\") on node \"localhost\" DevicePath \"\""
Jul 11 05:25:26.104291 kubelet[2689]: I0711 05:25:26.104273 2689 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 11 05:25:26.104291 kubelet[2689]: I0711 05:25:26.104283 2689 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 11 05:25:26.104291 kubelet[2689]: I0711 05:25:26.104293 2689 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 11 05:25:26.104291 kubelet[2689]: I0711 05:25:26.104301 2689 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 11 05:25:26.104291 kubelet[2689]: I0711 05:25:26.104310 2689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8phd\" (UniqueName: \"kubernetes.io/projected/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-kube-api-access-q8phd\") on node \"localhost\" DevicePath \"\""
Jul 11 05:25:26.104586 kubelet[2689]: I0711 05:25:26.104319 2689 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b2eaae6-b54a-4e1b-857d-10bc190f4db7-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 11 05:25:26.104586 kubelet[2689]: I0711 05:25:26.104327 2689 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 11 05:25:26.104586 kubelet[2689]: I0711 05:25:26.104334 2689 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 11 05:25:26.606998 systemd[1]: Removed slice kubepods-burstable-podd9030cb4_58cf_4b84_b64a_69e9ba0e2a87.slice - libcontainer container kubepods-burstable-podd9030cb4_58cf_4b84_b64a_69e9ba0e2a87.slice.
Jul 11 05:25:26.607119 systemd[1]: kubepods-burstable-podd9030cb4_58cf_4b84_b64a_69e9ba0e2a87.slice: Consumed 7.061s CPU time, 127.6M memory peak, 276K read from disk, 13.3M written to disk.
Jul 11 05:25:26.608593 systemd[1]: Removed slice kubepods-besteffort-pod1b2eaae6_b54a_4e1b_857d_10bc190f4db7.slice - libcontainer container kubepods-besteffort-pod1b2eaae6_b54a_4e1b_857d_10bc190f4db7.slice.
Jul 11 05:25:26.758866 systemd[1]: var-lib-kubelet-pods-1b2eaae6\x2db54a\x2d4e1b\x2d857d\x2d10bc190f4db7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgt8r2.mount: Deactivated successfully.
Jul 11 05:25:26.759014 systemd[1]: var-lib-kubelet-pods-d9030cb4\x2d58cf\x2d4b84\x2db64a\x2d69e9ba0e2a87-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq8phd.mount: Deactivated successfully.
Jul 11 05:25:26.759110 systemd[1]: var-lib-kubelet-pods-d9030cb4\x2d58cf\x2d4b84\x2db64a\x2d69e9ba0e2a87-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 11 05:25:26.759211 systemd[1]: var-lib-kubelet-pods-d9030cb4\x2d58cf\x2d4b84\x2db64a\x2d69e9ba0e2a87-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 11 05:25:26.818731 kubelet[2689]: I0711 05:25:26.818537 2689 scope.go:117] "RemoveContainer" containerID="eeb510243d6677730c6a2d06c42ee9af1f6ae76d8db3b994bc9b63cdd8501f0e"
Jul 11 05:25:26.820692 containerd[1570]: time="2025-07-11T05:25:26.820640450Z" level=info msg="RemoveContainer for \"eeb510243d6677730c6a2d06c42ee9af1f6ae76d8db3b994bc9b63cdd8501f0e\""
Jul 11 05:25:26.830034 containerd[1570]: time="2025-07-11T05:25:26.829989066Z" level=info msg="RemoveContainer for \"eeb510243d6677730c6a2d06c42ee9af1f6ae76d8db3b994bc9b63cdd8501f0e\" returns successfully"
Jul 11 05:25:26.839183 kubelet[2689]: I0711 05:25:26.839157 2689 scope.go:117] "RemoveContainer" containerID="738bb7b78cb8917259fa1034797c5f487bdba29f01123e8f04a41d174f210715"
Jul 11 05:25:26.841434 containerd[1570]: time="2025-07-11T05:25:26.841230783Z" level=info msg="RemoveContainer for \"738bb7b78cb8917259fa1034797c5f487bdba29f01123e8f04a41d174f210715\""
Jul 11 05:25:26.847684 containerd[1570]: time="2025-07-11T05:25:26.847645234Z" level=info msg="RemoveContainer for \"738bb7b78cb8917259fa1034797c5f487bdba29f01123e8f04a41d174f210715\" returns successfully"
Jul 11 05:25:26.847943 kubelet[2689]: I0711 05:25:26.847892 2689 scope.go:117] "RemoveContainer" containerID="8754bf4f91923c14c58f5f5316d210b202ad080cf34abe688f4e29eb108015f9"
Jul 11 05:25:26.849614 containerd[1570]: time="2025-07-11T05:25:26.849516246Z" level=info msg="RemoveContainer for \"8754bf4f91923c14c58f5f5316d210b202ad080cf34abe688f4e29eb108015f9\""
Jul 11 05:25:26.862683 containerd[1570]: time="2025-07-11T05:25:26.862527366Z" level=info msg="RemoveContainer for \"8754bf4f91923c14c58f5f5316d210b202ad080cf34abe688f4e29eb108015f9\" returns successfully"
Jul 11 05:25:26.862840 kubelet[2689]: I0711 05:25:26.862796 2689 scope.go:117] "RemoveContainer" containerID="df36638937db709930c27c48c8ca45e8c118ce0f311fa8c143865b5cf37e66b3"
Jul 11 05:25:26.865472 containerd[1570]: time="2025-07-11T05:25:26.865440401Z" level=info msg="RemoveContainer for \"df36638937db709930c27c48c8ca45e8c118ce0f311fa8c143865b5cf37e66b3\""
Jul 11 05:25:26.869756 containerd[1570]: time="2025-07-11T05:25:26.869713959Z" level=info msg="RemoveContainer for \"df36638937db709930c27c48c8ca45e8c118ce0f311fa8c143865b5cf37e66b3\" returns successfully"
Jul 11 05:25:26.870002 kubelet[2689]: I0711 05:25:26.869960 2689 scope.go:117] "RemoveContainer" containerID="72a423c9b406ffb84a8ee246881bd1e7873b23cafbac01074867db558679097f"
Jul 11 05:25:26.871439 containerd[1570]: time="2025-07-11T05:25:26.871411950Z" level=info msg="RemoveContainer for \"72a423c9b406ffb84a8ee246881bd1e7873b23cafbac01074867db558679097f\""
Jul 11 05:25:26.874995 containerd[1570]: time="2025-07-11T05:25:26.874955655Z" level=info msg="RemoveContainer for \"72a423c9b406ffb84a8ee246881bd1e7873b23cafbac01074867db558679097f\" returns successfully"
Jul 11 05:25:26.875168 kubelet[2689]: I0711 05:25:26.875139 2689 scope.go:117] "RemoveContainer" containerID="e86875f6a09e780d9265156ad81f383771d739b6f8d9b5f45c57cece1bca9f03"
Jul 11 05:25:26.876432 containerd[1570]: time="2025-07-11T05:25:26.876381340Z" level=info msg="RemoveContainer for \"e86875f6a09e780d9265156ad81f383771d739b6f8d9b5f45c57cece1bca9f03\""
Jul 11 05:25:26.879703 containerd[1570]: time="2025-07-11T05:25:26.879668559Z" level=info msg="RemoveContainer for \"e86875f6a09e780d9265156ad81f383771d739b6f8d9b5f45c57cece1bca9f03\" returns successfully"
Jul 11 05:25:27.662280 sshd[4280]: Connection closed by 10.0.0.1 port 39026
Jul 11 05:25:27.662802 sshd-session[4277]: pam_unix(sshd:session): session closed for user core
Jul 11 05:25:27.673198 systemd[1]: sshd@22-10.0.0.94:22-10.0.0.1:39026.service: Deactivated successfully.
Jul 11 05:25:27.675009 systemd[1]: session-23.scope: Deactivated successfully.
Jul 11 05:25:27.675760 systemd-logind[1546]: Session 23 logged out. Waiting for processes to exit.
Jul 11 05:25:27.678441 systemd[1]: Started sshd@23-10.0.0.94:22-10.0.0.1:39036.service - OpenSSH per-connection server daemon (10.0.0.1:39036). Jul 11 05:25:27.679263 systemd-logind[1546]: Removed session 23. Jul 11 05:25:27.736931 sshd[4431]: Accepted publickey for core from 10.0.0.1 port 39036 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk Jul 11 05:25:27.738154 sshd-session[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:25:27.742566 systemd-logind[1546]: New session 24 of user core. Jul 11 05:25:27.752729 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 11 05:25:28.134929 sshd[4434]: Connection closed by 10.0.0.1 port 39036 Jul 11 05:25:28.136373 sshd-session[4431]: pam_unix(sshd:session): session closed for user core Jul 11 05:25:28.151541 systemd[1]: sshd@23-10.0.0.94:22-10.0.0.1:39036.service: Deactivated successfully. Jul 11 05:25:28.153930 systemd[1]: session-24.scope: Deactivated successfully. Jul 11 05:25:28.156518 systemd-logind[1546]: Session 24 logged out. Waiting for processes to exit. 
Jul 11 05:25:28.158357 kubelet[2689]: E0711 05:25:28.156956 2689 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d9030cb4-58cf-4b84-b64a-69e9ba0e2a87" containerName="apply-sysctl-overwrites" Jul 11 05:25:28.158357 kubelet[2689]: E0711 05:25:28.156980 2689 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d9030cb4-58cf-4b84-b64a-69e9ba0e2a87" containerName="clean-cilium-state" Jul 11 05:25:28.158357 kubelet[2689]: E0711 05:25:28.156989 2689 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d9030cb4-58cf-4b84-b64a-69e9ba0e2a87" containerName="mount-cgroup" Jul 11 05:25:28.158357 kubelet[2689]: E0711 05:25:28.156997 2689 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d9030cb4-58cf-4b84-b64a-69e9ba0e2a87" containerName="mount-bpf-fs" Jul 11 05:25:28.158357 kubelet[2689]: E0711 05:25:28.157003 2689 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1b2eaae6-b54a-4e1b-857d-10bc190f4db7" containerName="cilium-operator" Jul 11 05:25:28.158357 kubelet[2689]: E0711 05:25:28.157011 2689 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d9030cb4-58cf-4b84-b64a-69e9ba0e2a87" containerName="cilium-agent" Jul 11 05:25:28.158357 kubelet[2689]: I0711 05:25:28.157037 2689 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9030cb4-58cf-4b84-b64a-69e9ba0e2a87" containerName="cilium-agent" Jul 11 05:25:28.158357 kubelet[2689]: I0711 05:25:28.157045 2689 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b2eaae6-b54a-4e1b-857d-10bc190f4db7" containerName="cilium-operator" Jul 11 05:25:28.163649 systemd[1]: Started sshd@24-10.0.0.94:22-10.0.0.1:39048.service - OpenSSH per-connection server daemon (10.0.0.1:39048). Jul 11 05:25:28.168292 systemd-logind[1546]: Removed session 24. 
Jul 11 05:25:28.180900 systemd[1]: Created slice kubepods-burstable-poddfb7b87d_816d_4928_a152_19707f3a6696.slice - libcontainer container kubepods-burstable-poddfb7b87d_816d_4928_a152_19707f3a6696.slice. Jul 11 05:25:28.217706 kubelet[2689]: I0711 05:25:28.217669 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfb7b87d-816d-4928-a152-19707f3a6696-xtables-lock\") pod \"cilium-bbw84\" (UID: \"dfb7b87d-816d-4928-a152-19707f3a6696\") " pod="kube-system/cilium-bbw84" Jul 11 05:25:28.217952 kubelet[2689]: I0711 05:25:28.217905 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dfb7b87d-816d-4928-a152-19707f3a6696-host-proc-sys-kernel\") pod \"cilium-bbw84\" (UID: \"dfb7b87d-816d-4928-a152-19707f3a6696\") " pod="kube-system/cilium-bbw84" Jul 11 05:25:28.217952 kubelet[2689]: I0711 05:25:28.217926 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dfb7b87d-816d-4928-a152-19707f3a6696-bpf-maps\") pod \"cilium-bbw84\" (UID: \"dfb7b87d-816d-4928-a152-19707f3a6696\") " pod="kube-system/cilium-bbw84" Jul 11 05:25:28.218094 kubelet[2689]: I0711 05:25:28.218052 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dfb7b87d-816d-4928-a152-19707f3a6696-hostproc\") pod \"cilium-bbw84\" (UID: \"dfb7b87d-816d-4928-a152-19707f3a6696\") " pod="kube-system/cilium-bbw84" Jul 11 05:25:28.218094 kubelet[2689]: I0711 05:25:28.218069 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dfb7b87d-816d-4928-a152-19707f3a6696-etc-cni-netd\") pod \"cilium-bbw84\" (UID: 
\"dfb7b87d-816d-4928-a152-19707f3a6696\") " pod="kube-system/cilium-bbw84" Jul 11 05:25:28.218200 kubelet[2689]: I0711 05:25:28.218187 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dfb7b87d-816d-4928-a152-19707f3a6696-cilium-config-path\") pod \"cilium-bbw84\" (UID: \"dfb7b87d-816d-4928-a152-19707f3a6696\") " pod="kube-system/cilium-bbw84" Jul 11 05:25:28.218343 kubelet[2689]: I0711 05:25:28.218268 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dfb7b87d-816d-4928-a152-19707f3a6696-cni-path\") pod \"cilium-bbw84\" (UID: \"dfb7b87d-816d-4928-a152-19707f3a6696\") " pod="kube-system/cilium-bbw84" Jul 11 05:25:28.218343 kubelet[2689]: I0711 05:25:28.218283 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dfb7b87d-816d-4928-a152-19707f3a6696-cilium-run\") pod \"cilium-bbw84\" (UID: \"dfb7b87d-816d-4928-a152-19707f3a6696\") " pod="kube-system/cilium-bbw84" Jul 11 05:25:28.218343 kubelet[2689]: I0711 05:25:28.218296 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dfb7b87d-816d-4928-a152-19707f3a6696-cilium-cgroup\") pod \"cilium-bbw84\" (UID: \"dfb7b87d-816d-4928-a152-19707f3a6696\") " pod="kube-system/cilium-bbw84" Jul 11 05:25:28.218343 kubelet[2689]: I0711 05:25:28.218308 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfb7b87d-816d-4928-a152-19707f3a6696-lib-modules\") pod \"cilium-bbw84\" (UID: \"dfb7b87d-816d-4928-a152-19707f3a6696\") " pod="kube-system/cilium-bbw84" Jul 11 05:25:28.218577 kubelet[2689]: I0711 05:25:28.218517 2689 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dfb7b87d-816d-4928-a152-19707f3a6696-clustermesh-secrets\") pod \"cilium-bbw84\" (UID: \"dfb7b87d-816d-4928-a152-19707f3a6696\") " pod="kube-system/cilium-bbw84" Jul 11 05:25:28.218577 kubelet[2689]: I0711 05:25:28.218545 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dfb7b87d-816d-4928-a152-19707f3a6696-cilium-ipsec-secrets\") pod \"cilium-bbw84\" (UID: \"dfb7b87d-816d-4928-a152-19707f3a6696\") " pod="kube-system/cilium-bbw84" Jul 11 05:25:28.218577 kubelet[2689]: I0711 05:25:28.218558 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dfb7b87d-816d-4928-a152-19707f3a6696-host-proc-sys-net\") pod \"cilium-bbw84\" (UID: \"dfb7b87d-816d-4928-a152-19707f3a6696\") " pod="kube-system/cilium-bbw84" Jul 11 05:25:28.218746 kubelet[2689]: I0711 05:25:28.218688 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq8pm\" (UniqueName: \"kubernetes.io/projected/dfb7b87d-816d-4928-a152-19707f3a6696-kube-api-access-rq8pm\") pod \"cilium-bbw84\" (UID: \"dfb7b87d-816d-4928-a152-19707f3a6696\") " pod="kube-system/cilium-bbw84" Jul 11 05:25:28.218746 kubelet[2689]: I0711 05:25:28.218708 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dfb7b87d-816d-4928-a152-19707f3a6696-hubble-tls\") pod \"cilium-bbw84\" (UID: \"dfb7b87d-816d-4928-a152-19707f3a6696\") " pod="kube-system/cilium-bbw84" Jul 11 05:25:28.241085 sshd[4446]: Accepted publickey for core from 10.0.0.1 port 39048 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk Jul 11 
05:25:28.242791 sshd-session[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:25:28.247234 systemd-logind[1546]: New session 25 of user core. Jul 11 05:25:28.254523 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 11 05:25:28.304852 sshd[4449]: Connection closed by 10.0.0.1 port 39048 Jul 11 05:25:28.305297 sshd-session[4446]: pam_unix(sshd:session): session closed for user core Jul 11 05:25:28.319174 systemd[1]: sshd@24-10.0.0.94:22-10.0.0.1:39048.service: Deactivated successfully. Jul 11 05:25:28.321502 systemd[1]: session-25.scope: Deactivated successfully. Jul 11 05:25:28.324584 systemd-logind[1546]: Session 25 logged out. Waiting for processes to exit. Jul 11 05:25:28.337650 systemd[1]: Started sshd@25-10.0.0.94:22-10.0.0.1:39050.service - OpenSSH per-connection server daemon (10.0.0.1:39050). Jul 11 05:25:28.338459 systemd-logind[1546]: Removed session 25. Jul 11 05:25:28.400226 sshd[4461]: Accepted publickey for core from 10.0.0.1 port 39050 ssh2: RSA SHA256:UJKjSuQKs73ENqxtXNcdIy1aP5u3CenlaTOdjvk0Nvk Jul 11 05:25:28.401803 sshd-session[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:25:28.406236 systemd-logind[1546]: New session 26 of user core. Jul 11 05:25:28.416529 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jul 11 05:25:28.487307 kubelet[2689]: E0711 05:25:28.487259 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:25:28.488011 containerd[1570]: time="2025-07-11T05:25:28.487964324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bbw84,Uid:dfb7b87d-816d-4928-a152-19707f3a6696,Namespace:kube-system,Attempt:0,}" Jul 11 05:25:28.503806 containerd[1570]: time="2025-07-11T05:25:28.503764979Z" level=info msg="connecting to shim f0374558d322e2c4d7008d22faeb5975411b3a8f2cb61433000cb556929c2841" address="unix:///run/containerd/s/73aad234576622ddf7247f0e3e337017f60e14d14eb5fc2d33e4346c4f37d9c2" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:25:28.527573 systemd[1]: Started cri-containerd-f0374558d322e2c4d7008d22faeb5975411b3a8f2cb61433000cb556929c2841.scope - libcontainer container f0374558d322e2c4d7008d22faeb5975411b3a8f2cb61433000cb556929c2841. 
Jul 11 05:25:28.553009 containerd[1570]: time="2025-07-11T05:25:28.552961607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bbw84,Uid:dfb7b87d-816d-4928-a152-19707f3a6696,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0374558d322e2c4d7008d22faeb5975411b3a8f2cb61433000cb556929c2841\"" Jul 11 05:25:28.553935 kubelet[2689]: E0711 05:25:28.553910 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:25:28.556118 containerd[1570]: time="2025-07-11T05:25:28.556087310Z" level=info msg="CreateContainer within sandbox \"f0374558d322e2c4d7008d22faeb5975411b3a8f2cb61433000cb556929c2841\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 11 05:25:28.563292 containerd[1570]: time="2025-07-11T05:25:28.563255029Z" level=info msg="Container d481ec98544402a9cf598d283cdfba15c7303430ffe236fd866356cc37cf07cf: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:25:28.570972 containerd[1570]: time="2025-07-11T05:25:28.570937091Z" level=info msg="CreateContainer within sandbox \"f0374558d322e2c4d7008d22faeb5975411b3a8f2cb61433000cb556929c2841\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d481ec98544402a9cf598d283cdfba15c7303430ffe236fd866356cc37cf07cf\"" Jul 11 05:25:28.571448 containerd[1570]: time="2025-07-11T05:25:28.571427090Z" level=info msg="StartContainer for \"d481ec98544402a9cf598d283cdfba15c7303430ffe236fd866356cc37cf07cf\"" Jul 11 05:25:28.572236 containerd[1570]: time="2025-07-11T05:25:28.572210324Z" level=info msg="connecting to shim d481ec98544402a9cf598d283cdfba15c7303430ffe236fd866356cc37cf07cf" address="unix:///run/containerd/s/73aad234576622ddf7247f0e3e337017f60e14d14eb5fc2d33e4346c4f37d9c2" protocol=ttrpc version=3 Jul 11 05:25:28.592557 systemd[1]: Started cri-containerd-d481ec98544402a9cf598d283cdfba15c7303430ffe236fd866356cc37cf07cf.scope - libcontainer 
container d481ec98544402a9cf598d283cdfba15c7303430ffe236fd866356cc37cf07cf. Jul 11 05:25:28.597191 kubelet[2689]: I0711 05:25:28.597153 2689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b2eaae6-b54a-4e1b-857d-10bc190f4db7" path="/var/lib/kubelet/pods/1b2eaae6-b54a-4e1b-857d-10bc190f4db7/volumes" Jul 11 05:25:28.597760 kubelet[2689]: I0711 05:25:28.597737 2689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9030cb4-58cf-4b84-b64a-69e9ba0e2a87" path="/var/lib/kubelet/pods/d9030cb4-58cf-4b84-b64a-69e9ba0e2a87/volumes" Jul 11 05:25:28.622362 containerd[1570]: time="2025-07-11T05:25:28.622312133Z" level=info msg="StartContainer for \"d481ec98544402a9cf598d283cdfba15c7303430ffe236fd866356cc37cf07cf\" returns successfully" Jul 11 05:25:28.630295 systemd[1]: cri-containerd-d481ec98544402a9cf598d283cdfba15c7303430ffe236fd866356cc37cf07cf.scope: Deactivated successfully. Jul 11 05:25:28.631560 containerd[1570]: time="2025-07-11T05:25:28.631529366Z" level=info msg="received exit event container_id:\"d481ec98544402a9cf598d283cdfba15c7303430ffe236fd866356cc37cf07cf\" id:\"d481ec98544402a9cf598d283cdfba15c7303430ffe236fd866356cc37cf07cf\" pid:4528 exited_at:{seconds:1752211528 nanos:631111099}" Jul 11 05:25:28.631610 containerd[1570]: time="2025-07-11T05:25:28.631589748Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d481ec98544402a9cf598d283cdfba15c7303430ffe236fd866356cc37cf07cf\" id:\"d481ec98544402a9cf598d283cdfba15c7303430ffe236fd866356cc37cf07cf\" pid:4528 exited_at:{seconds:1752211528 nanos:631111099}" Jul 11 05:25:28.847302 kubelet[2689]: E0711 05:25:28.846619 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:25:28.848519 containerd[1570]: time="2025-07-11T05:25:28.848289732Z" level=info msg="CreateContainer within sandbox 
\"f0374558d322e2c4d7008d22faeb5975411b3a8f2cb61433000cb556929c2841\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 11 05:25:28.855035 containerd[1570]: time="2025-07-11T05:25:28.854998858Z" level=info msg="Container 5c2b17767478d2a27114ddddcf255e175417484f0506216fa28cf559061fd3ff: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:25:28.861354 containerd[1570]: time="2025-07-11T05:25:28.861305197Z" level=info msg="CreateContainer within sandbox \"f0374558d322e2c4d7008d22faeb5975411b3a8f2cb61433000cb556929c2841\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5c2b17767478d2a27114ddddcf255e175417484f0506216fa28cf559061fd3ff\"" Jul 11 05:25:28.861805 containerd[1570]: time="2025-07-11T05:25:28.861778135Z" level=info msg="StartContainer for \"5c2b17767478d2a27114ddddcf255e175417484f0506216fa28cf559061fd3ff\"" Jul 11 05:25:28.866733 containerd[1570]: time="2025-07-11T05:25:28.866702827Z" level=info msg="connecting to shim 5c2b17767478d2a27114ddddcf255e175417484f0506216fa28cf559061fd3ff" address="unix:///run/containerd/s/73aad234576622ddf7247f0e3e337017f60e14d14eb5fc2d33e4346c4f37d9c2" protocol=ttrpc version=3 Jul 11 05:25:28.890537 systemd[1]: Started cri-containerd-5c2b17767478d2a27114ddddcf255e175417484f0506216fa28cf559061fd3ff.scope - libcontainer container 5c2b17767478d2a27114ddddcf255e175417484f0506216fa28cf559061fd3ff. Jul 11 05:25:28.916371 containerd[1570]: time="2025-07-11T05:25:28.916330836Z" level=info msg="StartContainer for \"5c2b17767478d2a27114ddddcf255e175417484f0506216fa28cf559061fd3ff\" returns successfully" Jul 11 05:25:28.921686 systemd[1]: cri-containerd-5c2b17767478d2a27114ddddcf255e175417484f0506216fa28cf559061fd3ff.scope: Deactivated successfully. 
Jul 11 05:25:28.922275 containerd[1570]: time="2025-07-11T05:25:28.922087945Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5c2b17767478d2a27114ddddcf255e175417484f0506216fa28cf559061fd3ff\" id:\"5c2b17767478d2a27114ddddcf255e175417484f0506216fa28cf559061fd3ff\" pid:4575 exited_at:{seconds:1752211528 nanos:921816972}" Jul 11 05:25:28.922275 containerd[1570]: time="2025-07-11T05:25:28.922157795Z" level=info msg="received exit event container_id:\"5c2b17767478d2a27114ddddcf255e175417484f0506216fa28cf559061fd3ff\" id:\"5c2b17767478d2a27114ddddcf255e175417484f0506216fa28cf559061fd3ff\" pid:4575 exited_at:{seconds:1752211528 nanos:921816972}" Jul 11 05:25:29.850569 kubelet[2689]: E0711 05:25:29.850519 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:25:29.852786 containerd[1570]: time="2025-07-11T05:25:29.852744079Z" level=info msg="CreateContainer within sandbox \"f0374558d322e2c4d7008d22faeb5975411b3a8f2cb61433000cb556929c2841\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 11 05:25:29.862816 containerd[1570]: time="2025-07-11T05:25:29.862766903Z" level=info msg="Container 84abd5e265dc2d7e2fe74b3564622a31e8080f0a7d343e705824f0b9d5f83aa7: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:25:29.873477 containerd[1570]: time="2025-07-11T05:25:29.873417033Z" level=info msg="CreateContainer within sandbox \"f0374558d322e2c4d7008d22faeb5975411b3a8f2cb61433000cb556929c2841\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"84abd5e265dc2d7e2fe74b3564622a31e8080f0a7d343e705824f0b9d5f83aa7\"" Jul 11 05:25:29.873931 containerd[1570]: time="2025-07-11T05:25:29.873898918Z" level=info msg="StartContainer for \"84abd5e265dc2d7e2fe74b3564622a31e8080f0a7d343e705824f0b9d5f83aa7\"" Jul 11 05:25:29.875171 containerd[1570]: time="2025-07-11T05:25:29.875143750Z" level=info 
msg="connecting to shim 84abd5e265dc2d7e2fe74b3564622a31e8080f0a7d343e705824f0b9d5f83aa7" address="unix:///run/containerd/s/73aad234576622ddf7247f0e3e337017f60e14d14eb5fc2d33e4346c4f37d9c2" protocol=ttrpc version=3 Jul 11 05:25:29.900539 systemd[1]: Started cri-containerd-84abd5e265dc2d7e2fe74b3564622a31e8080f0a7d343e705824f0b9d5f83aa7.scope - libcontainer container 84abd5e265dc2d7e2fe74b3564622a31e8080f0a7d343e705824f0b9d5f83aa7. Jul 11 05:25:29.938295 systemd[1]: cri-containerd-84abd5e265dc2d7e2fe74b3564622a31e8080f0a7d343e705824f0b9d5f83aa7.scope: Deactivated successfully. Jul 11 05:25:29.938845 containerd[1570]: time="2025-07-11T05:25:29.938809337Z" level=info msg="StartContainer for \"84abd5e265dc2d7e2fe74b3564622a31e8080f0a7d343e705824f0b9d5f83aa7\" returns successfully" Jul 11 05:25:29.939574 containerd[1570]: time="2025-07-11T05:25:29.939531198Z" level=info msg="received exit event container_id:\"84abd5e265dc2d7e2fe74b3564622a31e8080f0a7d343e705824f0b9d5f83aa7\" id:\"84abd5e265dc2d7e2fe74b3564622a31e8080f0a7d343e705824f0b9d5f83aa7\" pid:4618 exited_at:{seconds:1752211529 nanos:939067958}" Jul 11 05:25:29.940045 containerd[1570]: time="2025-07-11T05:25:29.940001532Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84abd5e265dc2d7e2fe74b3564622a31e8080f0a7d343e705824f0b9d5f83aa7\" id:\"84abd5e265dc2d7e2fe74b3564622a31e8080f0a7d343e705824f0b9d5f83aa7\" pid:4618 exited_at:{seconds:1752211529 nanos:939067958}" Jul 11 05:25:29.959366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84abd5e265dc2d7e2fe74b3564622a31e8080f0a7d343e705824f0b9d5f83aa7-rootfs.mount: Deactivated successfully. 
Jul 11 05:25:30.645232 kubelet[2689]: E0711 05:25:30.645172 2689 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 11 05:25:30.854885 kubelet[2689]: E0711 05:25:30.854840 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:25:30.856658 containerd[1570]: time="2025-07-11T05:25:30.856606423Z" level=info msg="CreateContainer within sandbox \"f0374558d322e2c4d7008d22faeb5975411b3a8f2cb61433000cb556929c2841\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 11 05:25:30.864655 containerd[1570]: time="2025-07-11T05:25:30.864610423Z" level=info msg="Container 2a2bde69b7669bca893635b1f6c679cdd006a74bc8f7309514d602b534a4d085: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:25:30.873694 containerd[1570]: time="2025-07-11T05:25:30.873651179Z" level=info msg="CreateContainer within sandbox \"f0374558d322e2c4d7008d22faeb5975411b3a8f2cb61433000cb556929c2841\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2a2bde69b7669bca893635b1f6c679cdd006a74bc8f7309514d602b534a4d085\"" Jul 11 05:25:30.874161 containerd[1570]: time="2025-07-11T05:25:30.874125479Z" level=info msg="StartContainer for \"2a2bde69b7669bca893635b1f6c679cdd006a74bc8f7309514d602b534a4d085\"" Jul 11 05:25:30.875089 containerd[1570]: time="2025-07-11T05:25:30.875061629Z" level=info msg="connecting to shim 2a2bde69b7669bca893635b1f6c679cdd006a74bc8f7309514d602b534a4d085" address="unix:///run/containerd/s/73aad234576622ddf7247f0e3e337017f60e14d14eb5fc2d33e4346c4f37d9c2" protocol=ttrpc version=3 Jul 11 05:25:30.899543 systemd[1]: Started cri-containerd-2a2bde69b7669bca893635b1f6c679cdd006a74bc8f7309514d602b534a4d085.scope - libcontainer container 
2a2bde69b7669bca893635b1f6c679cdd006a74bc8f7309514d602b534a4d085. Jul 11 05:25:30.925942 systemd[1]: cri-containerd-2a2bde69b7669bca893635b1f6c679cdd006a74bc8f7309514d602b534a4d085.scope: Deactivated successfully. Jul 11 05:25:30.926484 containerd[1570]: time="2025-07-11T05:25:30.926442291Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a2bde69b7669bca893635b1f6c679cdd006a74bc8f7309514d602b534a4d085\" id:\"2a2bde69b7669bca893635b1f6c679cdd006a74bc8f7309514d602b534a4d085\" pid:4657 exited_at:{seconds:1752211530 nanos:926129590}" Jul 11 05:25:30.927790 containerd[1570]: time="2025-07-11T05:25:30.927745932Z" level=info msg="received exit event container_id:\"2a2bde69b7669bca893635b1f6c679cdd006a74bc8f7309514d602b534a4d085\" id:\"2a2bde69b7669bca893635b1f6c679cdd006a74bc8f7309514d602b534a4d085\" pid:4657 exited_at:{seconds:1752211530 nanos:926129590}" Jul 11 05:25:30.935137 containerd[1570]: time="2025-07-11T05:25:30.935107719Z" level=info msg="StartContainer for \"2a2bde69b7669bca893635b1f6c679cdd006a74bc8f7309514d602b534a4d085\" returns successfully" Jul 11 05:25:30.948365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a2bde69b7669bca893635b1f6c679cdd006a74bc8f7309514d602b534a4d085-rootfs.mount: Deactivated successfully. 
Jul 11 05:25:31.860065 kubelet[2689]: E0711 05:25:31.860031 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:25:31.866327 containerd[1570]: time="2025-07-11T05:25:31.865218202Z" level=info msg="CreateContainer within sandbox \"f0374558d322e2c4d7008d22faeb5975411b3a8f2cb61433000cb556929c2841\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 11 05:25:31.887248 containerd[1570]: time="2025-07-11T05:25:31.887181936Z" level=info msg="Container aaeef75342cecf9053575b0ce2c39487e92884adfa06383627b845bd33d306dc: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:25:31.897242 containerd[1570]: time="2025-07-11T05:25:31.897182280Z" level=info msg="CreateContainer within sandbox \"f0374558d322e2c4d7008d22faeb5975411b3a8f2cb61433000cb556929c2841\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"aaeef75342cecf9053575b0ce2c39487e92884adfa06383627b845bd33d306dc\"" Jul 11 05:25:31.897899 containerd[1570]: time="2025-07-11T05:25:31.897857515Z" level=info msg="StartContainer for \"aaeef75342cecf9053575b0ce2c39487e92884adfa06383627b845bd33d306dc\"" Jul 11 05:25:31.899058 containerd[1570]: time="2025-07-11T05:25:31.899019614Z" level=info msg="connecting to shim aaeef75342cecf9053575b0ce2c39487e92884adfa06383627b845bd33d306dc" address="unix:///run/containerd/s/73aad234576622ddf7247f0e3e337017f60e14d14eb5fc2d33e4346c4f37d9c2" protocol=ttrpc version=3 Jul 11 05:25:31.925610 systemd[1]: Started cri-containerd-aaeef75342cecf9053575b0ce2c39487e92884adfa06383627b845bd33d306dc.scope - libcontainer container aaeef75342cecf9053575b0ce2c39487e92884adfa06383627b845bd33d306dc. 
Jul 11 05:25:31.964199 containerd[1570]: time="2025-07-11T05:25:31.964157740Z" level=info msg="StartContainer for \"aaeef75342cecf9053575b0ce2c39487e92884adfa06383627b845bd33d306dc\" returns successfully" Jul 11 05:25:31.975191 kubelet[2689]: I0711 05:25:31.975132 2689 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-11T05:25:31Z","lastTransitionTime":"2025-07-11T05:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 11 05:25:32.035007 containerd[1570]: time="2025-07-11T05:25:32.034945503Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aaeef75342cecf9053575b0ce2c39487e92884adfa06383627b845bd33d306dc\" id:\"203e794d5bc378b34707653dc932c2c1c2f9ef503c509daa134031d8cc540afc\" pid:4724 exited_at:{seconds:1752211532 nanos:34501778}" Jul 11 05:25:32.387438 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jul 11 05:25:32.866211 kubelet[2689]: E0711 05:25:32.866163 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:25:32.879532 kubelet[2689]: I0711 05:25:32.879405 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bbw84" podStartSLOduration=4.879365757 podStartE2EDuration="4.879365757s" podCreationTimestamp="2025-07-11 05:25:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:25:32.879042746 +0000 UTC m=+82.370222414" watchObservedRunningTime="2025-07-11 05:25:32.879365757 +0000 UTC m=+82.370545415" Jul 11 05:25:34.488819 kubelet[2689]: E0711 05:25:34.488756 2689 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:25:34.714687 containerd[1570]: time="2025-07-11T05:25:34.714593564Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aaeef75342cecf9053575b0ce2c39487e92884adfa06383627b845bd33d306dc\" id:\"601d39260019e651a6635927d7f1cf26aec3350f14db6591c8e32bef1d8fbc6b\" pid:5050 exit_status:1 exited_at:{seconds:1752211534 nanos:713310987}" Jul 11 05:25:35.395666 systemd-networkd[1473]: lxc_health: Link UP Jul 11 05:25:35.396541 systemd-networkd[1473]: lxc_health: Gained carrier Jul 11 05:25:36.489983 kubelet[2689]: E0711 05:25:36.489945 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:25:36.832203 containerd[1570]: time="2025-07-11T05:25:36.832066597Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aaeef75342cecf9053575b0ce2c39487e92884adfa06383627b845bd33d306dc\" id:\"44e004ca60988845bc85c1fd754b46ea7fb8846296671ec25d1281a3a3da2a53\" pid:5268 exited_at:{seconds:1752211536 nanos:831463976}" Jul 11 05:25:36.834258 kubelet[2689]: E0711 05:25:36.834205 2689 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:57806->127.0.0.1:41765: write tcp 127.0.0.1:57806->127.0.0.1:41765: write: broken pipe Jul 11 05:25:36.876893 kubelet[2689]: E0711 05:25:36.876854 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:25:37.447655 systemd-networkd[1473]: lxc_health: Gained IPv6LL Jul 11 05:25:37.878101 kubelet[2689]: E0711 05:25:37.878043 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 
11 05:25:38.594120 kubelet[2689]: E0711 05:25:38.594081 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 05:25:38.920000 containerd[1570]: time="2025-07-11T05:25:38.919892864Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aaeef75342cecf9053575b0ce2c39487e92884adfa06383627b845bd33d306dc\" id:\"52495b8166833b3a0fd6e8159a893836063051f4936c0a3d0b9f77bbeb7cd94f\" pid:5301 exited_at:{seconds:1752211538 nanos:919488751}" Jul 11 05:25:41.079458 containerd[1570]: time="2025-07-11T05:25:41.079411997Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aaeef75342cecf9053575b0ce2c39487e92884adfa06383627b845bd33d306dc\" id:\"a0fb3a004add59e84158554348e01513eadd9aa3cce827f18d7ee731fe564784\" pid:5326 exited_at:{seconds:1752211541 nanos:78813001}" Jul 11 05:25:41.084802 sshd[4464]: Connection closed by 10.0.0.1 port 39050 Jul 11 05:25:41.085257 sshd-session[4461]: pam_unix(sshd:session): session closed for user core Jul 11 05:25:41.089183 systemd[1]: sshd@25-10.0.0.94:22-10.0.0.1:39050.service: Deactivated successfully. Jul 11 05:25:41.090900 systemd[1]: session-26.scope: Deactivated successfully. Jul 11 05:25:41.091642 systemd-logind[1546]: Session 26 logged out. Waiting for processes to exit. Jul 11 05:25:41.092666 systemd-logind[1546]: Removed session 26. Jul 11 05:25:42.593188 kubelet[2689]: E0711 05:25:42.593126 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"